[ { "msg_contents": "Hi again!\n\nI have finally got my Ubuntu VirtualBox VM running PostgreSQL with PL/Python\nand am now looking at performance.\n\nSo here's the scenario:\n\nWe have a great big table:\n\ncse=# \\d nlpg.match_data\n Table \"nlpg.match_data\"\n Column | Type |\nModifiers\n-------------------+----------+--------------------------------------------------------------------------\n premise_id | integer |\n usrn | bigint |\n org | text |\n sao | text |\n level | text |\n pao | text |\n name | text |\n street | text |\n town | text |\n postcode | text |\n match_data_id | integer | not null default\nnextval('nlpg.match_data_match_data_id_seq1'::regclass)\n addr_str | text |\n tssearch_name | tsvector |\n tssearch_street | tsvector |\n tssearch_addr_str | tsvector |\nIndexes:\n \"match_data_pkey1\" PRIMARY KEY, btree (match_data_id)\n \"index_match_data_mid\" btree (match_data_id)\n \"index_match_data_pid\" btree (premise_id)\n \"index_match_data_tssearch_addr_str\" gin (tssearch_addr_str)\n \"index_match_data_tssearch_name\" gin (tssearch_name)\n \"index_match_data_tssearch_street\" gin (tssearch_street)\n \"index_match_data_usrn\" btree (usrn)\n\nKEY NOTE:\nnlpg.match_data has approximately 27,000,000 rows..\n\nRunning this query:\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id <\n1000000;\n\nI get this:\n\n\"Index Scan using match_data_pkey1 on match_data (cost=0.00..1452207.14\nrows=1913756 width=302) (actual time=23.448..61559.652 rows=999999 loops=1)\"\n\" Index Cond: (match_data_id < 1000000)\"\n\"Total runtime: 403855.675 ms\"\n\nI copied a chunk of the table like this:\n\nCREATE TABLE nlpg.md_copy AS SELECT * FROM nlpg.match_data WHERE\nmatch_data_id < 1000000;\n\nThen ran the same query on the smaller copy table:\n\nEXPLAIN ANALYZE UPDATE nlpg.md_copy SET org = org WHERE match_data_id <\n1000000;\n\nAnd get this:\n\n\"Seq Scan on md_copy (cost=0.00..96935.99 rows=999899 width=301) (actual\ntime=26.745..33944.923 rows=999999 loops=1)\"\n\" Filter: (match_data_id < 1000000)\"\n\"Total runtime: 57169.419 ms\"\n\nAs you can see this is much faster per row with the smaller table chunk. I\nthen tried running the same first query with 10 times the number of rows:\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id <\n10000000;\n\nThis takes a massive amount of time (still running) and is definitely a\nnon-linear increase in the run time in comparison with the previous query.\n\nEXPLAIN UPDATE nlpg.match_data SET org = org WHERE match_data_id < 10000000;\n\"Seq Scan on match_data (cost=0.00..3980053.11 rows=19172782 width=302)\"\n\" Filter: (match_data_id < 10000000)\"\n\nAny suggestions on what I can do to speed things up? I presume if I turn off\nSequential Scan then it might default to Index Scan.. 
Is there anything\nelse?\n\nCheers,\nTom\n\nHi again!I have finally got my Ubuntu VirtualBox VM running PostgreSQL with PL/Python and am now looking at performance.So here's the scenario:We have a great big table: cse=# \\d nlpg.match_data\n                                         Table \"nlpg.match_data\"      Column       |   Type   |                                Modifiers                                 -------------------+----------+--------------------------------------------------------------------------\n premise_id        | integer  |  usrn              | bigint   |  org               | text     |  sao               | text     |  level             | text     |  pao               | text     |  name              | text     | \n street            | text     |  town              | text     |  postcode          | text     |  match_data_id     | integer  | not null default nextval('nlpg.match_data_match_data_id_seq1'::regclass)\n addr_str          | text     |  tssearch_name     | tsvector |  tssearch_street   | tsvector |  tssearch_addr_str | tsvector | Indexes:    \"match_data_pkey1\" PRIMARY KEY, btree (match_data_id)\n    \"index_match_data_mid\" btree (match_data_id)    \"index_match_data_pid\" btree (premise_id)    \"index_match_data_tssearch_addr_str\" gin (tssearch_addr_str)    \"index_match_data_tssearch_name\" gin (tssearch_name)\n    \"index_match_data_tssearch_street\" gin (tssearch_street)    \"index_match_data_usrn\" btree (usrn)KEY NOTE:nlpg.match_data has approximately 27,000,000 rows..Running this query:\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id < 1000000;I get this:\"Index Scan using match_data_pkey1 on match_data  (cost=0.00..1452207.14 rows=1913756 width=302) (actual time=23.448..61559.652 rows=999999 loops=1)\"\n\"  Index Cond: (match_data_id < 1000000)\"\"Total runtime: 403855.675 ms\"I copied a chunk of the table like this:CREATE TABLE nlpg.md_copy AS SELECT * FROM nlpg.match_data WHERE match_data_id < 1000000;\nThen ran the same query on the smaller copy table:EXPLAIN ANALYZE UPDATE nlpg.md_copy SET org = org WHERE match_data_id < 1000000;And get this:\"Seq Scan on md_copy  (cost=0.00..96935.99 rows=999899 width=301) (actual time=26.745..33944.923 rows=999999 loops=1)\"\n\"  Filter: (match_data_id < 1000000)\"\"Total runtime: 57169.419 ms\"As you can see this is much faster per row with the smaller table chunk. I then tried running the same first query with 10 times the number of rows:\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id < 10000000;This takes a massive amount of time (still running) and is definitely a non-linear increase in the run time in comparison with the previous query.\nEXPLAIN UPDATE nlpg.match_data SET org = org WHERE match_data_id < 10000000;\"Seq Scan on match_data  (cost=0.00..3980053.11 rows=19172782 width=302)\"\"  Filter: (match_data_id < 10000000)\"\nAny suggestions on what I can do to speed things up? I presume if I turn off Sequential Scan then it might default to Index Scan.. Is there anything else?Cheers,Tom", "msg_date": "Thu, 24 Jun 2010 11:45:25 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": true, "msg_subject": "Small Queries Really Fast, Large Queries Really Slow..." }, { "msg_contents": "> Any suggestions on what I can do to speed things up? I presume if I turn\n> off\n> Sequential Scan then it might default to Index Scan.. 
Is there anything\n> else?\n>\n> Cheers,\n> Tom\n\nWell, I doubt turning off the sequential scan will improve the performance\nin this case - actually the first case (running 400 sec) uses an index\nscan, while the 'fast' one uses sequential scan.\n\nActually I'd try exactly the oposite - disabling the index scan, i.e.\nforcing it to use sequential scan in the first case. You're selecting\nabout 4% of the rows, but we don't know how 'spread' are those rows\nthrough the table. It might happen PostgreSQL actually has to read all the\nblocks of the table.\n\nThis might be improved by clustering - create and index on the\n'match_data_id' colunm and then run\n\nCLUSTER match_data_id_idx ON match_data;\n\nThis will sort the table accoring to match_data_id column, which should\nimprove the performance. But it won't last forever - it degrades through\ntime, so you'll have to perform clustering once a while (and it locks the\ntable, so be careful).\n\nHow large is the table anyway? How many rows / pages are there? Try\nsomething like this\n\nSELECT reltuples, relpages FROM pg_class WHERE relname = 'match_data';\n\nMultiply the blocks by 8k and you'll get the occupied space. How much is\nit? How much memory (shared_buffers) is there?\n\nYou could try partitioning accoring to the match_data_id column, but there\nare various disadvantages related to foreign keys etc. and it's often a\nmajor change in the application, so I'd consider other solutions first.\n\nBTW I have no experience with running PostgreSQL inside a Virtual Box VM,\nso it might be another source of problems. I do remember we had some\nserious problems with I/O (network and disks) when running vmware, but it\nwas a long time ago and now it works fine. But maybe this the root cause?\nCan you run dstat / vmstat / iostat or something like that in the host OS\nto see which of the resources is causing problems (if any)?\n\nTomas\n\n", "msg_date": "Thu, 24 Jun 2010 13:20:04 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Small Queries Really Fast,\n Large Queries Really Slow..." } ]
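A quick diagnostic sketch tying together Tomas's suggestions above. The table, column and index names (nlpg.match_data, match_data_id, match_data_pkey1) come from Tom's \d output; the pg_relation_size call and the pg_stats correlation check are extras not shown in the thread, just standard catalog queries that answer the "how large is it / how spread out are the rows" questions:

-- size and row estimates; multiply relpages by 8kB for the on-disk size
SELECT reltuples, relpages,
       pg_size_pretty(pg_relation_size('nlpg.match_data')) AS table_size
FROM pg_class WHERE relname = 'match_data';

-- how closely heap order follows match_data_id (1.0 = already clustered)
SELECT attname, correlation
FROM pg_stats
WHERE schemaname = 'nlpg' AND tablename = 'match_data'
  AND attname = 'match_data_id';

-- re-order the table on its primary key index (locks the table while it runs)
CLUSTER match_data_pkey1 ON nlpg.match_data;

Running ANALYZE on nlpg.match_data afterwards lets the planner see the new ordering.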
[ { "msg_contents": "Hi,\n\nat the moment we encounter some performance problems with our database server.\n\nWe have a 12 GB RAM machine with intel i7-975 and using\n3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 GB)\" \nOne disk for the system and WAL etc. and one SW RAID-0 with two disks for \npostgresql data. Our database is about 24GB.\n\nOur munin graph reports at 9:00 a clock writes of 3000 blocks per second and \nreads of about 1000 blocks per second on our disk which holds the data \ndirectories of postgresql (WAL are on a different disk)\n\n3000 blocks ~ about 3 MB/s write\n1000 blocks ~ about 1 MB/s read\n\nAt the same time we have nearly 50% CPU I/O wait and only 12% user CPU load \n(so 4 of 8 cpu cores are in use for io wait)\n\nWe know, its a poor man disk setup (but we can not find a hoster with rather \nadvanced disk configuration at an affordable price). Anyway, we ran some tests \non it:\n\n\n# time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=3000000 && sync\"\n3000000+0 records in\n3000000+0 records out\n24576000000 bytes (25 GB) copied, 276.03 s, 89.0 MB/s\n\nreal\t4m48.658s\nuser\t0m0.580s\nsys\t0m51.579s\n\n# time dd if=bigfile of=/dev/null bs=8k\n3000000+0 records in\n3000000+0 records out\n24576000000 bytes (25 GB) copied, 222.841 s, 110 MB/s\n\nreal\t3m42.879s\nuser\t0m0.468s\nsys\t0m18.721s\n\n\n\nOf course, writing large chunks is quite a different usage pattern. But I am \nwondering that writing 3MB/s and reading 1 MB/s seams to be a limit if i can \nrun a test with 89 MB/s writing and 110MB/s reading.\n\nCan you give some hints, if this numbers seems to be reasonable? \n\nkind regards\nJanning\n\n\n\n\n", "msg_date": "Thu, 24 Jun 2010 14:43:33 +0200", "msg_from": "Janning <[email protected]>", "msg_from_op": true, "msg_subject": "Write performance" }, { "msg_contents": "On Thu, Jun 24, 2010 at 02:43:33PM +0200, Janning wrote:\n> Hi,\n> \n> at the moment we encounter some performance problems with our database server.\n> \n> We have a 12 GB RAM machine with intel i7-975 and using\n> 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 GB)\" \n> One disk for the system and WAL etc. and one SW RAID-0 with two disks for \n> postgresql data. Our database is about 24GB.\n> \n> Our munin graph reports at 9:00 a clock writes of 3000 blocks per second and \n> reads of about 1000 blocks per second on our disk which holds the data \n> directories of postgresql (WAL are on a different disk)\n> \n> 3000 blocks ~ about 3 MB/s write\n> 1000 blocks ~ about 1 MB/s read\n> \n> At the same time we have nearly 50% CPU I/O wait and only 12% user CPU load \n> (so 4 of 8 cpu cores are in use for io wait)\n> \n> We know, its a poor man disk setup (but we can not find a hoster with rather \n> advanced disk configuration at an affordable price). Anyway, we ran some tests \n> on it:\n> \n> \n> # time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=3000000 && sync\"\n> 3000000+0 records in\n> 3000000+0 records out\n> 24576000000 bytes (25 GB) copied, 276.03 s, 89.0 MB/s\n> \n> real\t4m48.658s\n> user\t0m0.580s\n> sys\t0m51.579s\n> \n> # time dd if=bigfile of=/dev/null bs=8k\n> 3000000+0 records in\n> 3000000+0 records out\n> 24576000000 bytes (25 GB) copied, 222.841 s, 110 MB/s\n> \n> real\t3m42.879s\n> user\t0m0.468s\n> sys\t0m18.721s\n> \n> \n> \n> Of course, writing large chunks is quite a different usage pattern. 
But I am \n> wondering that writing 3MB/s and reading 1 MB/s seams to be a limit if i can \n> run a test with 89 MB/s writing and 110MB/s reading.\n> \n> Can you give some hints, if this numbers seems to be reasonable? \n> \n> kind regards\n> Janning\n> \n\nYes, these are typical random I/O versus sequential I/O rates for\nhard drives. Your I/O is extremely under-powered relative to your\nCPU/memory. For DB servers, many times you need much more I/O\ninstead.\n\nCheers,\nKen\n", "msg_date": "Thu, 24 Jun 2010 07:47:34 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write performance" }, { "msg_contents": "On Thu, 24 Jun 2010, Janning wrote:\n> We have a 12 GB RAM machine with intel i7-975 and using\n> 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 GB)\"\n\nThose discs are 1.5TB, not 1.5GB.\n\n> One disk for the system and WAL etc. and one SW RAID-0 with two disks for\n> postgresql data. Our database is about 24GB.\n\nBeware of RAID-0 - make sure you can recover the data when (not if) a disc \nfails.\n\n> Our munin graph reports at 9:00 a clock writes of 3000 blocks per second and\n> reads of about 1000 blocks per second on our disk which holds the data\n> directories of postgresql (WAL are on a different disk)\n>\n> 3000 blocks ~ about 3 MB/s write\n> 1000 blocks ~ about 1 MB/s read\n>\n> At the same time we have nearly 50% CPU I/O wait and only 12% user CPU load\n> (so 4 of 8 cpu cores are in use for io wait)\n\nNot quite sure what situation you are measuring these figures under. \nHowever, as a typical figure, let's say you are doing random access with \n8kB blocks (as in Postgres), and the access time on your drive is 8.5ms \n(as with these drives).\n\nFor each drive, you will be able to read/write approximately 8kB / \n0.0085s, giving 941kB per second. If you have multiple processes all doing \nrandom access, then you may be able to utilise both discs and get double \nthat.\n\n> Of course, writing large chunks is quite a different usage pattern. But I am\n> wondering that writing 3MB/s and reading 1 MB/s seams to be a limit if i can\n> run a test with 89 MB/s writing and 110MB/s reading.\n\nThat's quite right, and typical performance figures for a drive like that.\n\nMatthew\n\n-- \n Don't criticise a man until you have walked a mile in his shoes; and if\n you do at least he will be a mile behind you and bare footed.\n", "msg_date": "Thu, 24 Jun 2010 13:53:57 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write performance" }, { "msg_contents": "On Thursday 24 June 2010 14:53:57 Matthew Wakeling wrote:\n> On Thu, 24 Jun 2010, Janning wrote:\n> > We have a 12 GB RAM machine with intel i7-975 and using\n> > 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 GB)\"\n>\n> Those discs are 1.5TB, not 1.5GB.\n\nsorry, my fault.\n\n> > One disk for the system and WAL etc. and one SW RAID-0 with two disks for\n> > postgresql data. Our database is about 24GB.\n>\n> Beware of RAID-0 - make sure you can recover the data when (not if) a disc\n> fails.\n\noh sorry again, its a raid-1 of course. 
shame on me.\n\n> > Our munin graph reports at 9:00 a clock writes of 3000 blocks per second\n> > and reads of about 1000 blocks per second on our disk which holds the\n> > data directories of postgresql (WAL are on a different disk)\n> >\n> > 3000 blocks ~ about 3 MB/s write\n> > 1000 blocks ~ about 1 MB/s read\n> >\n> > At the same time we have nearly 50% CPU I/O wait and only 12% user CPU\n> > load (so 4 of 8 cpu cores are in use for io wait)\n>\n> Not quite sure what situation you are measuring these figures under.\n> However, as a typical figure, let's say you are doing random access with\n> 8kB blocks (as in Postgres), and the access time on your drive is 8.5ms\n> (as with these drives).\n>\n> For each drive, you will be able to read/write approximately 8kB /\n> 0.0085s, giving 941kB per second. If you have multiple processes all doing\n> random access, then you may be able to utilise both discs and get double\n> that.\n\nSo with your calculation I have a maximum of 2MB/s random access. So i really \nneed to upgrade my disk configuration!\n\n> > Of course, writing large chunks is quite a different usage pattern. But I\n> > am wondering that writing 3MB/s and reading 1 MB/s seams to be a limit if\n> > i can run a test with 89 MB/s writing and 110MB/s reading.\n>\n> That's quite right, and typical performance figures for a drive like that.\n\nthanks for your help.\n\nkind regards \nJanning\n\n> Matthew\n>\n> --\n> Don't criticise a man until you have walked a mile in his shoes; and if\n> you do at least he will be a mile behind you and bare footed.\n\n", "msg_date": "Thu, 24 Jun 2010 15:16:05 +0200", "msg_from": "Janning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Write performance" }, { "msg_contents": "thanks for your quick response, kenneth\n\nOn Thursday 24 June 2010 14:47:34 you wrote:\n> On Thu, Jun 24, 2010 at 02:43:33PM +0200, Janning wrote:\n> > Hi,\n> >\n> > at the moment we encounter some performance problems with our database\n> > server.\n> >\n> > We have a 12 GB RAM machine with intel i7-975 and using\n> > 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 GB)\"\n> > One disk for the system and WAL etc. and one SW RAID-0 with two disks for\n> > postgresql data. Our database is about 24GB.\n[...]\n> Your I/O is extremely under-powered relative to your\n> CPU/memory. For DB servers, many times you need much more I/O\n> instead.\n\nSo at the moment we are using this machine as our primary database server:\nhttp://www.hetzner.de/en/hosting/produkte_rootserver/eq9/\n\nSadly, our hoster is not offering advanced disk setup. Now we have two options\n\n1. buying a server on our own and renting a co-location.\nI fear we do not know enough about hardware to vote for this option. I think \nfor co-locating your own server one should have more knowledge about hardware.\n\n2. renting a server from a hoster with an advanced disk setup.\nCan anybody recommend a good hosting solution in germany with a good disk \nsetup for postgresql? 
\n\n\nkind regards\nJanning\n\n", "msg_date": "Thu, 24 Jun 2010 15:30:53 +0200", "msg_from": "Janning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Write performance" }, { "msg_contents": "On Thursday 24 June 2010 15:16:05 Janning wrote:\n> On Thursday 24 June 2010 14:53:57 Matthew Wakeling wrote:\n> > On Thu, 24 Jun 2010, Janning wrote:\n> > > We have a 12 GB RAM machine with intel i7-975 and using\n> > > 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 TB)\"\n> > >\n> > For each drive, you will be able to read/write approximately 8kB /\n> > 0.0085s, giving 941kB per second. If you have multiple processes all\n> > doing random access, then you may be able to utilise both discs and get\n> > double that.\n>\n> So with your calculation I have a maximum of 2MB/s random access. So i\n> really need to upgrade my disk configuration!\n\ni was looking at tomshardware.com and the fastest disk is\n\n Maxtor Atlas 15K II * 8K147S0,SAS,147 GB, 16 MB Cache,15000 rpm\n\nwith 5.5 ms random access time. \n\nSo even if i switch to those disks i can only reach a perfomace gain of 1.5, \nright? \n\nTo achieve a better disk performance by factor of ten, i need a raid-10 setup \nwith 12 disks (so i have 6 raid-1 bundles). Or are there other factors with \nhigh end disks? \n\nkind regards \nJanning\n\n", "msg_date": "Thu, 24 Jun 2010 15:45:31 +0200", "msg_from": "Janning Vygen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write performance" }, { "msg_contents": "As others have already pointed out, your disk performance here is \ncompletely typical of a single pair of drives doing random read/write \nactivity. So the question you should be asking is how to reduce the \namount of reading and writing needed to run your application. The \nsuggestions at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server address \nthat. Increases to shared_buffers and checkpoint_segments in particular \ncan dramatically reduce the amount of I/O needed to run an application. \nOn the last server I turned, random reads went from a constant stream of \n1MB/s (with default value of shared_buffers at 32MB) to an average of \n0.1MB/s just by adjusting those two parameters upwards via those guidelines.\n\nIf you haven't already made large increases to those values, I'd suggest \nstarting there before presuming you must get a different disk setup.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 24 Jun 2010 09:57:50 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write performance" }, { "msg_contents": "On 2010-06-24 15:45, Janning Vygen wrote:\n> On Thursday 24 June 2010 15:16:05 Janning wrote:\n> \n>> On Thursday 24 June 2010 14:53:57 Matthew Wakeling wrote:\n>> \n>>> On Thu, 24 Jun 2010, Janning wrote:\n>>> \n>>>> We have a 12 GB RAM machine with intel i7-975 and using\n>>>> 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 TB)\"\n>>>>\n>>>> \n>>> For each drive, you will be able to read/write approximately 8kB /\n>>> 0.0085s, giving 941kB per second. If you have multiple processes all\n>>> doing random access, then you may be able to utilise both discs and get\n>>> double that.\n>>> \n>> So with your calculation I have a maximum of 2MB/s random access. 
So i\n>> really need to upgrade my disk configuration!\n>> \n> i was looking at tomshardware.com and the fastest disk is\n>\n> Maxtor Atlas 15K II * 8K147S0,SAS,147 GB, 16 MB Cache,15000 rpm\n>\n> with 5.5 ms random access time.\n>\n> So even if i switch to those disks i can only reach a perfomace gain of 1.5,\n> right?\n>\n> To achieve a better disk performance by factor of ten, i need a raid-10 setup\n> with 12 disks (so i have 6 raid-1 bundles). Or are there other factors with\n> high end disks?\n> \n\nWell. On the write-side, you can add in a Raid controller with Battery \nbacked\nwrite cache to not make the writes directly hit disk. This improves\nthe amount of writing you can do.\n\nOn the read-side you can add more memory to your server so a significant\npart of your most active dataset is cached in memory.\n\nIt depends on the actual sizes and workload what gives the most benefit\nfor you.\n\n-- \nJesper\n", "msg_date": "Thu, 24 Jun 2010 19:02:45 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write performance" }, { "msg_contents": "\nOn Jun 24, 2010, at 6:16 AM, Janning wrote:\n\n> On Thursday 24 June 2010 14:53:57 Matthew Wakeling wrote:\n>> On Thu, 24 Jun 2010, Janning wrote:\n>>> We have a 12 GB RAM machine with intel i7-975 and using\n>>> 3 disks \"Seagate Barracuda 7200.11, ST31500341AS (1.5 GB)\"\n>> \n>> Those discs are 1.5TB, not 1.5GB.\n> \n> sorry, my fault.\n> \n>>> One disk for the system and WAL etc. and one SW RAID-0 with two disks for\n>>> postgresql data. Our database is about 24GB.\n>> \n>> Beware of RAID-0 - make sure you can recover the data when (not if) a disc\n>> fails.\n> \n> oh sorry again, its a raid-1 of course. shame on me.\n\nIf your WAL is not on RAID but your data is, you will lose data if the WAL log drive dies. You will then have a difficult time recovering data from the data drives even though they are RAID protected. Most likely indexes and some data will be corrupted since the last checkpoint. I have lost a WAL before, and the result was a lot of corrupted system indexes that had to be rebuilt in single user mode, and one system table (stats related) that had to be purged and regenerated from scratch. This was not fun. Most of the data was fine, but the cleanup is messy if you lose WAL, and there is no guarantee that your data is safe if you don't have the WAL available.\n\n\n", "msg_date": "Fri, 25 Jun 2010 10:30:06 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write performance" } ]
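A small follow-up sketch for Greg's advice about shared_buffers and checkpoint_segments reducing write traffic. pg_stat_bgwriter and current_setting() are not mentioned in the thread -- this assumes a PostgreSQL 8.3-or-later server, which matches the era being discussed -- but they show where dirty-buffer writes are actually coming from and what the server is currently configured with:

-- are writes happening at checkpoints, via the bgwriter, or directly by backends?
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

-- confirm the settings being discussed
SELECT current_setting('shared_buffers')               AS shared_buffers,
       current_setting('checkpoint_segments')          AS checkpoint_segments,
       current_setting('checkpoint_completion_target') AS checkpoint_completion_target;

Sampling the first query before and after a busy period and looking at the deltas is what matters: a large buffers_backend share and a steadily climbing checkpoints_req suggest the cache and checkpoint settings are still too small for the write load.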
[ { "msg_contents": "A scary phenomenon is being exhibited by the server , which is the server\nis slurping all the swap suddenly , some of the relevant sar -r output are:\n\n\n\n10:30:01 AM kbmemfree kbmemused %memused kbbuffers kbcached\nkbswpfree kbswpused %swpused kbswpcad\n10:40:01 AM 979068 31892208 97.02 10588 28194876\n1781568 314872 15.02 66500\n10:50:01 AM 1791536 31079740 94.55 10480 27426512\n1782848 313592 14.96 43880\n11:00:01 AM 4678768 28192508 85.77 9692 27213312\n1784888 311552 14.86 33296\n11:10:01 AM 179208 32692068 99.45 3180 27569008\n1725136 371304 17.71 65444\n11:20:01 AM 225604 32645672 99.31 2604 29817192\n1693672 402768 19.21 78312 <-------\n\n11:30:01 AM 520224 32351052 98.42 1780 26863576\n0 2096440 100.00 1585772 <------ within 10mins\n11:40:02 AM 483532 32387744 98.53 2672 27220404\n0 2096440 100.00 43876\n11:50:01 AM 162700 32708576 99.51 3316 27792540\n0 2096440 100.00 43708\n12:00:01 PM 420176 32451100 98.72 3772 28181316\n0 2096440 100.00 43708\n12:10:01 PM 331624 32539652 98.99 3236 27857760\n0 2096440 100.00 0\n12:20:01 PM 1023428 31847848 96.89 4632 27450504\n0 2096440 100.00 0\n12:30:01 PM 763296 32107980 97.68 4988 28270704\n0 2096440 100.00 0\n12:40:01 PM 770280 32100996 97.66 5260 28423292\n0 2096440 100.00 0\n\nThen i added more swap made it 4GB from 2GB\n\n02:10:05 PM 8734144 24137132 73.43 5532 21219972\n2096788 2096124 49.99 52\n02:12:01 PM 5989044 26882232 81.78 6108 23606680\n2096788 2096124 49.99 52\n02:14:01 PM 1517724 31353552 95.38 6320 26988280\n2096788 2096124 49.99 52\n02:16:01 PM 316692 32554584 99.04 6516 28840264\n1844856 2348056 56.00 251984\n02:18:01 PM 450672 32420604 98.63 7748 27238712\n0 4192912 100.00 2096840 <---- all swap gone.\n02:20:01 PM 164388 32706888 99.50 7556 27118104\n0 4192912 100.00 2096840\n02:22:01 PM 848544 32022732 97.42 6212 26718712\n0 4192912 100.00 2096840\n02:24:01 PM 231332 32639944 99.30 6136 27276720\n0 4192912 100.00 2096840\n02:26:01 PM 639560 32231716 98.05 5608 27029372\n0 4192912 100.00 2096840\n02:28:01 PM 868824 32002452 97.36 4648 26253996\n0 4192912 100.00 2096840\n.......\n03:04:01 PM 854408 32016868 97.40 4976 27182140\n0 4192912 100.00 0\n03:06:01 PM 1571904 31299372 95.22 5184 27513232\n0 4192912 100.00 0\n03:08:02 PM 304600 32566676 99.07 5420 27850780\n0 4192912 100.00 0\n03:10:01 PM 915352 31955924 97.22 5632 28076320\n0 4192912 100.00 0\n03:12:01 PM 705132 32166144 97.85 5680 28057444\n0 4192912 100.00 0\n03:14:01 PM 369516 32501760 98.88 6136 27684364\n0 4192912 100.00 0\n\nin vmstat the system does not seems to be swapping\nvmstat 5\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n24 2 4192912 947796 6036 27785324 1 0 451 208 0 0\n50 6 39 5 0\n22 3 4192912 1028956 6044 27795728 0 0 1730 555 13445\n14736 67 12 17 4 0\n24 0 4192912 877508 6052 27806172 0 0 1595 2292 13334 15666\n67 9 19 5 0\n14 8 4192912 820432 6068 27819756 0 0 2331 1351 13208 16192\n66 9 14 11 0\n23 1 4192912 925960 6076 27831644 0 0 1932 1584 13144 16291\n71 9 14 5 0\n 2 3 4192912 895288 6084 27846432 0 0 2496 991 13450 16303\n70 9 13 8 0\n17 0 4192912 936252 6092 27859868 0 0 2122 826 13438 16233\n69 9 17 5 0\n 8 1 4192912 906164 6100 27873640 0 0 2277 858 13440 16235\n63 8 19 10 0\n\nI reduced work_mem from 4GB to 2GB to 512MB (now). 
I clearly remember that this\nabnormal consumption of swap was NOT there even when work_mem was 4GB.\neg during happier times swap utilisation was: http://pastebin.com/bnE1pFZ9\n\nthe question is whats making postgres slurp the swap? i am posting my\ncurrent postgresql.conf\nonce again.\n\n# cat postgresql.conf | grep -v \"^\\s*#\" | grep -v \"^\\s*$\"\nlisten_addresses = '*' # what IP address(es) to listen on;\nport = 5432 # (change requires restart)\nmax_connections = 300 # (change requires restart)\nshared_buffers = 10GB # min 128kB\nwork_mem = 512MB # min 64kB\nfsync = on # turns forced synchronization on or off\nsynchronous_commit = on # immediate fsync at commit\ncheckpoint_segments = 30 # in logfile segments, min 1, 16MB each\narchive_mode = on # allows archiving to be done\narchive_command = '/opt/scripts/archive_wal.sh %p %f '\narchive_timeout = 600 # force a logfile segment switch after this\neffective_cache_size = 18GB\nconstraint_exclusion = on # on, off, or partition\nlogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = '/var/log/postgresql' # directory where log\nfiles are written,\nlog_filename = 'postgresql.log' # log file name pattern,\nlog_truncate_on_rotation = on # If on, an existing log file of the\nlog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_error_verbosity = verbose # terse, default, or verbose messages\nlog_min_duration_statement = 5000 # -1 is disabled, 0 logs all statements\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system\nerror message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\nadd_missing_from = on\ncustom_variable_classes = 'general' # list of custom\nvariable class names\ngeneral.report_level = ''\ngeneral.disable_audittrail2 = ''\ngeneral.employee=''\n", "msg_date": "Fri, 25 Jun 2010 15:25:39 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "sudden spurt in swap utilization (was:cpu bound postgresql setup.)" }, { "msg_contents": "On Fri, 2010-06-25 at 15:25 +0530, Rajesh Kumar Mallah wrote:\n> shared_buffers = 10GB # min 128kB\n> work_mem = 512MB # min 64kB \n\nThese are still pretty high IMHO. How many *concurrent* connections do\nyou have?\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Fri, 25 Jun 2010 14:26:59 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound postgresql\n setup.)" }, { "msg_contents": "Rajesh Kumar Mallah wrote:\n> A scary phenomenon is being exhibited by the server , which is the server\n> is slurping all the swap suddenly\n> \n> 8 1 4192912 906164 6100 27873640 0 0 2277 858 13440 16235\n> 63 8 19 10 0\n>\n> I reduced work_mem from 4GB to 2GB to 512MB (now). I clearly remember that this\n> abnormal consumption of swap was NOT there even when work_mem was 4GB.\n> eg during happier times swap utilisation was: http://pastebin.com/bnE1pFZ9\n> \n> the question is whats making postgres slurp the swap? 
i am posting my\n> current postgresql.conf\n> once again.\n>\n> # cat postgresql.conf | grep -v \"^\\s*#\" | grep -v \"^\\s*$\"\n> listen_addresses = '*' # what IP address(es) to listen on;\n> port = 5432 # (change requires restart)\n> max_connections = 300 # (change requires restart)\n> \nHello Rajesh,\n\nIn constrast with e.g. shared_buffers and effective_cache_size, work_mem \nis amount of memory per 'thing' (e.g. order/group by) that wants some \nworking memory, so even a single backend can use several pieces of \nwork_mem memory.\n\nLooking at your postgresql.conf, other memory values seem a bit too high \nas well for a 32GB ram server. It is probably a good idea to use pgtune \n(on pgfoundry) to get some reasonable ball park settings for your hardware.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 25 Jun 2010 13:28:57 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound postgresql\n setup.)" }, { "msg_contents": "Dear List,\n\nHmmm , lemme test efficacy of pg_tune.\nI would reduce shared buffers also.\n\nregarding concurrent queries:\n\nits now non business hours and\nSELECT procpid,current_query from pg_stat_activity where\ncurrent_query not ilike '%idle%' ;\nis just 5-10, i am yet to measure it during business hours.\n\nWarm Regds\nRajesh Kumar Mallah.\n\nOn Fri, Jun 25, 2010 at 4:58 PM, Yeb Havinga <[email protected]> wrote:\n> Rajesh Kumar Mallah wrote:\n>>\n>> A scary phenomenon is being exhibited by the server , which is the server\n>> is slurping all the swap suddenly\n>> 8 1 4192912 906164 6100 27873640 0 0 2277 858 13440 16235\n>> 63 8 19 10 0\n>>\n>> I reduced work_mem from 4GB to 2GB to 512MB (now). I clearly remember that\n>> this\n>> abnormal consumption of swap was NOT there even when work_mem was 4GB.\n>> eg during happier times swap utilisation was: http://pastebin.com/bnE1pFZ9\n>> the question is whats making postgres slurp the swap? i am posting my\n>> current postgresql.conf\n>> once again.\n>>\n>> # cat postgresql.conf | grep -v \"^\\s*#\" | grep -v \"^\\s*$\"\n>> listen_addresses = '*' # what IP address(es) to listen on;\n>> port = 5432 # (change requires restart)\n>> max_connections = 300 # (change requires restart)\n>>\n>\n> Hello Rajesh,\n>\n> In constrast with e.g. shared_buffers and effective_cache_size, work_mem is\n> amount of memory per 'thing' (e.g. order/group by) that wants some working\n> memory, so even a single backend can use several pieces of work_mem memory.\n>\n> Looking at your postgresql.conf, other memory values seem a bit too high as\n> well for a 32GB ram server. It is probably a good idea to use pgtune (on\n> pgfoundry) to get some reasonable ball park settings for your hardware.\n>\n> regards,\n> Yeb Havinga\n>\n>\n", "msg_date": "Fri, 25 Jun 2010 19:44:48 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> wrote:\n \n> its now non business hours and\n> SELECT procpid,current_query from pg_stat_activity where\n> current_query not ilike '%idle%' ;\n> is just 5-10, i am yet to measure it during business hours.\n \nBe careful about '<IDLE> in transaction' status. 
Those are a\nproblem if the transaction remains active for very long, because\nvacuum (autovacuum or otherwise) can't free space for dead rows\nwhich could still be visible to the '<IDLE> in transaction'\nconnection. It's normal to see this status briefly between\nstatements in a transaction, but it's a problem if a connection just\nsits there in this status.\n \n-Kevin\n", "msg_date": "Fri, 25 Jun 2010 09:32:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "I changed shared_buffers from 10G to 4G ,\nswap usage has almost become nil.\n\n# free\n total used free shared buffers cached\nMem: 32871276 24575824 8295452 0 11064 22167324\n-/+ buffers/cache: 2397436 30473840\nSwap: 4192912 352 4192560\n\nI also observed that there was a huge IO wait and load spike initially\nwhich gradually reduced to normal levels. Now things seems to be\nfine. but real test shall be during business hours.\n\nvmstat output:\nhttp://pastebin.com/ygu8gUhS\n\nthe iowait now is very respectable < 10% and CPU is idling most of\nthe time.\n\n# vmstat 10\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n 2 1 352 8482444 11336 22299100 1 0 450 208 0 0\n50 6 39 5 0\n 4 0 352 8393840 11352 22304484 0 0 480 163 9260 12717\n32 4 62 3 0\n 5 1 352 8474788 11360 22308980 0 0 304 445 8295 12358\n28 4 67 2 0\n 3 0 352 8370672 11376 22316676 0 0 648 158 8760 13214\n38 4 55 3 0\n11 0 352 8193824 11392 22323572 0 0 621 577 8800 13163\n37 4 56 3 0\n 2 0 352 8229012 11408 22326664 0 0 169 405 9588 13696\n34 4 61 1 0\n 6 1 352 8319176 11424 22333144 0 0 559 170 8830 12929\n32 4 61 3 0\n\nI shall also try pgtune in a while.\n", "msg_date": "Fri, 25 Jun 2010 20:44:41 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "Dear List,\n\npgtune suggests the following:\n(current value are in braces via reason) , (*) indicates significant\ndifference from current value.\n\n default_statistics_target = 50 # pgtune wizard 2010-06-25 (current 100\nvia default)\n(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB via default)\n checkpoint_completion_target = 0.9 # pgtune wizard 2010-06-25 (0.5 via\ndefault)\n(*) effective_cache_size = 22GB # pgtune wizard 2010-06-25 (18GB ,\nspecified)\n work_mem = 192MB # pgtune wizard 2010-06-25 (256MB , specified)\n(*) wal_buffers = 8MB # pgtune wizard 2010-06-25 ( 64kb , via default)\n checkpoint_segments = 16 # pgtune wizard 2010-06-25 (30 , specified)\n shared_buffers = 7680MB # pgtune wizard 2010-06-25 ( 4096 MB ,\nspecified)\n(*) max_connections = 80 # pgtune wizard 2010-06-25 ( 300 , ;-) specified )\n\nwhen i reduce max_connections i start getting errors, i will see again\nconcurrent connections\nduring business hours. lot of our connections are in <IDLE in transaction\nstate> during business\nthis peculiar behavior of mod_perl servers have been discussed in past i\nthink. dont' remember\nif there was any resolution.\n\nDear List,pgtune suggests the following:(current value are in braces via reason) , (*) indicates significant difference from current value.     
default_statistics_target = 50 # pgtune wizard 2010-06-25  (current 100 via default)\n(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB via default)     checkpoint_completion_target = 0.9 # pgtune wizard 2010-06-25 (0.5 via default)(*) effective_cache_size = 22GB # pgtune wizard 2010-06-25 (18GB , specified)\n     work_mem = 192MB # pgtune wizard 2010-06-25  (256MB , specified)(*) wal_buffers = 8MB # pgtune wizard 2010-06-25 ( 64kb , via default)     checkpoint_segments = 16 # pgtune wizard 2010-06-25 (30 , specified)\n     shared_buffers = 7680MB # pgtune wizard 2010-06-25 ( 4096 MB , specified)(*) max_connections = 80 # pgtune wizard 2010-06-25 ( 300 , ;-)  specified )when i reduce max_connections i start getting errors, i will see again concurrent connections\nduring business hours. lot of our connections are in <IDLE in transaction state> during businessthis peculiar  behavior of  mod_perl servers have been discussed in past i think. dont' rememberif there was any resolution.", "msg_date": "Fri, 25 Jun 2010 21:29:21 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "On 25/06/10 16:59, Rajesh Kumar Mallah wrote:\n> when i reduce max_connections i start getting errors, i will see again \n> concurrent connections\n> during business hours. lot of our connections are in <IDLE in \n> transaction state> during business\n> this peculiar behavior of mod_perl servers have been discussed in \n> past i think. dont' remember\n> if there was any resolution.\n\nIf connections spend any significant amount of time in <IDLE in \ntransaction> state, that might indicate you're not committing/rolling \nback after running queries - can you show an example of the code you're \nusing?\n\ne.g. something like my $dbh = DBI->connect(...); my $sth = \n$dbh->prepare(q{select ... }); $sth->fetchall_arrayref; $sth->rollback;\n\nTom\n\n", "msg_date": "Fri, 25 Jun 2010 17:39:05 +0100", "msg_from": "Tom Molesworth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound \tpostgresql\n\tsetup.)" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> wrote:\n \n> pgtune suggests the following:\n> (current value are in braces via reason) , (*) indicates\n> significant difference from current value.\n \nDifferent people have come to different conclusions on some of these\nsettings. I believe that's probably because differences in hardware\nand workloads actually make the best choice different in different\nenvironments, and it's not always clear how to characterize that to\nmake the best choice. If yo get conflicting advice on particular\nsettings, I would strongly recommend testing to establish what works\nbest for your actual workload on your hardware and OS.\n \nThat said, my experience suggests...\n \n> default_statistics_target = 50 # pgtune wizard 2010-06-25\n> (current 100 via default)\n \nHigher values add a little bit to the planning time of complex\nqueries, but reduce the risk of choosing a bad plan. 
I would\nrecommend leaving this at 100 unless you notice problems with long\nplan times.\n \n> (*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25\n> (16MB via default)\n \nYeah, I'd boost this to 1GB.\n \n> checkpoint_completion_target = 0.9 # pgtune wizard 2010-06-25\n> (0.5 via default)\n \nI'd change this one by itself, and probably after some of the other\ntuning is done, so you can get a good sense of \"before\" and \"after\".\nI'm guessing that 0.9 would be better, but I would test it.\n \n> (*) effective_cache_size = 22GB # pgtune wizard 2010-06-25\n> (18GB , specified)\n \nUnless you're running other processes on the box which consume a lot\nof RAM, 18GB is probably lower than ideal, although this setting\nisn't too critical -- it doesn't affect actual RAM allocation; it\njust gives the optimizer a hint about how much might get cached. A\nhigher setting encourages index use; a lower setting encourages\ntable scans.\n \n> work_mem = 192MB # pgtune wizard 2010-06-25\n> (256MB , specified)\n \nWith 300 connections, I think that either of these could lead you to\nexperience intermittent bursts of extreme swapping. I'd drop it to\nsomewhere in the 16MB to 32MB range until I had a connection pool\nconfigured such that it was actually keeping the number of active\nconnections much lower.\n \n> (*) wal_buffers = 8MB # pgtune wizard 2010-06-25\n> (64kb , via default)\n \nSure, I'd boost this.\n \n> checkpoint_segments = 16 # pgtune wizard 2010-06-25\n> (30 , specified)\n \nIf you have the disk space for the 30 segments, I wouldn't reduce\nit.\n \n> shared_buffers = 7680MB # pgtune wizard 2010-06-25\n> (4096 MB , specified)\n \nThis one is perhaps the most sensitive to workload. Anywhere\nbetween 1GB and 8GB might be best for you. Greg Smith has some\ngreat advice on how to tune this for your workload.\n \n> (*) max_connections = 80 # pgtune wizard 2010-06-25\n> (300 , ;-) specified)\n> \n> when i reduce max_connections i start getting errors, i will see\n> again concurrent connections during business hours.\n \nThat's probably a good number to get to, but you have to reduce the\nnumber of actual connections before you set the limit that low.\n \n> lot of our connections are in <IDLE> in transaction state\n \nIf any of these stay in that state for more than a minute or two,\nyou need to address that if you want to get your connection count\nunder control. If any of them persist for hours or days, you need\nto fix it to avoid bloat which can kill performance.\n \n-Kevin\n", "msg_date": "Fri, 25 Jun 2010 11:41:51 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "Rajesh Kumar Mallah wrote:\n> default_statistics_target = 50 # pgtune wizard 2010-06-25 \n> (current 100 via default)\n> (*) effective_cache_size = 22GB # pgtune wizard 2010-06-25 (18GB , \n> specified)\n> checkpoint_segments = 16 # pgtune wizard 2010-06-25 (30 , specified)\n\nYou probably want to keep your existing values for all of these. 
Your \neffective_cache_size setting may be a little low, but I wouldn't worry \nabout changing that right now--you have bigger problems right now.\n\n> (*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB via \n> default)\n> (*) wal_buffers = 8MB # pgtune wizard 2010-06-25 ( 64kb , via default)\n> checkpoint_completion_target = 0.9 # pgtune wizard 2010-06-25 \n> (0.5 via default)\n> shared_buffers = 7680MB # pgtune wizard 2010-06-25 ( 4096 MB , \n> specified)\n\nThese are all potentially better for your system, but you'll use more \nRAM if you make these changes. For example, if you're having swap \ntrouble, you definitely don't want to increase maintenance_work_mem.\n\nI suspect that 8GB of shared_buffers is probably the most you want to \nuse. Most systems stop gaining any more benefit from that somewhere \nbetween 8GB and 10GB, and instead performance gets worse; it's better to \nbe on the low side of that drop. You can probably support 8GB just fine \nif you sort out the work_mem issues.\n\n> (*) max_connections = 80 # pgtune wizard 2010-06-25 ( 300 , ;-) \n> specified )\n> work_mem = 192MB # pgtune wizard 2010-06-25 (256MB , specified)\n\npgtune makes a guess at how many connections you'll have based on \nspecified workload. If you know you have more connections than that, \nyou should specify that on the command line:\n\npgtune -c 300 ...\n\nIt will then re-compute the work_mem figure more accurately using that \nhigher connection count. Right now, it's guessing 192MB based on 80 \nconnections, which is on the high side of reasonable. 192MB with *300* \nconnections is way oversized. My rough computation says that if you \ntell it the number of connections correctly, pgtune will suggest to you \naround 50MB for work_mem.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 25 Jun 2010 13:06:26 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound \tpostgresql\n\tsetup.)" }, { "msg_contents": "Dear tom, we have autocommit off in dbi. Any commit or rollback from\nthe persistent modperl process immediately issues begin work; if the\nmodperl process is waiting for request the database backend remains in\nidle in transaction state. Unless we modify data in a http request we\nneighter issue a commit nor rollback.\n\nOn 6/25/10, Tom Molesworth <[email protected]> wrote:\n> On 25/06/10 16:59, Rajesh Kumar Mallah wrote:\n>> when i reduce max_connections i start getting errors, i will see again\n>> concurrent connections\n>> during business hours. lot of our connections are in <IDLE in\n>> transaction state> during business\n>> this peculiar behavior of mod_perl servers have been discussed in\n>> past i think. dont' remember\n>> if there was any resolution.\n>\n> If connections spend any significant amount of time in <IDLE in\n> transaction> state, that might indicate you're not committing/rolling\n> back after running queries - can you show an example of the code you're\n> using?\n>\n> e.g. something like my $dbh = DBI->connect(...); my $sth =\n> $dbh->prepare(q{select ... 
}); $sth->fetchall_arrayref; $sth->rollback;\n>\n> Tom\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n-- \nSent from Gmail for mobile | mobile.google.com\n", "msg_date": "Sat, 26 Jun 2010 00:30:35 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound postgresql\n\tsetup.)" }, { "msg_contents": "On 25/06/10 20:00, Rajesh Kumar Mallah wrote:\n> Dear tom, we have autocommit off in dbi. Any commit or rollback from\n> the persistent modperl process immediately issues begin work; if the\n> modperl process is waiting for request the database backend remains in\n> idle in transaction state. Unless we modify data in a http request we\n> neighter issue a commit nor rollback.\n> \n\nThe backend shouldn't go to 'idle in transaction' state until there is \nsome activity within the transaction. I've attached an example script to \ndemonstrate this - note that even SELECT queries will leave the handle \nas 'IDLE in transaction' unless you've changed the transaction isolation \nlevel from the default.\n\nAny queries that are idle in transaction will block connection pooling \nand cause old versions of table rows to hang around, as described in \nother replies. Note that this is nothing to do with mod_perl, it's \npurely due to the way transactions are handled - a one-off script would \nalso have this issue, but on exit issues an implicit rollback and \ndisconnects.\n\nTypically your database wrapper would handle this (I think DBIx::Class \nshould take care of this automatically, although I haven't used it myself).\n\nTom", "msg_date": "Fri, 25 Jun 2010 21:18:52 +0100", "msg_from": "Tom Molesworth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql \tsetup.)" }, { "msg_contents": "Dear Greg/Kevin/List ,\n\nMany thanks for the comments regarding the params, I am however able to\nchange an\nexperiment on production in a certain time window , when that arrives i\nshall post\nmy observations.\n\nRajesh Kumar Mallah.\nTradeindia.com - India's Largest B2B eMarketPlace.\n\nDear Greg/Kevin/List ,Many thanks for the comments regarding the params, I am however able to change anexperiment on production in a certain time window , when that arrives i shall postmy observations.  \nRajesh Kumar Mallah.Tradeindia.com - India's Largest B2B eMarketPlace.", "msg_date": "Sat, 26 Jun 2010 08:18:09 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "Dear List,\n\nToday has been good since morning. Although it is a lean day\nfor us but the indications are nice. I thank everyone who shared\nthe concern. I think the most significant change has been to reduce\nshared_buffers from 10G to 4G , this has lead to reduced memory\nusage and some breathing space to the OS.\n\nAlthough i am yet to incorporate the suggestions from pgtune but\ni think the issue of max_connection needs to be addressed first.\n\nI am investigating application issues and about the mechanism that\nputs many backend to '<IDLE> in transaction ' mode for significant\ntimes. I thank Tom for the script he sent. Once that resolves i shall\ncheck pooling as suggested by Kevin, then eventually max_connections\ncan be reduced. 
I shall also check pgpool and pgbouncer if they are\nhelpful in this regard.\n\nI observed that the number of simultaneous connection today (lean day)\nhovers between 1 to 10 , occasionally shooting to 15 but never more than\n20 i would say.\n\n\nI am happy that i/o waits are negligible and cpu is idling also for a while.\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy\nid wa st\n22 0 18468 954120 13460 28491772 0 0 568 1558 13645 18355 62 10\n27 2 0\n16 0 18468 208100 13476 28469084 0 0 580 671 14039 17055 67 13\n19 1 0\n10 2 18812 329032 13400 28356972 0 46 301 1768 13848 17884 68 10\n20 1 0\n16 2 18812 366596 13416 28361620 0 0 325 535 13957 16649 72 11\n16 1 0\n50 1 18812 657048 13432 28366548 0 0 416 937 13823 16667 62 9\n28 1 0\n 6 1 18812 361040 13452 28371908 0 0 323 522 14352 16789 74 12\n14 0 0\n33 0 18812 162760 12604 28210152 0 0 664 1544 14701 16315 66 11\n22 2 0\n 5 0 18812 212028 10764 27921800 0 0 552 648 14567 17737 67 10\n21 1 0\n 6 0 18796 279920 10548 27890388 3 0 359 562 12635 15976 60 9\n30 1 0\n 8 0 18796 438820 10564 27894440 0 0 289 2144 12234 15770 57 8\n34 1 0\n 5 0 18796 531800 10580 27901700 0 0 514 394 12169 16005 59 8\n32 1 0\n17 0 18796 645868 10596 27890704 0 0 423 948 13369 16554 67 10\n23 1 0\n 9 1 18796 1076540 10612 27898604 0 0 598 403 12703 17363 71 10\n18 1 0\n 8 0 18796 1666508 10628 27904748 0 0 430 1123 13314 17421 57 9\n32 1 0\n 9 1 18776 1541444 10644 27913092 1 0 653 954 13194 16822 75 11\n12 1 0\n 8 0 18776 1526728 10660 27921380 0 0 692 788 13073 16987 74 9\n15 1 0\n 8 0 18776 1482304 10676 27933176 0 0 966 2029 13017 16651 76 12\n11 1 0\n21 0 18776 1683260 10700 27937492 0 0 298 663 13110 15796 67 10\n23 1 0\n18 0 18776 2087664 10716 27943512 0 0 406 622 12399 17072 62 9\n28 1 0\n\nWith 300 connections, I think that either of these could lead you to\n> experience intermittent bursts of extreme swapping. I'd drop it to\n> somewhere in the 16MB to 32MB range until I had a connection pool\n> configured such that it was actually keeping the number of active\n> connections much lower.\n>\n> > (*) wal_buffers = 8MB # pgtune wizard 2010-06-25\n> > (64kb , via default)\n>\n> Sure, I'd boost this.\n>\n> > checkpoint_segments = 16 # pgtune wizard 2010-06-25\n> > (30 , specified)\n>\n> If you have the disk space for the 30 segments, I wouldn't reduce\n> it.\n>\n> > shared_buffers = 7680MB # pgtune wizard 2010-06-25\n> > (4096 MB , specified)\n>\n> This one is perhaps the most sensitive to workload. Anywhere\n> between 1GB and 8GB might be best for you. Greg Smith has some\n> great advice on how to tune this for your workload.\n>\n> > (*) max_connections = 80 # pgtune wizard 2010-06-25\n> > (300 , ;-) specified)\n> >\n> > when i reduce max_connections i start getting errors, i will see\n> > again concurrent connections during business hours.\n>\n> That's probably a good number to get to, but you have to reduce the\n> number of actual connections before you set the limit that low.\n>\n> > lot of our connections are in <IDLE> in transaction state\n>\n> If any of these stay in that state for more than a minute or two,\n> you need to address that if you want to get your connection count\n> under control. If any of them persist for hours or days, you need\n> to fix it to avoid bloat which can kill performance.\n>\n> -Kevin\n>\n\nDear List,Today  has been good since morning. Although it is a lean dayfor us but the indications are nice. 
I thank everyone who sharedthe concern. I think the most significant change has been to reduce\nshared_buffers from 10G to 4G , this has lead to reduced memory usage and some breathing space to the OS.Although i am yet to incorporate the suggestions from pgtune but i think the issue of max_connection needs to be addressed first.\nI am investigating application issues and about the mechanism thatputs many backend to '<IDLE> in transaction ' mode for significanttimes. I thank Tom for the script he sent. Once that resolves i shall\ncheck pooling as suggested by Kevin, then eventually max_connections can be reduced. I shall also check pgpool and pgbouncer if they arehelpful in this regard.I observed that the number of simultaneous connection today (lean day)\nhovers between 1 to 10 , occasionally shooting to 15 but never more than 20 i would say.I am happy that i/o waits are negligible and cpu is idling also for a while.procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------\n r  b   swpd   free   buff  cache     si   so    bi    bo   in   cs   us sy id wa st22  0  18468 954120  13460 28491772    0    0   568  1558 13645 18355 62 10 27  2  0\n16  0  18468 208100  13476 28469084    0    0   580   671 14039 17055 67 13 19  1  010  2  18812 329032  13400 28356972    0   46   301  1768 13848 17884 68 10 20  1  0\n16  2  18812 366596  13416 28361620    0    0   325   535 13957 16649 72 11 16  1  050  1  18812 657048  13432 28366548    0    0   416   937 13823 16667 62  9 28  1  0\n 6  1  18812 361040  13452 28371908    0    0   323   522 14352 16789 74 12 14  0  033  0  18812 162760  12604 28210152    0    0   664  1544 14701 16315 66 11 22  2  0\n 5  0  18812 212028  10764 27921800    0    0   552   648 14567 17737 67 10 21  1  0 6  0  18796 279920  10548 27890388    3    0   359   562 12635 15976 60  9 30  1  0\n 8  0  18796 438820  10564 27894440    0    0   289  2144 12234 15770 57  8 34  1  0 5  0  18796 531800  10580 27901700    0    0   514   394 12169 16005 59  8 32  1  0\n17  0  18796 645868  10596 27890704    0    0   423   948 13369 16554 67 10 23  1  0 9  1  18796 1076540  10612 27898604   0    0   598   403 12703 17363 71 10 18  1  0\n 8  0  18796 1666508  10628 27904748   0    0   430  1123 13314 17421 57  9 32  1  0 9  1  18776 1541444  10644 27913092   1    0   653   954 13194 16822 75 11 12  1  0\n 8  0  18776 1526728  10660 27921380   0    0   692   788 13073 16987 74  9 15  1  0 8  0  18776 1482304  10676 27933176   0    0   966  2029 13017 16651 76 12 11  1  0\n21  0  18776 1683260  10700 27937492   0    0   298   663 13110 15796 67 10 23  1  018  0  18776 2087664  10716 27943512   0    0   406   622 12399 17072 62  9 28  1  0\n\nWith 300 connections, I think that either of these could lead you to\nexperience intermittent bursts of extreme swapping.  I'd drop it to\nsomewhere in the 16MB to 32MB range until I had a connection pool\nconfigured such that it was actually keeping the number of active\nconnections much lower.\n\n> (*) wal_buffers = 8MB # pgtune wizard 2010-06-25\n> (64kb , via default)\n\nSure, I'd boost this.\n\n> checkpoint_segments = 16 # pgtune wizard 2010-06-25\n> (30 , specified)\n\nIf you have the disk space for the 30 segments, I wouldn't reduce\nit.\n\n> shared_buffers = 7680MB # pgtune wizard 2010-06-25\n> (4096 MB , specified)\n\nThis one is perhaps the most sensitive to workload.  Anywhere\nbetween 1GB and 8GB might be best for you.  
Greg Smith has some\ngreat advice on how to tune this for your workload.\n\n> (*) max_connections = 80 # pgtune wizard 2010-06-25\n> (300 , ;-) specified)\n>\n> when i reduce max_connections i start getting errors, i will see\n> again concurrent connections during business hours.\n\nThat's probably a good number to get to, but you have to reduce the\nnumber of actual connections before you set the limit that low.\n\n> lot of our connections are in <IDLE> in transaction state\n\nIf any of these stay in that state for more than a minute or two,\nyou need to address that if you want to get your connection count\nunder control.  If any of them persist for hours or days, you need\nto fix it to avoid bloat which can kill performance.\n\n-Kevin", "msg_date": "Sat, 26 Jun 2010 15:23:43 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" }, { "msg_contents": "Dear List ,\n\nA simple (perl) script was made to 'watch' the state transitions of\nback ends. On startup It captures a set of pids for watching\nand displays a visual representation of the states for next 30 intervals\nof 1 seconds each. The X axis is interval cnt, Y axis is pid and the\norigin is on top-left.\n\nThe state value can be Active Query (*) , or <IDLE> indicated by '.' or\n'<IDLE> in transaction' indicated by '?' . for my server below is a random\noutput (during lean hours and on a lean day).\n\n----------------------------------------------------------------------------------------------------\n PID 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23\n24 25 26 27 28 29 30\n----------------------------------------------------------------------------------------------------\n 4334 ? ? ? ? * ? ? ? ? ? * ? ? ? ? ? ? ? ?\n 6904 ? ? . . . * ? . . . . . . ? ? .\n 6951 ? ? ? . . . . ? ? ? ? ? . . . ? ? ? . . . ? .\n. . . . ? ? .\n 7009 ? * ? ? . . . . . . . . . * * . * ? ? . . . *\n? ? ? . . . ?\n 7077 ? . ? . . . * . ? . . . . ? . . . . . . . ? .\n. ? . . . ? ?\n 7088 ? . . ? . ? ? ? . . . . . . ? . . ? ? * . . .\n. . ? . ? . *\n 7091 ? . . * ? ? ? ? ? ? ? * ? . . ? * . * . . . .\n. . . . . . .\n 7093 ? ? . ? . . . . ? . ? * . . . . . . . . . ? ?\n? . ? ? ? . .\n 7112 * * . . . ? ? ? . . . . . . . . ? ? . ? . ? .\n. ? . . . . .\n 7135 ? . . * . ? ? ? . ? ? . . . ? . . . . . . . ?\n. . . ? ? . .\n 7142 ? . ? . . . . . . * . . . ? . . . . . . . . .\n. . . . .\n 7166 ? . ? ? ? * * . ? * . ? . . . ? . ? ? . . . *\n. . . ? . . .\n 8202 ? ? . . . * . ? . . . . . . . * ? . . . ? ? .\n. . . ? ? ? .\n 8223 ? . . . . . . ?\n 8237 ? ? ? . ? ? ? ? . . . . . . . ? . . . . . ? .\n. * ? . . . .\n 8251 ? . ? . . . . . ? ? . . . * ? . . . ? . . . .\n. . . . . . .\n 8278 ? ? . . . . ? . . . . . . . ? . . . . . . ? ?\n. . * . . . .\n 8290 ? . .\n 8294 ? ? . . . . . . . . . . . . ? . . . ? ? . . .\n. . . . . * *\n 8303 ? * ? . ? ? ? . ? ? ? . . . . * . . . . . . .\n. . . . . . .\n 8306 ? ? . . . ? . . . ? . . . . . . * . . .\n 8309 * ? ? ? ? . . . ? . . .\n 8329 ? . * * . . . . . . . * . ? . * . ? . * . * ?\n. . .\n----------------------------------------------------------------------------------------------------\n (*) Active Query , (.) Idle , (?) Idle in transaction,<blank> backend\nover.\n----------------------------------------------------------------------------------------------------\n\nLooks like most of the graph space is filled with (.) or (?) and very\nless active queries (long running queries > 1s). 
on a busy day and busi hour\ni shall check the and post again. The script is presented which depends only\non perl , DBI and DBD::Pg.\n\nscript pasted here:\nhttp://pastebin.com/mrjSZfLB\n\nRegds\nmallah.\n\n\nOn Sat, Jun 26, 2010 at 3:23 PM, Rajesh Kumar Mallah <\[email protected]> wrote:\n\n> Dear List,\n>\n> Today has been good since morning. Although it is a lean day\n> for us but the indications are nice. I thank everyone who shared\n> the concern. I think the most significant change has been to reduce\n> shared_buffers from 10G to 4G , this has lead to reduced memory\n> usage and some breathing space to the OS.\n>\n> Although i am yet to incorporate the suggestions from pgtune but\n> i think the issue of max_connection needs to be addressed first.\n>\n> I am investigating application issues and about the mechanism that\n> puts many backend to '<IDLE> in transaction ' mode for significant\n> times. I thank Tom for the script he sent. Once that resolves i shall\n> check pooling as suggested by Kevin, then eventually max_connections\n> can be reduced. I shall also check pgpool and pgbouncer if they are\n> helpful in this regard.\n>\n> I observed that the number of simultaneous connection today (lean day)\n> hovers between 1 to 10 , occasionally shooting to 15 but never more than\n> 20 i would say.\n>\n>\n> I am happy that i/o waits are negligible and cpu is idling also for a\n> while.\n>\n>\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa st\n> 22 0 18468 954120 13460 28491772 0 0 568 1558 13645 18355 62 10\n> 27 2 0\n> 16 0 18468 208100 13476 28469084 0 0 580 671 14039 17055 67 13\n> 19 1 0\n> 10 2 18812 329032 13400 28356972 0 46 301 1768 13848 17884 68 10\n> 20 1 0\n> 16 2 18812 366596 13416 28361620 0 0 325 535 13957 16649 72 11\n> 16 1 0\n> 50 1 18812 657048 13432 28366548 0 0 416 937 13823 16667 62 9\n> 28 1 0\n> 6 1 18812 361040 13452 28371908 0 0 323 522 14352 16789 74 12\n> 14 0 0\n> 33 0 18812 162760 12604 28210152 0 0 664 1544 14701 16315 66 11\n> 22 2 0\n> 5 0 18812 212028 10764 27921800 0 0 552 648 14567 17737 67 10\n> 21 1 0\n> 6 0 18796 279920 10548 27890388 3 0 359 562 12635 15976 60 9\n> 30 1 0\n> 8 0 18796 438820 10564 27894440 0 0 289 2144 12234 15770 57 8\n> 34 1 0\n> 5 0 18796 531800 10580 27901700 0 0 514 394 12169 16005 59 8\n> 32 1 0\n> 17 0 18796 645868 10596 27890704 0 0 423 948 13369 16554 67 10\n> 23 1 0\n> 9 1 18796 1076540 10612 27898604 0 0 598 403 12703 17363 71 10\n> 18 1 0\n> 8 0 18796 1666508 10628 27904748 0 0 430 1123 13314 17421 57 9\n> 32 1 0\n> 9 1 18776 1541444 10644 27913092 1 0 653 954 13194 16822 75 11\n> 12 1 0\n> 8 0 18776 1526728 10660 27921380 0 0 692 788 13073 16987 74 9\n> 15 1 0\n> 8 0 18776 1482304 10676 27933176 0 0 966 2029 13017 16651 76 12\n> 11 1 0\n> 21 0 18776 1683260 10700 27937492 0 0 298 663 13110 15796 67 10\n> 23 1 0\n> 18 0 18776 2087664 10716 27943512 0 0 406 622 12399 17072 62 9\n> 28 1 0\n>\n>\n> With 300 connections, I think that either of these could lead you to\n>> experience intermittent bursts of extreme swapping. 
I'd drop it to\n>> somewhere in the 16MB to 32MB range until I had a connection pool\n>> configured such that it was actually keeping the number of active\n>> connections much lower.\n>>\n>> > (*) wal_buffers = 8MB # pgtune wizard 2010-06-25\n>> > (64kb , via default)\n>>\n>> Sure, I'd boost this.\n>>\n>> > checkpoint_segments = 16 # pgtune wizard 2010-06-25\n>> > (30 , specified)\n>>\n>> If you have the disk space for the 30 segments, I wouldn't reduce\n>> it.\n>>\n>> > shared_buffers = 7680MB # pgtune wizard 2010-06-25\n>> > (4096 MB , specified)\n>>\n>> This one is perhaps the most sensitive to workload. Anywhere\n>> between 1GB and 8GB might be best for you. Greg Smith has some\n>> great advice on how to tune this for your workload.\n>>\n>> > (*) max_connections = 80 # pgtune wizard 2010-06-25\n>> > (300 , ;-) specified)\n>> >\n>> > when i reduce max_connections i start getting errors, i will see\n>> > again concurrent connections during business hours.\n>>\n>> That's probably a good number to get to, but you have to reduce the\n>> number of actual connections before you set the limit that low.\n>>\n>> > lot of our connections are in <IDLE> in transaction state\n>>\n>> If any of these stay in that state for more than a minute or two,\n>> you need to address that if you want to get your connection count\n>> under control. If any of them persist for hours or days, you need\n>> to fix it to avoid bloat which can kill performance.\n>>\n>> -Kevin\n>>\n>\n>\n\n", "msg_date": "Sat, 26 Jun 2010 18:52:42 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden spurt in swap utilization (was:cpu bound\n\tpostgresql setup.)" } ]
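The action item left open in the thread above is finding which backends sit in '<IDLE> in transaction' and for how long. A minimal check from SQL, assuming the 8.3-era pg_stat_activity columns this thread is already using (procpid, current_query, xact_start); on PostgreSQL 9.2 and later the same idea is expressed with the pid, state and state_change columns and state = 'idle in transaction':

-- backends holding a transaction open while doing nothing, oldest first
SELECT procpid,
       usename,
       now() - xact_start  AS xact_age,
       now() - query_start AS idle_since_last_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
 ORDER BY xact_age DESC;

Anything that stays near the top of this list for more than a minute or two is what Kevin's advice above says to chase in the application before trying to lower max_connections.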
[ { "msg_contents": "I'm trying to find someone who has a system with an AMD \"Magny Cours\" \n6100 series processor in it, like the Opteron 6174 or 6176 SE, who'd be \nwilling to run a short test for me during an idle period to collect some \nperformance data about it. Can't be running Windows, probably easiest \nto compile the test programs under Linux. If you have one of those \nprocessors and would be willing to help me out, please drop me an \noff-list note and I'll tell you what I'm looking for. Will need \npermission to publish the results to the community.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 25 Jun 2010 14:48:07 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Any recent AMD purchases?" } ]
[ { "msg_contents": "I am in the process of moving a system that has been built around FoxPro\ntables for the last 18 years into a PostgreSQL based system.\n\nOver time I came up with decent strategies for making the FoxPro tables\nwork well with the workload that was placed on them, but we are getting to\nthe point that the locking mechanisms are causing problems when some of\nthe more used tables are being written to.\n\nWith the FoxPro tables I had one directory that contained the tables that\nhad global data that was common to all clients. Things like documents that\nhad been received and logged, checks that had been cut, etc. Then each\nclient had his own directory which housed tables that had information\nrelating to that specific client. Setting things up like this kept me from\nhaving any tables that were too terribly large so record addition and\nindex creation were not very time consuming.\n\nI am wondering how I should architect this in PostgreSQL. Should I follow\na similar strategy and have a separate database for each client and one\ndatabase that contains the global data? With the dBase and ISAM tables I\nhave a good idea of how to handle them since I have been working with them\nsince dBASE originally came out. With the PostgreSQL type tables I am not\nso certain how the data is arranged within the one file. Does having the\ndata all in one database allow PostgreSQL to better utilize indexes and\ncaches or does having a number of smaller databases provide performance\nincreases? In case it is important, there are 2000 clients involved, so\nthat would be 2000 databases if I followed my current FoxPro related\nstructure. Of course, I suppose it is always possible to combine a number\nof groups into a database if the number of databases is an issue.\n\nTables within the client specific databases are generally name and address\ninformation as well as tables for 10 different types of accounts which\nrequire different structures and those tables hold anywhere from 10,000\ntransactions a piece for some smaller groups and 1 million for larger\ngroups. I believe we have read to write ratio of about 1 to 15.\n\nThanks for any input.\n\n", "msg_date": "Fri, 25 Jun 2010 15:36:07 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Architecting a database" }, { "msg_contents": "On 26/06/2010 3:36 AM, [email protected] wrote:\n> I am in the process of moving a system that has been built around FoxPro\n> tables for the last 18 years into a PostgreSQL based system.\n>\n> Over time I came up with decent strategies for making the FoxPro tables\n> work well with the workload that was placed on them, but we are getting to\n> the point that the locking mechanisms are causing problems when some of\n> the more used tables are being written to.\n>\n> With the FoxPro tables I had one directory that contained the tables that\n> had global data that was common to all clients. Things like documents that\n> had been received and logged, checks that had been cut, etc. Then each\n> client had his own directory which housed tables that had information\n> relating to that specific client.\n\n> I am wondering how I should architect this in PostgreSQL. 
Should I follow\n> a similar strategy and have a separate database for each client and one\n> database that contains the global data?\n\nNo - use separate schema within a single database.\n\nYou can't do inter-database queries in PostgreSQL, and most things \nyou're used to using different \"databases\" for are best done with \nseparate schema (namespaces) within one database.\n\nA schema is almost a logical directory, really.\n\n> With the dBase and ISAM tables I\n> have a good idea of how to handle them since I have been working with them\n> since dBASE originally came out. With the PostgreSQL type tables I am not\n> so certain how the data is arranged within the one file. Does having the\n> data all in one database allow PostgreSQL to better utilize indexes and\n> caches or does having a number of smaller databases provide performance\n> increases?\n\nIt doesn't really make much difference, and for easier management a \nsingle database for a single app is very much the way to go.\n\n> In case it is important, there are 2000 clients involved, so\n> that would be 2000 databases if I followed my current FoxPro related\n> structure.\n\nNonono! Definitely use different schema if you need to separate things \nthis way.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 26 Jun 2010 12:42:59 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "On Fri, Jun 25, 2010 at 3:36 PM, <[email protected]> wrote:\n> I am in the process of moving a system that has been built around FoxPro\n> tables for the last 18 years into a PostgreSQL based system.\n>\n> Over time I came up with decent strategies for making the FoxPro tables\n> work well with the workload that was placed on them, but we are getting to\n> the point that the locking mechanisms are causing problems when some of\n> the more used tables are being written to.\n>\n> With the FoxPro tables I had one directory that contained the tables that\n> had global data that was common to all clients. Things like documents that\n> had been received and logged, checks that had been cut, etc. Then each\n> client had his own directory which housed tables that had information\n> relating to that specific client. Setting things up like this kept me from\n> having any tables that were too terribly large so record addition and\n> index creation were not very time consuming.\n>\n> I am wondering how I should architect this in PostgreSQL. Should I follow\n> a similar strategy and have a separate database for each client and one\n> database that contains the global data? With the dBase and ISAM tables I\n> have a good idea of how to handle them since I have been working with them\n> since dBASE originally came out. With the PostgreSQL type tables I am not\n> so certain how the data is arranged within the one file. Does having the\n> data all in one database allow PostgreSQL to better utilize indexes and\n> caches or does having a number of smaller databases provide performance\n> increases? In case it is important, there are 2000 clients involved, so\n> that would be 2000 databases if I followed my current FoxPro related\n> structure. 
Of course, I suppose it is always possible to combine a number\n> of groups into a database if the number of databases is an issue.\n>\n> Tables within the client specific databases are generally name and address\n> information as well as tables for 10 different types of accounts which\n> require different structures and those tables hold anywhere from 10,000\n> transactions a piece for some smaller groups and 1 million for larger\n> groups. I believe we have read to write ratio of about 1 to 15.\n>\n> Thanks for any input.\n\ncongratulations. I developed on foxpro for years and I can tell you\nyou've come to the right place: your porting process should be\nrelatively pain free. foxpro had a couple of nice features that\naren't found in too many other places: expression indexes (which we\nhave) and first class queries (we have, if you count pl/pgsql).\nfoxpro was also an enormous headache on so many levels which is why I\nassume you are here. I've long harbored suspicion that Microsoft\nenjoyed adding to those headaches rather than subtracting from them.\n\nOthers have answered the data organization question. You definitely\nwant to use schemas to logically separate private application data\ninside your database...this is the purpose of schemas basically.\n\nData in SQL tables is considered unordered (we have no concept of\nrecno) unless an explicit ordering criteria is given. Direct access\nto the tables (BROWSE) has no analog in SQL. A query is sent to the\ndatabase, results are gathered, buffered, and sent back. This is the\n#1 thing you will have to get used to coming from dbase style coding.\n\nLocking model in postgres is completely different (better). Records\nare implicitly locked by writing to them and the locks are released at\ntransaction end (optimistic locking plus). As a bonus, data doesn't\nget corrupted when you break the rules =).\n\nFor backend data processing tasks I advise you to use pl/pgsql.\nComing from foxpro you should have no problems. You are going to have\nto replace your GUI and report generator. First question is whether\nor not go web...and following that which technologies to use. You may\nhave already figured all this out but perhaps you haven't. Foxpro\ndoes have odbc connectivity so you may have entertained ideas of\nsimply moving your application w/o porting the code. This may or may\nnot work (just a heads up) -- the performance of foxpro odbc\ntranslation is not so great and some of your code won't translate\nwell. If you didn't use foxpro for the front end, it's going to\ndepend on what you're using.\n\nOnce you get used to postgres and how it reads and writes data, don't\nworry so much about performance. As long as you avoid certain\nparadigms postgres doesn't write, the performance of your new database\nshould absolutely nuke what you're used to, especially in the multi\nuser case. You will have no problems on the backend -- it's the front\nend where your main concerns should be. good luck.\n\nmerlin\n", "msg_date": "Sat, 26 Jun 2010 11:49:27 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "[email protected] writes:\n> I am wondering how I should architect this in PostgreSQL. Should I follow\n> a similar strategy and have a separate database for each client and one\n> database that contains the global data? \n\nAs others said already, there's more problems to foresee doing so that\nthere are advantages. 
If you must separate data for security concerns,\nyour situation would be much more comfortable using schema.\n\nIf it's all about performances, see about partitioning the data, and\nmaybe not even on the client id but monthly, e.g., depending on the\nqueries you run in your application.\n\nRegards,\n-- \ndim\n", "msg_date": "Mon, 28 Jun 2010 10:01:19 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "Thanks for all of the input everyone.\n\nI believe I am going to put together a test case using schemas and\npartitioning and then doubling the amount of data currently in the system\nto give me an idea of how things will be performing a couple of years down\nthe road.\n\nI was looking at a server using the new Opteron 6100 series for the new\nserver and it would have 32 cores, but the speed is 2ghz. I read a post\nearlier today that mentioned in passing that it was better to have a\nfaster processor than more cores. I was wondering whether or not this\nwould be a good selection since there are CPUs in the Intel branch that\nare quad core up to 3.3ghz.\n\n", "msg_date": "Wed, 30 Jun 2010 14:12:27 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "On Jun 30, 2010, at 11:12 AM, [email protected] wrote:\n\n> I read a post\n> earlier today that mentioned in passing that it was better to have a\n> faster processor than more cores.\n\nThis really depends on your workload and how much you value latency vs. throughput. If you tend to have a lot of very simple queries, more cores => more throughput, and it may not matter much if your queries take 20ms or 30ms if you can be doing a dozen or two more of them concurrently in an AMD system than in an Intel one. On the other hand, if you have less clients, or more latency-sensitive clients, then fewer-but-faster cores is usually a win.\n\nEither way, the amount of power you can get for your money is pretty impressive.", "msg_date": "Wed, 30 Jun 2010 11:18:33 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" } ]
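A minimal sketch of the schema-per-client layout recommended in the thread above, with shared reference data in the public schema and one namespace per client; every name here (client0001, account, lookup_codes, the role) is hypothetical and stands in for the real entities:

-- one namespace per client, all inside a single database
CREATE SCHEMA client0001;

-- data common to all clients stays in the public schema
CREATE TABLE public.lookup_codes (code text PRIMARY KEY, descr text);

-- the same table definitions are created once per client schema
CREATE TABLE client0001.account (
    account_id serial PRIMARY KEY,
    name       text NOT NULL,
    balance    numeric(12,2) DEFAULT 0
);

-- a per-client login confined to its own schema (plus public)
CREATE ROLE client0001_user LOGIN PASSWORD 'changeme';
GRANT USAGE ON SCHEMA client0001 TO client0001_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON client0001.account TO client0001_user;
ALTER ROLE client0001_user SET search_path = client0001, public;

A privileged reporting role can still join client0001.account to client0002.account when needed, which is the cross-client querying advantage over 2000 separate databases noted above; if individual client tables grow large, the partitioning suggestion (monthly, or on another natural key) applies within each schema.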
[ { "msg_contents": "Hello,\n\nWhen I run an SQL to create new tables and indexes is when Postgres consumes all CPU and impacts other users on the server.\n\nWe are running Postgres 8.3.7 on a Sun M5000 with 2 x quad core CPUs (16 threads) running Solaris 10.\n\nI've attached the sar data at the time of the run- here's a snip-it below.\n\nAny ideas would be greatly appreciated.\n\nThanks!\nDeb\n\n****************************************************\n\nHere, note the run queue, the left column. That is the number of processes waiting to run. 97 processes waiting to run at any time with only eight CPU cores looks very busy. \n\nroot@core2 # sar -q 5 500\n\nSunOS core2 5.10 Generic_142900-11 sun4u 06/17/2010\n\n12:01:50 runq-sz %runocc swpq-sz %swpocc\n12:01:55 1.8 80 0.0 0\n12:02:00 1.0 20 0.0 0\n12:02:05 1.0 20 0.0 0\n12:02:10 0.0 0 0.0 0\n12:02:15 0.0 0 0.0 0\n12:02:21 3.3 50 0.0 0\n12:02:26 1.0 20 0.0 0\n12:02:31 1.0 60 0.0 0\n12:02:36 1.0 20 0.0 0\n12:02:42 27.0 50 0.0 0\n12:02:49 32.8 83 0.0 0\n12:02:55 76.0 100 0.0 0\n12:03:01 66.1 100 0.0 0\n12:03:07 43.8 100 0.0 0\n12:03:13 52.0 100 0.0 0\n12:03:19 91.2 100 0.0 0\n12:03:26 97.8 83 0.0 0\n12:03:33 63.7 100 0.0 0\n12:03:39 67.4 100 0.0 0\n12:03:47 41.5 100 0.0 0\n12:03:53 82.0 83 0.0 0\n12:03:59 88.7 100 0.0 0\n12:04:06 87.7 50 0.0 0\n12:04:12 41.3 100 0.0 0\n12:04:17 94.3 50 0.0 0\n12:04:22 1.0 20 0.0 0\n12:04:27 3.3 60 0.0 0\n12:04:32 1.0 20 0.0 0\n12:04:38 0.0 0 0.0 0\n\n", "msg_date": "Fri, 25 Jun 2010 22:25:41 +0000", "msg_from": "Deborah Fuentes <[email protected]>", "msg_from_op": true, "msg_subject": "Extremely high CPU usage when building tables" }, { "msg_contents": "Hi,\n\n1. Did you also check vmstat output , from sar output the i/o wait is not\nclear.\n2. i gues you must be populating the database between creating tables and\ncreating\n indexes. creating indexes require sorting of data that may be cpu\nintensive, loading/populating\n the data may saturate the i/o bandwidth . I think you should check when\nthe max cpu utilisation\n is taking place exactly.\n\nregds\nRajesh Kumar Mallah.\n\nOn Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes <[email protected]>wrote:\n\n> Hello,\n>\n> When I run an SQL to create new tables and indexes is when Postgres\n> consumes all CPU and impacts other users on the server.\n>\n> We are running Postgres 8.3.7 on a Sun M5000 with 2 x quad core CPUs (16\n> threads) running Solaris 10.\n>\n> I've attached the sar data at the time of the run- here's a snip-it below.\n>\n> Any ideas would be greatly appreciated.\n>\n> Thanks!\n> Deb\n>\n> ****************************************************\n>\n> Here, note the run queue, the left column. That is the number of processes\n> waiting to run. 
97 processes waiting to run at any time with only eight CPU\n> cores looks very busy.\n>\n> root@core2 # sar -q 5 500\n>\n> SunOS core2 5.10 Generic_142900-11 sun4u 06/17/2010\n>\n> 12:01:50 runq-sz %runocc swpq-sz %swpocc\n> 12:01:55 1.8 80 0.0 0\n> 12:02:00 1.0 20 0.0 0\n> 12:02:05 1.0 20 0.0 0\n> 12:02:10 0.0 0 0.0 0\n> 12:02:15 0.0 0 0.0 0\n> 12:02:21 3.3 50 0.0 0\n> 12:02:26 1.0 20 0.0 0\n> 12:02:31 1.0 60 0.0 0\n> 12:02:36 1.0 20 0.0 0\n> 12:02:42 27.0 50 0.0 0\n> 12:02:49 32.8 83 0.0 0\n> 12:02:55 76.0 100 0.0 0\n> 12:03:01 66.1 100 0.0 0\n> 12:03:07 43.8 100 0.0 0\n> 12:03:13 52.0 100 0.0 0\n> 12:03:19 91.2 100 0.0 0\n> 12:03:26 97.8 83 0.0 0\n> 12:03:33 63.7 100 0.0 0\n> 12:03:39 67.4 100 0.0 0\n> 12:03:47 41.5 100 0.0 0\n> 12:03:53 82.0 83 0.0 0\n> 12:03:59 88.7 100 0.0 0\n> 12:04:06 87.7 50 0.0 0\n> 12:04:12 41.3 100 0.0 0\n> 12:04:17 94.3 50 0.0 0\n> 12:04:22 1.0 20 0.0 0\n> 12:04:27 3.3 60 0.0 0\n> 12:04:32 1.0 20 0.0 0\n> 12:04:38 0.0 0 0.0 0\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi,1. Did you also check vmstat output , from sar output the i/o wait is not clear.2.  i gues you must be populating the database between creating tables and creating     indexes. creating indexes require sorting of data that may be cpu intensive, loading/populating\n     the data may saturate the i/o bandwidth . I think you should check when the max cpu utilisation     is taking place exactly.regdsRajesh Kumar Mallah.On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes <[email protected]> wrote:\nHello,\n\nWhen I run an SQL to create new tables and indexes is when Postgres consumes all CPU and impacts other users on the server.\n\nWe are running Postgres 8.3.7 on a Sun M5000 with 2 x quad core CPUs (16 threads) running Solaris 10.\n\nI've attached the sar data at the time of the run- here's a snip-it below.\n\nAny ideas would be greatly appreciated.\n\nThanks!\nDeb\n\n****************************************************\n\nHere, note the run queue, the left column. That is the number of processes waiting to run. 
97 processes waiting to run at any time with only eight CPU cores looks very busy.\n\nroot@core2 # sar -q 5 500\n\nSunOS core2 5.10 Generic_142900-11 sun4u    06/17/2010\n\n12:01:50 runq-sz %runocc swpq-sz %swpocc\n12:01:55     1.8      80     0.0       0\n12:02:00     1.0      20     0.0       0\n12:02:05     1.0      20     0.0       0\n12:02:10     0.0       0     0.0       0\n12:02:15     0.0       0     0.0       0\n12:02:21     3.3      50     0.0       0\n12:02:26     1.0      20     0.0       0\n12:02:31     1.0      60     0.0       0\n12:02:36     1.0      20     0.0       0\n12:02:42    27.0      50     0.0       0\n12:02:49    32.8      83     0.0       0\n12:02:55    76.0     100     0.0       0\n12:03:01    66.1     100     0.0       0\n12:03:07    43.8     100     0.0       0\n12:03:13    52.0     100     0.0       0\n12:03:19    91.2     100     0.0       0\n12:03:26    97.8      83     0.0       0\n12:03:33    63.7     100     0.0       0\n12:03:39    67.4     100     0.0       0\n12:03:47    41.5     100     0.0       0\n12:03:53    82.0      83     0.0       0\n12:03:59    88.7     100     0.0       0\n12:04:06    87.7      50     0.0       0\n12:04:12    41.3     100     0.0       0\n12:04:17    94.3      50     0.0       0\n12:04:22     1.0      20     0.0       0\n12:04:27     3.3      60     0.0       0\n12:04:32     1.0      20     0.0       0\n12:04:38     0.0       0     0.0       0\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 1 Jul 2010 15:19:49 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely high CPU usage when building tables" }, { "msg_contents": "We did see a small spike in disk I/O, but only had wait I/O for less than 10 seconds total. The low CPU idle event goes on for several minutes, wait I/O or heavier I/O does not correlate to the extended period.\n\nSystem time does jump up at the same time as the user time. System times of 15% when CPU us at 60% (25% idle) is around the average for this test. We believe that jump is related to showing time spent getting processes on and off CPU to execute. No general system wait I/O is observed during this time.\n\nFive second samples of the Fiber disk SAN volumes. Wait I/O is listed in red. 
Note it's ten seconds or less.\n\n extended device statistics\n r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n 1.0 296.9 0.0 136.5 0.0 62.3 0.0 209.0 0 168 c3\n 0.0 11.4 0.0 0.1 0.0 0.5 0.0 41.6 0 30 c3t60A98000572D4275684A563761586D71d0\n 0.4 28.0 0.0 0.9 0.0 3.0 0.0 104.4 0 41 c3t60A98000572D4275684A5638364D644Ed0\n 0.6 257.5 0.0 135.6 0.0 58.8 0.0 227.9 0 98 c3t60A98000572D4275684A56385468434Fd0\n extended device statistics\n r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n 13.8 721.2 0.1 133.9 0.0 75.0 0.0 102.0 0 200 c3\n 0.0 88.8 0.0 6.0 0.0 19.0 0.0 213.7 0 65 c3t60A98000572D4275684A563761586D71d0\n 2.4 86.6 0.0 1.2 0.0 1.6 0.0 18.0 0 39 c3t60A98000572D4275684A5638364D644Ed0\n 11.4 545.8 0.1 126.7 0.0 54.4 0.0 97.7 0 97 c3t60A98000572D4275684A56385468434Fd0\n extended device statistics\n r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n 3.6 769.0 0.0 123.2 29.4 182.9 38.1 236.7 5 220 c3\n 0.0 104.2 0.0 1.4 0.0 34.3 0.0 329.0 0 46 c3t60A98000572D4275684A563761586D71d0\n 1.0 77.0 0.0 13.1 0.0 8.1 0.0 103.2 0 77 c3t60A98000572D4275684A5638364D644Ed0\n 2.6 587.8 0.0 108.8 29.4 140.5 49.9 238.0 41 98 c3t60A98000572D4275684A56385468434Fd0\n extended device statistics\n r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n 9.4 761.2 0.1 133.1 3.3 122.6 4.3 159.1 1 196 c3\n 0.0 33.8 0.0 0.3 0.0 2.1 0.0 63.5 0 30 c3t60A98000572D4275684A563761586D71d0\n 7.4 94.8 0.1 1.8 0.0 16.2 0.0 158.6 0 66 c3t60A98000572D4275684A5638364D644Ed0\n 2.0 632.6 0.0 131.0 3.3 104.3 5.2 164.3 10 99 c3t60A98000572D4275684A56385468434Fd0\n extended device statistics\n r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n 2.8 588.2 0.0 126.0 0.0 112.6 0.0 190.5 0 239 c3\n 0.0 25.0 0.0 0.2 0.0 1.8 0.0 72.3 0 52 c3t60A98000572D4275684A563761586D71d0\n 0.0 157.4 0.0 12.0 0.0 10.7 0.2 68.0 0 87 c3t60A98000572D4275684A5638364D644Ed0\n 2.8 405.8 0.0 113.8 0.0 100.1 0.0 244.9 0 100 c3t60A98000572D4275684A56385468434Fd0\n\n\nThanks!\nDeb\n\nFrom: Rajesh Kumar Mallah [mailto:[email protected]]\nSent: Thursday, July 01, 2010 2:50 AM\nTo: Deborah Fuentes\nCc: [email protected]\nSubject: Re: [PERFORM] Extremely high CPU usage when building tables\n\nHi,\n\n1. Did you also check vmstat output , from sar output the i/o wait is not clear.\n2. i gues you must be populating the database between creating tables and creating\n indexes. creating indexes require sorting of data that may be cpu intensive, loading/populating\n the data may saturate the i/o bandwidth . I think you should check when the max cpu utilisation\n is taking place exactly.\n\nregds\nRajesh Kumar Mallah.\nOn Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes <[email protected]<mailto:[email protected]>> wrote:\nHello,\n\nWhen I run an SQL to create new tables and indexes is when Postgres consumes all CPU and impacts other users on the server.\n\nWe are running Postgres 8.3.7 on a Sun M5000 with 2 x quad core CPUs (16 threads) running Solaris 10.\n\nI've attached the sar data at the time of the run- here's a snip-it below.\n\nAny ideas would be greatly appreciated.\n\nThanks!\nDeb\n\n****************************************************\n\nHere, note the run queue, the left column. That is the number of processes waiting to run. 
97 processes waiting to run at any time with only eight CPU cores looks very busy.\n\nroot@core2 # sar -q 5 500\n\nSunOS core2 5.10 Generic_142900-11 sun4u 06/17/2010\n\n12:01:50 runq-sz %runocc swpq-sz %swpocc\n12:01:55 1.8 80 0.0 0\n12:02:00 1.0 20 0.0 0\n12:02:05 1.0 20 0.0 0\n12:02:10 0.0 0 0.0 0\n12:02:15 0.0 0 0.0 0\n12:02:21 3.3 50 0.0 0\n12:02:26 1.0 20 0.0 0\n12:02:31 1.0 60 0.0 0\n12:02:36 1.0 20 0.0 0\n12:02:42 27.0 50 0.0 0\n12:02:49 32.8 83 0.0 0\n12:02:55 76.0 100 0.0 0\n12:03:01 66.1 100 0.0 0\n12:03:07 43.8 100 0.0 0\n12:03:13 52.0 100 0.0 0\n12:03:19 91.2 100 0.0 0\n12:03:26 97.8 83 0.0 0\n12:03:33 63.7 100 0.0 0\n12:03:39 67.4 100 0.0 0\n12:03:47 41.5 100 0.0 0\n12:03:53 82.0 83 0.0 0\n12:03:59 88.7 100 0.0 0\n12:04:06 87.7 50 0.0 0\n12:04:12 41.3 100 0.0 0\n12:04:17 94.3 50 0.0 0\n12:04:22 1.0 20 0.0 0\n12:04:27 3.3 60 0.0 0\n12:04:32 1.0 20 0.0 0\n12:04:38 0.0 0 0.0 0\n\n\n--\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n\n\n\n\nWe did see a small spike in disk\nI/O, but only had wait I/O for less than 10 seconds total. The low CPU idle\nevent goes on for several minutes, wait I/O or heavier I/O does not correlate\nto the extended period.\n \nSystem time does jump up at the\nsame time as the user time. System times of 15% when CPU us at 60% (25% idle)\nis around the average for this test. We believe that jump is related to showing\ntime spent getting processes on and off CPU to execute. No general system wait\nI/O is observed during this time.\n \nFive second samples of the Fiber\ndisk SAN volumes. Wait I/O is listed in red. Note it’s ten seconds or less.\n \n                   \nextended device\nstatistics             \n\n   \nr/s    w/s   Mr/s   Mw/s wait actv wsvc_t\nasvc_t  %w  %b device\n   \n1.0  296.9    0.0  136.5  0.0\n62.3    0.0  209.0   0 168 c3\n   \n0.0   11.4    0.0    0.1 \n0.0  0.5    0.0   41.6   0  30 c3t60A98000572D4275684A563761586D71d0\n   \n0.4   28.0    0.0    0.9 \n0.0  3.0    0.0  104.4   0  41\nc3t60A98000572D4275684A5638364D644Ed0\n   \n0.6  257.5    0.0  135.6  0.0\n58.8    0.0  227.9   0  98\nc3t60A98000572D4275684A56385468434Fd0\n                   \nextended device\nstatistics             \n\n   \nr/s    w/s   Mr/s   Mw/s wait actv wsvc_t\nasvc_t  %w  %b device\n  \n13.8  721.2    0.1  133.9  0.0\n75.0    0.0  102.0   0 200 c3\n   \n0.0   88.8    0.0    6.0  0.0\n19.0    0.0  213.7   0  65\nc3t60A98000572D4275684A563761586D71d0\n   \n2.4   86.6    0.0    1.2 \n0.0  1.6    0.0   18.0   0  39\nc3t60A98000572D4275684A5638364D644Ed0\n  \n11.4  545.8    0.1  126.7  0.0\n54.4    0.0   97.7   0  97\nc3t60A98000572D4275684A56385468434Fd0\n                   \nextended device statistics             \n\n   \nr/s    w/s   Mr/s   Mw/s wait actv wsvc_t\nasvc_t  %w  %b device\n   \n3.6  769.0    0.0  123.2 29.4 182.9  \n38.1  236.7   5 220 c3\n   \n0.0  104.2    0.0    1.4  0.0\n34.3    0.0  329.0   0  46\nc3t60A98000572D4275684A563761586D71d0\n   \n1.0   77.0    0.0   13.1  0.0 \n8.1    0.0  103.2   0  77\nc3t60A98000572D4275684A5638364D644Ed0\n   \n2.6 \n587.8    0.0  108.8 29.4 140.5   49.9 \n238.0  41  98 c3t60A98000572D4275684A56385468434Fd0\n                   \nextended device\nstatistics             \n\n   \nr/s    w/s   Mr/s   Mw/s wait actv wsvc_t\nasvc_t  %w  %b device\n   \n9.4  761.2    0.1  133.1  3.3\n122.6    4.3  159.1   1 196 c3\n   \n0.0   33.8    0.0    0.3 \n0.0  2.1    0.0   63.5   0  
30\nc3t60A98000572D4275684A563761586D71d0\n   \n7.4   94.8    0.1    1.8  0.0\n16.2    0.0  158.6   0  66\nc3t60A98000572D4275684A5638364D644Ed0\n   \n2.0 \n632.6    0.0  131.0  3.3 104.3   \n5.2  164.3  10  99 c3t60A98000572D4275684A56385468434Fd0\n                   \nextended device\nstatistics             \n\n   \nr/s    w/s   Mr/s   Mw/s wait actv wsvc_t\nasvc_t  %w  %b device\n   \n2.8  588.2    0.0  126.0  0.0\n112.6    0.0  190.5   0 239 c3\n   \n0.0   25.0    0.0    0.2 \n0.0  1.8    0.0   72.3   0  52\nc3t60A98000572D4275684A563761586D71d0\n   \n0.0  157.4    0.0   12.0  0.0\n10.7    0.2   68.0   0  87\nc3t60A98000572D4275684A5638364D644Ed0\n   \n2.8  405.8    0.0  113.8  0.0\n100.1    0.0  244.9   0 100\nc3t60A98000572D4275684A56385468434Fd0\n \n \nThanks!\nDeb\n \n\nFrom: Rajesh Kumar Mallah\n[mailto:[email protected]] \nSent: Thursday, July 01, 2010 2:50 AM\nTo: Deborah Fuentes\nCc: [email protected]\nSubject: Re: [PERFORM] Extremely high CPU usage when building tables\n\n \nHi,\n\n1. Did you also check vmstat output , from sar output the i/o wait is not\nclear.\n2.  i gues you must be populating the database between creating tables and\ncreating\n     indexes. creating indexes require sorting of data that\nmay be cpu intensive, loading/populating\n     the data may saturate the i/o bandwidth . I think you\nshould check when the max cpu utilisation\n     is taking place exactly.\n\nregds\nRajesh Kumar Mallah.\n\nOn Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes <[email protected]> wrote:\nHello,\n\nWhen I run an SQL to create new tables and indexes is when Postgres consumes\nall CPU and impacts other users on the server.\n\nWe are running Postgres 8.3.7 on a Sun M5000 with 2 x quad core CPUs (16\nthreads) running Solaris 10.\n\nI've attached the sar data at the time of the run- here's a snip-it below.\n\nAny ideas would be greatly appreciated.\n\nThanks!\nDeb\n\n****************************************************\n\nHere, note the run queue, the left column. That is the number of processes\nwaiting to run. 
97 processes waiting to run at any time with only eight CPU\ncores looks very busy.\n\nroot@core2 # sar -q 5 500\n\nSunOS core2 5.10 Generic_142900-11 sun4u    06/17/2010\n\n12:01:50 runq-sz %runocc swpq-sz %swpocc\n12:01:55     1.8      80     0.0  \n    0\n12:02:00     1.0      20     0.0  \n    0\n12:02:05     1.0      20     0.0  \n    0\n12:02:10     0.0       0     0.0  \n    0\n12:02:15     0.0       0     0.0  \n    0\n12:02:21     3.3      50     0.0  \n    0\n12:02:26     1.0      20     0.0  \n    0\n12:02:31     1.0      60     0.0  \n    0\n12:02:36     1.0      20     0.0  \n    0\n12:02:42    27.0      50     0.0  \n    0\n12:02:49    32.8      83     0.0  \n    0\n12:02:55    76.0     100     0.0    \n  0\n12:03:01    66.1     100     0.0    \n  0\n12:03:07    43.8     100     0.0    \n  0\n12:03:13    52.0     100     0.0    \n  0\n12:03:19    91.2     100     0.0    \n  0\n12:03:26    97.8      83     0.0  \n    0\n12:03:33    63.7     100     0.0    \n  0\n12:03:39    67.4     100     0.0    \n  0\n12:03:47    41.5     100     0.0    \n  0\n12:03:53    82.0      83     0.0  \n    0\n12:03:59    88.7     100     0.0    \n  0\n12:04:06    87.7      50     0.0  \n    0\n12:04:12    41.3     100     0.0    \n  0\n12:04:17    94.3      50     0.0  \n    0\n12:04:22     1.0      20     0.0  \n    0\n12:04:27     3.3      60     0.0  \n    0\n12:04:32     1.0      20     0.0  \n    0\n12:04:38     0.0       0     0.0  \n    0\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 2 Jul 2010 19:07:58 +0000", "msg_from": "Deborah Fuentes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely high CPU usage when building tables" }, { "msg_contents": "Dear Deb,\n\ni feel three distinct steps are present\n1. CREATE TABLE\n2. LOAding of data in tables\n3. Creation of indexes\n\nIt is still not clear when you are seeing your system becoming unresponsive\nto\nother application. Is it during loading of data or creation of indexes?\n\n1. can you give any idea about how much data you are loading ? rows count or\nGB data etc\n2. how many indexes are you creation ?\n\nregds\nRajesh Kumar Mallah.\n\n Dear Deb,i feel three distinct steps are present1. CREATE TABLE2. LOAding of data in tables3. Creation of indexesIt is still not clear when you are seeing your system becoming unresponsive to\nother application. Is it during loading of data or creation of indexes?1. can you give any idea about how much data you are loading ? rows count or GB data etc2. how many indexes are you creation ?regds\nRajesh Kumar Mallah.", "msg_date": "Sat, 3 Jul 2010 01:23:03 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely high CPU usage when building tables" }, { "msg_contents": "Rajesh,\n\nWe are not loading any data. There are only two steps present:\n\n\n1. Create tables - 1127\n\n2. Create indexes - approximately 7000\n\nThe CPU spikes immediately when the tables are being created.\n\nRegards,\nDeb\n\nFrom: Rajesh Kumar Mallah [mailto:[email protected]]\nSent: Friday, July 02, 2010 12:53 PM\nTo: Deborah Fuentes\nCc: [email protected]\nSubject: Re: [PERFORM] Extremely high CPU usage when building tables\n\n Dear Deb,\n\ni feel three distinct steps are present\n1. CREATE TABLE\n2. LOAding of data in tables\n3. 
Creation of indexes\n\nIt is still not clear when you are seeing your system becoming unresponsive to\nother application. Is it during loading of data or creation of indexes?\n\n1. can you give any idea about how much data you are loading ? rows count or GB data etc\n2. how many indexes are you creation ?\n\nregds\nRajesh Kumar Mallah.\n\n\n\n\n\n\n\n\n\n\nRajesh,\n \nWe are not loading any data. There are only two steps present:\n \n1.      \nCreate tables –  1127\n2.      \nCreate indexes – approximately 7000\n \nThe CPU spikes immediately when the tables are being created.\n \nRegards,\nDeb\n \n\nFrom: Rajesh Kumar Mallah\n[mailto:[email protected]] \nSent: Friday, July 02, 2010 12:53 PM\nTo: Deborah Fuentes\nCc: [email protected]\nSubject: Re: [PERFORM] Extremely high CPU usage when building tables\n\n \n Dear Deb,\n\ni feel three distinct steps are present\n1. CREATE TABLE\n2. LOAding of data in tables\n3. Creation of indexes\n\nIt is still not clear when you are seeing your system becoming unresponsive to\nother application. Is it during loading of data or creation of indexes?\n\n1. can you give any idea about how much data you are loading ? rows count or GB\ndata etc\n2. how many indexes are you creation ?\n\nregds\nRajesh Kumar Mallah.", "msg_date": "Tue, 6 Jul 2010 15:59:23 +0000", "msg_from": "Deborah Fuentes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely high CPU usage when building tables" }, { "msg_contents": "Deborah Fuentes <[email protected]> wrote:\n \n> 1. Create tables - 1127\n> \n> 2. Create indexes - approximately 7000\n \nWhat does your postgresql.conf look like (excluding all comments)?\n \nHow many connections are you using to create these tables and indexes?\n \nWhat else is running on the machine?\n \n-Kevin\n", "msg_date": "Tue, 06 Jul 2010 12:43:29 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely high CPU usage when building tables" } ]
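Kevin's two follow-up questions in the thread above (the non-comment contents of postgresql.conf, and how many connections are in use while the DDL script runs) can both be answered from SQL; these are standard system views, valid on the 8.3 server described here as well as on current releases:

-- roughly postgresql.conf minus the comments: every setting not at its default
SELECT name, setting, source
  FROM pg_settings
 WHERE source <> 'default'
 ORDER BY name;

-- connections currently open, per database, while the table/index creation runs
SELECT datname, count(*) AS connections
  FROM pg_stat_activity
 GROUP BY datname
 ORDER BY connections DESC;

Note the first query also picks up settings changed via ALTER DATABASE/ROLE or the command line, not just the config file, so treat it as an approximation of the file contents.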
[ { "msg_contents": "<[email protected]> wrote:\n \n> With the dBase and ISAM tables I have a good idea of how to handle\n> them since I have been working with them since dBASE originally\n> came out.\n \nAh, someone with whom I can reminisce about CP/M and WordStar? :-)\n \n> With the PostgreSQL type tables I am not so certain how the data\n> is arranged within the one file. Does having the data all in one\n> database allow PostgreSQL to better utilize indexes and caches or\n> does having a number of smaller databases provide performance\n> increases? In case it is important, there are 2000 clients\n> involved, so that would be 2000 databases if I followed my current\n> FoxPro related structure.\n \nWell, there are many options here. You could have:\n - one PostgreSQL cluster for each client,\n - one database for each client (all in one cluster),\n - one schema for each client (all in one database), or\n - a client_id column in each table to segregate data.\n \nThe first would probably be a maintenance nightmare; it's just\nlisted for completeness. The cluster is the level at which you\nstart and stop the database engine, do real-time backups through the\ndatabase transaction logging, etc. You probably don't want to do\nthat individually for each of 2,000 clients, I'm assuming. Besides\nthat, each cluster has its own memory cache, which would probably be\na problem for you. (The caching issues go away for all the\nfollowing options.)\n \nThe database is the level at which you can get a connection. You\ncan see some cluster-level resources within all databases, like the\nlist of databases and the list of users, but for the most part, each\ndatabase is independent, even though they're running in the same\nexecutable engine. It would be relatively easy to keep the whole\ncluster (all databases) backed up (especially after 9.0 is release\nthis summer), and you could have a cluster on another machine for\nstandby, if desired. You are able to do dumps of individual\ndatabases, but only as snapshots of a moment in time or through\nexternal tools. It's hard to efficiently join data from a table in\none database to a table in another.\n \nA schema is a logical separation within a database. Table\nclient1.account is a different table from client2.account. While a\nuser can be limited to tables within a single schema, a user with\nrights to all the tables can join between them as needed. You could\nput common reference data in a public schema which all users could\naccess in addition to their private schemas.\n \nThe implications of putting multiple clients in a table, with a\nclient's rows identified by a client_id column, are probably fairly\nobvious. If many of those 2,000 clients have tables with millions of\nrows, performance could suffer without very careful indexing,\nmanaging tables with billions of rows can become challenging, and\nthere could be concerns about how to ensure that data from one\nclient isn't accidentally shown to another.\n \nHopefully that's enough to allow you to make a good choice. If any\nof that wasn't clear, please ask.\n \n-Kevin\n", "msg_date": "Fri, 25 Jun 2010 17:28:55 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "On 6/25/10 3:28 PM, Kevin Grittner wrote:\n> <[email protected]> wrote:\n>> With the PostgreSQL type tables I am not so certain how the data\n>> is arranged within the one file. 
Does having the data all in one\n>> database allow PostgreSQL to better utilize indexes and caches or\n>> does having a number of smaller databases provide performance\n>> increases? In case it is important, there are 2000 clients\n>> involved, so that would be 2000 databases if I followed my current\n>> FoxPro related structure.\n>\n> The implications of putting multiple clients in a table, with a\n> client's rows identified by a client_id column, are probably fairly\n> obvious. If many of those 2,000 clients have tables with millions of\n> rows, performance could suffer without very careful indexing,\n> managing tables with billions of rows can become challenging, and\n> there could be concerns about how to ensure that data from one\n> client isn't accidentally shown to another.\n\nYou should also ask whether there are social (that is, nontechncal) reasons to avoid multiple clients per table.\n\nWhen a customer asks about security and you tell them, \"You get your own database, nobody else can log in,\" they tend to like that. If you tell them that their data is mixed with everyone else's, but \"we've done a really good job with our app software and we're pretty sure there are no bugs that would let anyone see your data,\" that may not fly.\n\nPeople will trust Postgres security (assuming you actually do it right) because it's an open source, trusted product used by some really big companies. But your own app? Do you even trust it?\n\nEven if your application IS secure, it may not matter. It's what the customer believes or worries about that can sell your product.\n\nWe've also found another really good reason for separate databases. It lets you experiment without any impact on anything else. We have scripts that can create a database in just a few minutes, load it up, and have it ready to demo in just a few minutes. If we don't end up using it, we just blow it off and its gone. No other database is impacted at all.\n\nCraig\n", "msg_date": "Fri, 25 Jun 2010 16:30:55 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "Kevin Grittner wrote:\n> A schema is a logical separation within a database. Table\n> client1.account is a different table from client2.account. While a\n> user can be limited to tables within a single schema, a user with\n> rights to all the tables can join between them as needed. You could\n> put common reference data in a public schema which all users could\n> access in addition to their private schemas\n\nMy guess would be that this app will end up being best split by schema. \nI wonder whether it *also* needs to be split by database, too. 2000 \nclusters is clearly a nightmare, and putting all the client data into \none big table has both performance and security issues; that leaves \ndatabase and schema as possible splits. However, having 2000 databases \nin a cluster is probably too many; having 2000 schemas in a database \nmight also be too many. There are downsides to expanding either of \nthose to such a high quantity.\n\nIn order to keep both those in the domain where they perform well and \nare managable, it may be that what's needed is, say, 50 databases with \n40 schemas each, rather than 2000 of either. Hard to say the ideal \nratio. 
However, I think that at the application design level, it would \nbe wise to consider each client as having a database+schema pair unique \nto them, and with the assumption some shared data may need to be \nreplicated to all the databases in the cluster. Then it's possible to \nshift the trade-off around as needed once the app is built. Building \nthat level of flexibility in shouldn't be too hard if it's in the design \nfrom day one, but it would be painful bit of refactoring to do later. \nOnce there's a prototype, then some benchmark work running that app \ncould be done to figure out the correct ratio between the two. It might \neven make sense to consider full scalability from day one and make the \nunique client connection info host:port:database:schema.\n\nP.S. Very refreshing to get asked about this before rather than after a \ngiant app that doesn't perform well is deployed.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 25 Jun 2010 20:02:51 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" }, { "msg_contents": "Interesting point you made about the read to write ratio of 1 to 15.\nHow frequently will you be adding new entities or in the case of storing the\ncustomers in one database table, how frequently will you be adding new\nobjects of a certain entity type. How many entity types do you foresee\nexisting? i.e. \"Customer?\" Will Customer have subtypes or is a Customer the\nsingle entity in the database?\nHow frequent and for how long are write operations and are they heavily\ntransaction based? Will you need to support complex reporting in the\nfuture? What is the max number of customers? And how much data\n(approximate) will a single customer record consume in bytes? At what rate\ndoes it grow? (in bytes)\nWill your system need to support any type of complex reporting in the future\n(despite it being write intensive)?\n\nI'd take a look at memcached, plproxy, pgpool, and some of the other cool\nstuff in the postgresql community.\nAt a minimum, it might help you architect the system in such a manner that\nyou don't box yourself in.\nLast, KV stores for heavy write intensive operations in distributed\nenvironments are certainly interesting - a hybrid solution could work.\n\nSounds like a fun project!\n\nBryan\n\n\n\nOn Fri, Jun 25, 2010 at 7:02 PM, Greg Smith <[email protected]> wrote:\n\n> Kevin Grittner wrote:\n>\n>> A schema is a logical separation within a database. Table\n>> client1.account is a different table from client2.account. While a\n>> user can be limited to tables within a single schema, a user with\n>> rights to all the tables can join between them as needed. You could\n>> put common reference data in a public schema which all users could\n>> access in addition to their private schemas\n>>\n>\n> My guess would be that this app will end up being best split by schema. I\n> wonder whether it *also* needs to be split by database, too. 2000 clusters\n> is clearly a nightmare, and putting all the client data into one big table\n> has both performance and security issues; that leaves database and schema as\n> possible splits. However, having 2000 databases in a cluster is probably\n> too many; having 2000 schemas in a database might also be too many. 
There\n> are downsides to expanding either of those to such a high quantity.\n>\n> In order to keep both those in the domain where they perform well and are\n> managable, it may be that what's needed is, say, 50 databases with 40\n> schemas each, rather than 2000 of either. Hard to say the ideal ratio.\n> However, I think that at the application design level, it would be wise to\n> consider each client as having a database+schema pair unique to them, and\n> with the assumption some shared data may need to be replicated to all the\n> databases in the cluster. Then it's possible to shift the trade-off around\n> as needed once the app is built. Building that level of flexibility in\n> shouldn't be too hard if it's in the design from day one, but it would be\n> painful bit of refactoring to do later. Once there's a prototype, then some\n> benchmark work running that app could be done to figure out the correct\n> ratio between the two. It might even make sense to consider full\n> scalability from day one and make the unique client connection info\n> host:port:database:schema.\n>\n> P.S. Very refreshing to get asked about this before rather than after a\n> giant app that doesn't perform well is deployed.\n>\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nInteresting point you made about the read to write ratio of 1 to 15.How frequently will you be adding new entities or in the case of storing the customers in one database table, how frequently will you be adding new objects of a certain entity type. How many entity types do you foresee existing? i.e. \"Customer?\" Will Customer have subtypes or is a Customer the single entity in the database?\nHow frequent and for how long are write operations and are they heavily transaction based?  Will you need to support complex reporting in the future?   What is the max number of customers?  And how much data (approximate) will a single customer record consume in bytes?   At what rate does it grow? (in bytes)\nWill your system need to support any type of complex reporting in the future (despite it being write intensive)?I'd take a look at memcached, plproxy, pgpool, and some of the other cool stuff in the postgresql community.\nAt a minimum, it might help you architect the system in such a manner that you don't box yourself in.Last, KV stores for heavy write intensive operations in distributed environments are certainly interesting - a hybrid solution could work.\nSounds like a fun project!BryanOn Fri, Jun 25, 2010 at 7:02 PM, Greg Smith <[email protected]> wrote:\nKevin Grittner wrote:\n\nA schema is a logical separation within a database.  Table\nclient1.account is a different table from client2.account.  While a\nuser can be limited to tables within a single schema, a user with\nrights to all the tables can join between them as needed.  You could\nput common reference data in a public schema which all users could\naccess in addition to their private schemas\n\n\nMy guess would be that this app will end up being best split by schema.  I wonder whether it *also* needs to be split by database, too.  2000 clusters is clearly a nightmare, and putting all the client data into one big table has both performance and security issues; that leaves database and schema as possible splits.  
However, having 2000 databases in a cluster is probably too many; having 2000 schemas in a database might also be too many.  There are downsides to expanding either of those to such a high quantity.\n\nIn order to keep both those in the domain where they perform well and are managable, it may be that what's needed is, say, 50 databases with 40 schemas each, rather than 2000 of either.  Hard to say the ideal ratio.  However, I think that at the application design level, it would be wise to consider each client as having a database+schema pair unique to them, and with the assumption some shared data may need to be replicated to all the databases in the cluster.  Then it's possible to shift the trade-off around as needed once the app is built.  Building that level of flexibility in shouldn't be too hard if it's in the design from day one, but it would be painful bit of refactoring to do later.  Once there's a prototype, then some benchmark work running that app could be done to figure out the correct ratio between the two.  It might even make sense to consider full scalability from day one and make the unique client connection info host:port:database:schema.\n\nP.S. Very refreshing to get asked about this before rather than after a giant app that doesn't perform well is deployed.\n\n-- \nGreg Smith  2ndQuadrant US  Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected]   www.2ndQuadrant.us\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 25 Jun 2010 20:35:06 -0500", "msg_from": "Bryan Hinton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architecting a database" } ]
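The thread above converges on a schema-per-client layout inside a shared database. A minimal sketch of that arrangement follows; every schema, table, role, and password name here is invented for illustration, and only the overall pattern comes from the discussion:

    -- One schema per client, with shared reference data kept in public.
    CREATE SCHEMA client0001;

    CREATE TABLE client0001.account (
        account_id serial PRIMARY KEY,
        name       text NOT NULL
    );

    -- A per-client role confined to its own schema (plus public for
    -- shared lookup tables), so one client's role never sees another
    -- client's tables.
    CREATE ROLE client0001_user LOGIN PASSWORD 'change_me';
    GRANT USAGE ON SCHEMA client0001 TO client0001_user;
    GRANT SELECT, INSERT, UPDATE, DELETE ON client0001.account TO client0001_user;
    ALTER ROLE client0001_user SET search_path = client0001, public;

Repeating the same DDL per client keeps client1.account and client2.account as separate tables, while a reporting role with rights on all schemas can still join across them, which is the trade-off Kevin and Greg describe.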
[ { "msg_contents": "Dear List,\n\njust by removing the order by co_name reduces the query time dramatically\nfrom ~ 9 sec to 63 ms. Can anyone please help.\n\nRegds\nRajesh Kumar Mallah.\n\n\nexplain analyze SELECT * from ( SELECT\na.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name\nfrom general.catalogs a join general.profile_master using(profile_id) where\n1=1 and co_name_vec @@ to_tsquery('manufacturer') and b.co_name is not\nnull and a.ifmain is true ) as c order by co_name\nlimit 25 offset 0;\n\n\nLimit (cost=0.00..3659.13 rows=25 width=129) (actual time=721.075..9241.105\nrows=25 loops=1)\n -> Nested Loop (cost=0.00..1215772.28 rows=8307 width=476) (actual\ntime=721.073..9241.050 rows=25 loops=1)\n -> Nested Loop (cost=0.00..1208212.37 rows=8307 width=476)\n(actual time=721.052..9240.037 rows=25 loops=1)\n -> Nested Loop (cost=0.00..1204206.26 rows=6968 width=472)\n(actual time=721.032..9239.516 rows=25 loops=1)\n -> Nested Loop (cost=0.00..1154549.19 rows=6968\nwidth=471) (actual time=721.012..9236.523 rows=25 loops=1)\n -> Index Scan using profile_master_co_name on\nprofile_master b (cost=0.00..1125295.59 rows=6968 width=25) (actual\ntime=0.097..9193.154 rows=2212 loops=1)\n Filter: ((co_name IS NOT NULL) AND\n((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)))\n -> Index Scan using\ncatalog_master_profile_id_fkindex on catalog_master (cost=0.00..4.19 rows=1\nwidth=446) (actual time=0.016..0.016 rows=0 loops=2212)\n Index Cond: (catalog_master.profile_id =\nb.profile_id)\n Filter: ((catalog_master.hide IS FALSE) AND\n((catalog_master.hosting_status)::text = 'ACTIVE'::text))\n -> Index Scan using profile_master_profile_id_pkey on\nprofile_master (cost=0.00..7.11 rows=1 width=9) (actual time=0.105..0.105\nrows=1 loops=25)\n Index Cond: (profile_master.profile_id =\ncatalog_master.profile_id)\n -> Index Scan using\ncatalog_categories_pkey_catalog_id_category_id on catalog_categories\n(cost=0.00..0.56 rows=1 width=8) (actual time=0.014..0.015 rows=1 loops=25)\n Index Cond: (catalog_categories.catalog_id =\ncatalog_master.catalog_id)\n Filter: (catalog_categories.ifmain IS TRUE)\n -> Index Scan using web_category_master_pkey on\nweb_category_master (cost=0.00..0.90 rows=1 width=4) (actual\ntime=0.034..0.036 rows=1 loops=25)\n Index Cond: (web_category_master.category_id =\ncatalog_categories.category_id)\n Filter: ((web_category_master.status)::text = 'ACTIVE'::text)\nTotal runtime: 9241.304 ms\n\nexplain analyze SELECT * from ( SELECT\na.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name\nfrom general.catalogs a join general.profile_master b using(profile_id)\nwhere 1=1 and co_name_vec @@ to_tsquery('manufacturer') and b.co_name\nis not null and a.ifmain is true ) as c limit 25 offset 0;\n\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------\n Limit (cost=0.00..358.85 rows=25 width=476) (actual time=0.680..63.176\nrows=25 loops=1)\n -> Nested Loop (cost=0.00..119238.58 rows=8307 width=476) (actual\ntime=0.677..63.139 rows=25 loops=1)\n -> Nested Loop (cost=0.00..111678.66 rows=8307 width=476) (actual\ntime=0.649..62.789 rows=25 loops=1)\n -> Nested Loop (cost=0.00..107672.56 rows=6968 width=472)\n(actual time=0.626..62.436 rows=25 loops=1)\n -> Nested Loop (cost=0.00..58015.49 rows=6968\nwidth=471) (actual time=0.606..62.013 rows=25 loops=1)\n -> Index Scan using profile_master_co_name_vec\non profile_master b (cost=0.00..28761.89 rows=6968 width=25) 
(actual\ntime=0.071..50.576 rows=1160 loops=1)\n Index Cond: ((co_name_vec)::tsvector @@\nto_tsquery('manufacturer'::text))\n Filter: (co_name IS NOT NULL)\n -> Index Scan using\ncatalog_master_profile_id_fkindex on catalog_master (cost=0.00..4.19 rows=1\nwidth=446) (actual time=0.008..0.008 rows=0 loops=1160)\n Index Cond: (catalog_master.profile_id =\nb.profile_id)\n Filter: ((catalog_master.hide IS FALSE) AND\n((catalog_master.hosting_status)::text = 'ACTIVE'::text))\n -> Index Scan using profile_master_profile_id_pkey on\nprofile_master (cost=0.00..7.11 rows=1 width=9) (actual time=0.012..0.012\nrows=1 loops=25)\n Index Cond: (profile_master.profile_id =\ncatalog_master.profile_id)\n -> Index Scan using\ncatalog_categories_pkey_catalog_id_category_id on catalog_categories\n(cost=0.00..0.56 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=25)\n Index Cond: (catalog_categories.catalog_id =\ncatalog_master.catalog_id)\n Filter: (catalog_categories.ifmain IS TRUE)\n -> Index Scan using web_category_master_pkey on\nweb_category_master (cost=0.00..0.90 rows=1 width=4) (actual\ntime=0.009..0.010 rows=1 loops=25)\n Index Cond: (web_category_master.category_id =\ncatalog_categories.category_id)\n Filter: ((web_category_master.status)::text = 'ACTIVE'::text)\n Total runtime: 63.378 ms\n\nDear List,just by removing the order by co_name reduces the query time dramaticallyfrom  ~ 9 sec  to 63 ms. Can anyone please help.RegdsRajesh Kumar Mallah.\nexplain analyze SELECT * from   ( SELECT  a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name  from general.catalogs a join general.profile_master using(profile_id) where  1=1  and co_name_vec @@   to_tsquery('manufacturer')  and  b.co_name is not null and a.ifmain is true ) as c order by co_name  \nlimit 25 offset 0;Limit  (cost=0.00..3659.13 rows=25 width=129) (actual time=721.075..9241.105 rows=25 loops=1)   ->  Nested Loop  (cost=0.00..1215772.28 rows=8307 width=476) (actual time=721.073..9241.050 rows=25 loops=1)\n         ->  Nested Loop  (cost=0.00..1208212.37 rows=8307 width=476) (actual time=721.052..9240.037 rows=25 loops=1)               ->  Nested Loop  (cost=0.00..1204206.26 rows=6968 width=472) (actual time=721.032..9239.516 rows=25 loops=1)\n                     ->  Nested Loop  (cost=0.00..1154549.19 rows=6968 width=471) (actual time=721.012..9236.523 rows=25 loops=1)                           ->  Index Scan using profile_master_co_name on profile_master b  (cost=0.00..1125295.59 rows=6968 width=25) (actual time=0.097..9193.154 rows=2212 loops=1)\n                                 Filter: ((co_name IS NOT NULL) AND ((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)))                           ->  Index Scan using catalog_master_profile_id_fkindex on catalog_master  (cost=0.00..4.19 rows=1 width=446) (actual time=0.016..0.016 rows=0 loops=2212)\n                                 Index Cond: (catalog_master.profile_id = b.profile_id)                                 Filter: ((catalog_master.hide IS FALSE) AND ((catalog_master.hosting_status)::text = 'ACTIVE'::text))\n                     ->  Index Scan using profile_master_profile_id_pkey on profile_master  (cost=0.00..7.11 rows=1 width=9) (actual time=0.105..0.105 rows=1 loops=25)                           Index Cond: (profile_master.profile_id = catalog_master.profile_id)\n               ->  Index Scan using catalog_categories_pkey_catalog_id_category_id on catalog_categories  (cost=0.00..0.56 rows=1 width=8) (actual time=0.014..0.015 rows=1 loops=25)  
                   Index Cond: (catalog_categories.catalog_id = catalog_master.catalog_id)\n                     Filter: (catalog_categories.ifmain IS TRUE)         ->  Index Scan using web_category_master_pkey on web_category_master  (cost=0.00..0.90 rows=1 width=4) (actual time=0.034..0.036 rows=1 loops=25)\n               Index Cond: (web_category_master.category_id = catalog_categories.category_id)               Filter: ((web_category_master.status)::text = 'ACTIVE'::text)Total runtime: 9241.304 msexplain analyze SELECT * from   ( SELECT  a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name  from general.catalogs a join general.profile_master b using(profile_id) where  1=1  and co_name_vec @@   to_tsquery('manufacturer')  and  b.co_name is not null and a.ifmain is true ) as c  limit 25 offset 0;\n                                                                                      QUERY PLAN                                                    ----------------------------------------------------------------------------------------------\n Limit  (cost=0.00..358.85 rows=25 width=476) (actual time=0.680..63.176 rows=25 loops=1)   ->  Nested Loop  (cost=0.00..119238.58 rows=8307 width=476) (actual time=0.677..63.139 rows=25 loops=1)         ->  Nested Loop  (cost=0.00..111678.66 rows=8307 width=476) (actual time=0.649..62.789 rows=25 loops=1)\n               ->  Nested Loop  (cost=0.00..107672.56 rows=6968 width=472) (actual time=0.626..62.436 rows=25 loops=1)                     ->  Nested Loop  (cost=0.00..58015.49 rows=6968 width=471) (actual time=0.606..62.013 rows=25 loops=1)\n                           ->  Index Scan using profile_master_co_name_vec on profile_master b  (cost=0.00..28761.89 rows=6968 width=25) (actual time=0.071..50.576 rows=1160 loops=1)                                 Index Cond: ((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text))\n                                 Filter: (co_name IS NOT NULL)                           ->  Index Scan using catalog_master_profile_id_fkindex on catalog_master  (cost=0.00..4.19 rows=1 width=446) (actual time=0.008..0.008 rows=0 loops=1160)\n                                 Index Cond: (catalog_master.profile_id = b.profile_id)                                 Filter: ((catalog_master.hide IS FALSE) AND ((catalog_master.hosting_status)::text = 'ACTIVE'::text))\n                     ->  Index Scan using profile_master_profile_id_pkey on profile_master  (cost=0.00..7.11 rows=1 width=9) (actual time=0.012..0.012 rows=1 loops=25)                           Index Cond: (profile_master.profile_id = catalog_master.profile_id)\n               ->  Index Scan using catalog_categories_pkey_catalog_id_category_id on catalog_categories  (cost=0.00..0.56 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=25)                     Index Cond: (catalog_categories.catalog_id = catalog_master.catalog_id)\n                     Filter: (catalog_categories.ifmain IS TRUE)         ->  Index Scan using web_category_master_pkey on web_category_master  (cost=0.00..0.90 rows=1 width=4) (actual time=0.009..0.010 rows=1 loops=25)\n               Index Cond: (web_category_master.category_id = catalog_categories.category_id)               Filter: ((web_category_master.status)::text = 'ACTIVE'::text) Total runtime: 63.378 ms", "msg_date": "Mon, 28 Jun 2010 16:38:41 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "order by slowing down a query by 80 times" }, { "msg_contents": 
"Rajesh Kumar Mallah wrote:\n> Dear List,\n>\n> just by removing the order by co_name reduces the query time dramatically\n> from ~ 9 sec to 63 ms. Can anyone please help.\nThe 63 ms query result is probably useless since it returns a limit of \n25 rows from an unordered result. It is not surprising that this is fast.\n\nThe pain is here:\nIndex Scan using profile_master_co_name on profile_master b \n(cost=0.00..1125295.59 rows=6968 width=25) (actual time=0.097..9193.154 \nrows=2212 loops=1)\n Filter: ((co_name IS NOT NULL) AND \n((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)))\n\n\nIt looks like seq_scans are disabled, since the index scan has only a \nfilter expression but not an index cond.\n\nregards,\nYeb Havinga\n\n>\n> Regds\n> Rajesh Kumar Mallah.\n>\n>\n> explain analyze SELECT * from ( SELECT \n> a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name \n> from general.catalogs a join general.profile_master using(profile_id) \n> where 1=1 and co_name_vec @@ to_tsquery('manufacturer') and \n> b.co_name is not null and a.ifmain is true ) as c order by co_name \n> limit 25 offset 0;\n>\n>\n> Limit (cost=0.00..3659.13 rows=25 width=129) (actual \n> time=721.075..9241.105 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..1215772.28 rows=8307 width=476) \n> (actual time=721.073..9241.050 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..1208212.37 rows=8307 width=476) \n> (actual time=721.052..9240.037 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..1204206.26 rows=6968 \n> width=472) (actual time=721.032..9239.516 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..1154549.19 rows=6968 \n> width=471) (actual time=721.012..9236.523 rows=25 loops=1)\n> -> Index Scan using profile_master_co_name \n> on profile_master b (cost=0.00..1125295.59 rows=6968 width=25) \n> (actual time=0.097..9193.154 rows=2212 loops=1)\n> Filter: ((co_name IS NOT NULL) AND \n> ((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)))\n> -> Index Scan using \n> catalog_master_profile_id_fkindex on catalog_master (cost=0.00..4.19 \n> rows=1 width=446) (actual time=0.016..0.016 rows=0 loops=2212)\n> Index Cond: \n> (catalog_master.profile_id = b.profile_id)\n> Filter: ((catalog_master.hide IS \n> FALSE) AND ((catalog_master.hosting_status)::text = 'ACTIVE'::text))\n> -> Index Scan using \n> profile_master_profile_id_pkey on profile_master (cost=0.00..7.11 \n> rows=1 width=9) (actual time=0.105..0.105 rows=1 loops=25)\n> Index Cond: (profile_master.profile_id = \n> catalog_master.profile_id)\n> -> Index Scan using \n> catalog_categories_pkey_catalog_id_category_id on catalog_categories \n> (cost=0.00..0.56 rows=1 width=8) (actual time=0.014..0.015 rows=1 \n> loops=25)\n> Index Cond: (catalog_categories.catalog_id = \n> catalog_master.catalog_id)\n> Filter: (catalog_categories.ifmain IS TRUE)\n> -> Index Scan using web_category_master_pkey on \n> web_category_master (cost=0.00..0.90 rows=1 width=4) (actual \n> time=0.034..0.036 rows=1 loops=25)\n> Index Cond: (web_category_master.category_id = \n> catalog_categories.category_id)\n> Filter: ((web_category_master.status)::text = \n> 'ACTIVE'::text)\n> Total runtime: 9241.304 ms\n>\n> explain analyze SELECT * from ( SELECT \n> a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name \n> from general.catalogs a join general.profile_master b \n> using(profile_id) where 1=1 and co_name_vec @@ \n> to_tsquery('manufacturer') and b.co_name is not null and a.ifmain is \n> true ) as c limit 25 offset 0;\n> \n> QUERY PLAN 
\n>\n> ----------------------------------------------------------------------------------------------\n> Limit (cost=0.00..358.85 rows=25 width=476) (actual \n> time=0.680..63.176 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..119238.58 rows=8307 width=476) (actual \n> time=0.677..63.139 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..111678.66 rows=8307 width=476) \n> (actual time=0.649..62.789 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..107672.56 rows=6968 \n> width=472) (actual time=0.626..62.436 rows=25 loops=1)\n> -> Nested Loop (cost=0.00..58015.49 rows=6968 \n> width=471) (actual time=0.606..62.013 rows=25 loops=1)\n> -> Index Scan using \n> profile_master_co_name_vec on profile_master b (cost=0.00..28761.89 \n> rows=6968 width=25) (actual time=0.071..50.576 rows=1160 loops=1)\n> Index Cond: ((co_name_vec)::tsvector \n> @@ to_tsquery('manufacturer'::text))\n> Filter: (co_name IS NOT NULL)\n> -> Index Scan using \n> catalog_master_profile_id_fkindex on catalog_master (cost=0.00..4.19 \n> rows=1 width=446) (actual time=0.008..0.008 rows=0 loops=1160)\n> Index Cond: \n> (catalog_master.profile_id = b.profile_id)\n> Filter: ((catalog_master.hide IS \n> FALSE) AND ((catalog_master.hosting_status)::text = 'ACTIVE'::text))\n> -> Index Scan using \n> profile_master_profile_id_pkey on profile_master (cost=0.00..7.11 \n> rows=1 width=9) (actual time=0.012..0.012 rows=1 loops=25)\n> Index Cond: (profile_master.profile_id = \n> catalog_master.profile_id)\n> -> Index Scan using \n> catalog_categories_pkey_catalog_id_category_id on catalog_categories \n> (cost=0.00..0.56 rows=1 width=8) (actual time=0.010..0.011 rows=1 \n> loops=25)\n> Index Cond: (catalog_categories.catalog_id = \n> catalog_master.catalog_id)\n> Filter: (catalog_categories.ifmain IS TRUE)\n> -> Index Scan using web_category_master_pkey on \n> web_category_master (cost=0.00..0.90 rows=1 width=4) (actual \n> time=0.009..0.010 rows=1 loops=25)\n> Index Cond: (web_category_master.category_id = \n> catalog_categories.category_id)\n> Filter: ((web_category_master.status)::text = \n> 'ACTIVE'::text)\n> Total runtime: 63.378 ms\n>\n>\n\n", "msg_date": "Mon, 28 Jun 2010 13:39:27 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by slowing down a query by 80 times" }, { "msg_contents": "On Monday 28 June 2010 13:39:27 Yeb Havinga wrote:\n> It looks like seq_scans are disabled, since the index scan has only a \n> filter expression but not an index cond.\nOr its using it to get an ordered result...\n\nAndres\n", "msg_date": "Mon, 28 Jun 2010 15:26:34 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by slowing down a query by 80 times" }, { "msg_contents": "On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga <[email protected]> wrote:\n\n> Rajesh Kumar Mallah wrote:\n>\n>> Dear List,\n>>\n>> just by removing the order by co_name reduces the query time dramatically\n>> from ~ 9 sec to 63 ms. Can anyone please help.\n>>\n> The 63 ms query result is probably useless since it returns a limit of 25\n> rows from an unordered result. 
It is not surprising that this is fast.\n>\n> The pain is here:\n>\n> Index Scan using profile_master_co_name on profile_master b\n> (cost=0.00..1125295.59 rows=6968 width=25) (actual time=0.097..9193.154\n> rows=2212 loops=1)\n> Filter: ((co_name IS NOT NULL) AND\n> ((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)))\n>\n>\n> It looks like seq_scans are disabled, since the index scan has only a\n> filter expression but not an index cond.\n>\n\n\nseq_scans is NOT explicitly disabled. The two queries just differed in the\norder by clause.\n\nregds\nRajesh Kumar Mallah.\n\n\n>\n> regards,\n> Yeb Havinga\n>\n>\n>\n>> Regds\n>> Rajesh Kumar Mallah.\n>>\n>\n\nOn Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga <[email protected]> wrote:\nRajesh Kumar Mallah wrote:\n\nDear List,\n\njust by removing the order by co_name reduces the query time dramatically\nfrom  ~ 9 sec  to 63 ms. Can anyone please help.\n\nThe 63 ms query result is probably useless since it returns a limit of 25 rows from an unordered result. It is not surprising that this is fast.\n\nThe pain is here:\nIndex Scan using profile_master_co_name on profile_master b  (cost=0.00..1125295.59 rows=6968 width=25) (actual time=0.097..9193.154 rows=2212 loops=1)\n                                Filter: ((co_name IS NOT NULL) AND ((co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)))\n\n\nIt looks like seq_scans are disabled, since the index scan has only a filter expression but not an index cond.seq_scans is NOT explicitly disabled. The two queries just differed in the order by clause.\nregdsRajesh Kumar Mallah. \n\nregards,\nYeb Havinga\n\n\n\nRegds\nRajesh Kumar Mallah.", "msg_date": "Mon, 28 Jun 2010 19:11:58 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: order by slowing down a query by 80 times" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> wrote:\n \n> just by removing the order by co_name reduces the query time\n> dramatically from ~ 9 sec to 63 ms. Can anyone please help.\n \nThe reason is that one query allows it to return *any* 25 rows,\nwhile the other query requires it to find a *specific* set of 25\nrows. It happens to be faster to just grab any old set of rows than\nto find particular ones.\n \nIf the actual results you need are the ones sorted by name, then\nforget the other query and focus on how you can retrieve the desired\nresults more quickly. One valuable piece of information would be to\nknow how many rows the query would return without the limit. It's\nalso possible that your costing factors may need adjustment. Or you\nmay need to get finer-grained statistics -- the optimizer thought it\nwould save time to use an index in the sequence you wanted, but it\nhad to scan through 2212 rows to find 25 rows which matched the\nselection criteria. It might well have been faster to use a table\nscan and sort than to follow the index like that.\n \n-Kevin\n", "msg_date": "Mon, 28 Jun 2010 08:52:25 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by slowing down a query by 80 times" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Rajesh Kumar Mallah <[email protected]> wrote:\n>> just by removing the order by co_name reduces the query time\n>> dramatically from ~ 9 sec to 63 ms. Can anyone please help.\n \n> The reason is that one query allows it to return *any* 25 rows,\n> while the other query requires it to find a *specific* set of 25\n> rows. 
It happens to be faster to just grab any old set of rows than\n> to find particular ones.\n\nI'm guessing that most of the cost is in repeated evaluations of the\nfilter clause\n\t(co_name_vec)::tsvector @@ to_tsquery('manufacturer'::text)\n\nThere are two extremely expensive functions involved there (cast to\ntsvector and to_tsquery) and they're being done over again at what\nI suspect is practically every table row. The unordered query is\nso much faster because it stops after having evaluated the text\nsearch clause just a few times.\n\nThe way to make this go faster is to set up the actually recommended\ninfrastructure for full text search, namely create an index on\n(co_name_vec)::tsvector (either directly or using an auxiliary tsvector\ncolumn). If you don't want to maintain such an index, fine, but don't\nexpect full text search queries to be quick.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Jun 2010 10:17:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by slowing down a query by 80 times " }, { "msg_contents": "Dear Tom/Kevin/List\n\nthanks for the insight, i will check the suggestion more closely and post\nthe results.\n\nregds\nRajesh Kumar Mallah.\n\nDear Tom/Kevin/List thanks for the insight, i will check the suggestion more closely and postthe results.regdsRajesh Kumar Mallah.", "msg_date": "Tue, 29 Jun 2010 00:33:09 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: order by slowing down a query by 80 times" }, { "msg_contents": "The way to make this go faster is to set up the actually recommended\n> infrastructure for full text search, namely create an index on\n> (co_name_vec)::tsvector (either directly or using an auxiliary tsvector\n> column). If you don't want to maintain such an index, fine, but don't\n> expect full text search queries to be quick.\n>\n> regards, tom lane\n>\n\n\n\nDear Tom/List ,\n\nco_name_vec is actually the auxiliary tsvector column that is mantained via\na\nan update trigger. and the index that you suggested is there . consider\nsimplified\nversion. 
When we order by co_name the index on co_name_vec is not used\nsome other index is used.\n\n tradein_clients=> explain analyze SELECT profile_id from\ngeneral.profile_master b where 1=1 and co_name_vec @@ to_tsquery\n('manufacturer') order by co_name limit 25;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3958.48 rows=25 width=25) (actual time=0.045..19.847\nrows=25 loops=1)\n -> Index Scan using profile_master_co_name on profile_master b\n(cost=0.00..1125315.59 rows=7107 width=25) (actual time=0.043..19.818\nrows=25 loops=1)\n Filter: ((co_name_vec)::tsvector @@\nto_tsquery('manufacturer'::text))\n Total runtime: 19.894 ms\n(4 rows)\n\ntradein_clients=> explain analyze SELECT profile_id from\ngeneral.profile_master b where 1=1 and co_name_vec @@ to_tsquery\n('manufacturer') limit 25;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..101.18 rows=25 width=4) (actual time=0.051..0.632\nrows=25 loops=1)\n -> Index Scan using profile_master_co_name_vec on profile_master b\n(cost=0.00..28761.89 rows=7107 width=4) (actual time=0.049..0.593 rows=25\nloops=1)\n Index Cond: ((co_name_vec)::tsvector @@\nto_tsquery('manufacturer'::text))\n Total runtime: 0.666 ms\n(4 rows)\n\ntradein_clients=>\n\n\nThe way to make this go faster is to set up the actually recommended\ninfrastructure for full text search, namely create an index on\n(co_name_vec)::tsvector (either directly or using an auxiliary tsvector\ncolumn).  If you don't want to maintain such an index, fine, but don't\nexpect full text search queries to be quick.\n\n                        regards, tom lane Dear Tom/List ,co_name_vec is actually the auxiliary tsvector column that is mantained via aan update trigger. and the index that you suggested is there . consider simplified\nversion. When we  order by co_name the index on co_name_vec is not usedsome other index is used. 
tradein_clients=> explain analyze SELECT  profile_id from  \ngeneral.profile_master b  where  1=1  and co_name_vec @@   to_tsquery \n('manufacturer')   order by co_name  limit 25;\n                                                                        \nQUERY PLAN                                                         \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..3958.48 rows=25 width=25) (actual time=0.045..19.847\n rows=25 loops=1)\n   ->  Index Scan using profile_master_co_name on profile_master b  \n(cost=0.00..1125315.59 rows=7107 width=25) (actual time=0.043..19.818 \nrows=25 loops=1)\n         Filter: ((co_name_vec)::tsvector @@ \nto_tsquery('manufacturer'::text))\n Total runtime: 19.894 ms\n(4 rows)\n\ntradein_clients=> explain analyze SELECT  profile_id from  \ngeneral.profile_master b  where  1=1  and co_name_vec @@   to_tsquery \n('manufacturer')    limit 25;\n                                                                        \nQUERY PLAN                                                         \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..101.18 rows=25 width=4) (actual time=0.051..0.632 \nrows=25 loops=1)\n   ->  Index Scan using profile_master_co_name_vec on profile_master \nb  (cost=0.00..28761.89 rows=7107 width=4) (actual time=0.049..0.593 \nrows=25 loops=1)\n         Index Cond: ((co_name_vec)::tsvector @@ \nto_tsquery('manufacturer'::text))\n Total runtime: 0.666 ms\n(4 rows)\n\ntradein_clients=>", "msg_date": "Tue, 29 Jun 2010 01:26:50 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: order by slowing down a query by 80 times" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> co_name_vec is actually the auxiliary tsvector column that is mantained via\n> a\n> an update trigger. and the index that you suggested is there .\n\nWell, in that case it's just a costing/statistics issue. The planner is\nprobably estimating there are more tsvector matches than there really\nare, which causes it to think the in-order indexscan will terminate\nearlier than it really will, so it goes for that instead of a full scan\nand sort. If this is 8.4 then increasing the statistics target for the\nco_name_vec column should help that. In previous versions I'm not sure\nhow much you can do about it other than raise random_page_cost, which is\nlikely to be a net loss overall.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Jun 2010 16:18:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by slowing down a query by 80 times " } ]
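Tom Lane's two suggestions in this thread can be written out as follows; the index shown is the expression form his first reply recommends (the later messages indicate an equivalent index already exists), and the statistics change is the 8.4 option from his final reply. The statistics target of 1000 is an arbitrary illustrative value:

    -- GIN index on the same expression the query filters on, so the
    -- @@ to_tsquery(...) condition can become an index condition
    -- rather than a per-row filter under the ordered scan.
    CREATE INDEX profile_master_co_name_vec
        ON general.profile_master
        USING gin (((co_name_vec)::tsvector));

    -- Finer-grained statistics help the planner estimate how many rows
    -- match 'manufacturer' before it commits to the index scan ordered
    -- by co_name.
    ALTER TABLE general.profile_master
        ALTER COLUMN co_name_vec SET STATISTICS 1000;
    ANALYZE general.profile_master;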
[ { "msg_contents": "Hi,\n\n \n\nWe are using postgresql-8.4.0 on 64-bit Linux machine (open-SUSE 11.x).\nIt's a master/slave deployment & slony-2.0.4.rc2 is used for DB\nreplication on the slave box.\n\n \n\nAt times we have observed that postgres stops responding for several\nminutes, even couldn't fetch the number of entries in a particular\ntable. One such instance happens when we execute the following steps:\n\n- Add few lakh entries (~20) to table X on the master DB.\n\n- After addition, slony starts replication on the slave DB. It\ntakes several minutes (~25 mins) for replication to finish.\n\n- During this time (while replication is in progress), sometimes\npostgres stops responding, i.e. we couldn't even fetch the number of\nentries in any table (X, Y, etc).\n\n \n\nCan you please let us know what could the reason for such a behavior and\nhow it can be fixed/improved.\n\n \n\nRegards,\n\nSachin\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nWe are using postgresql-8.4.0 on 64-bit Linux machine (open-SUSE 11.x).\nIt’s a master/slave deployment & slony-2.0.4.rc2 is used for DB\nreplication on the slave box.\n \nAt times we have observed that postgres stops responding for several minutes,\neven couldn’t fetch the number of entries in a particular table. One such\ninstance happens when we execute the following steps:\n-        \nAdd\nfew lakh entries (~20) to table X on the master DB.\n-        \nAfter\naddition, slony starts replication on the slave DB. It takes several minutes\n(~25 mins) for replication to finish.\n-        \nDuring\nthis time (while replication is in progress), sometimes postgres stops responding, i.e. we\ncouldn’t even fetch the number of entries in any table (X, Y, etc).\n \nCan you please let us know what could\nthe reason for such a behavior and how it can be fixed/improved.\n \nRegards,\nSachin", "msg_date": "Tue, 29 Jun 2010 12:31:18 +0530", "msg_from": "\"Sachin Kumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues with postgresql-8.4.0" }, { "msg_contents": "On 29/06/10 15:01, Sachin Kumar wrote:\n\n> At times we have observed that postgres stops responding for several\n> minutes, even couldn't fetch the number of entries in a particular\n> table.\n\nQuick guess: checkpoints. Enable checkpoint logging, follow the logs,\nsee if there's any correspondance.\n\nIn general, looking at the logs might help you identify the issue.\n\n One such instance happens when we execute the following steps:\n> \n> - Add few lakh entries (~20) to table X on the master DB.\n> \n> - After addition, slony starts replication on the slave DB. It\n> takes several minutes (~25 mins) for replication to finish.\n> \n> - During this time (while replication is in progress), sometimes\n> postgres stops responding, i.e. we couldn't even fetch the number of\n> entries in any table (X, Y, etc).\n\nFetching the number of entries in a table - using count(...) - is\nactually a rather expensive operation, and a poor choice if you just\nwant to see if the server is responsive.\n\n SELECT id FROM tablename LIMIT 1;\n\nwhere \"id\" is the primary key of the table would be a better option.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 04 Jul 2010 20:38:36 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with postgresql-8.4.0" } ]
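Craig Ringer's two suggestions above translate to something like the following; the table and column names are placeholders for whichever table is being checked:

    -- In postgresql.conf: log_checkpoints = on
    -- then reload the configuration so checkpoint timing shows up in the logs:
    SELECT pg_reload_conf();

    -- Cheap responsiveness probe: reads a single row instead of the whole table.
    SELECT some_pk_column FROM some_table LIMIT 1;

    -- Avoid count(*) as a health check; it scans the whole table and will
    -- itself appear to hang on a large or busy table.

If the slow periods line up with "checkpoint starting"/"checkpoint complete" lines in the log, checkpoint tuning (and the Slony sync load) is the place to look next.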
[ { "msg_contents": "I've been reading this list for a couple of weeks, so I've got some\nsense of what you folks are likely to recommend, but I'm curious what\nis considered an ideal storage solution if building a database system\nfrom scratch. I just got an exploratory call from my boss, asking\nwhat my preference would be, and I didn't really have a great answer\nready. Budget is certainly NOT unlimited, but with the right\njustification, I don't need to pinch pennies, either.\n\nThe workload:\n\nIt is a combination of OLTP and data warehouse, but the OLTP workload\nis trivially light. All of the load comes from the constant data\ninsertion and reporting queries over the that data. Inserts are all\nperformed via COPY. The dataset size is kept small at the moment via\nvery aggressive aggregation and then dropping older, more granular\ndata but I'd like to be able to expand the quantity of data that I\nkeep at each level of aggregation. Inserts are currently occurring at\na rate of about 85,000 rows per minute, executed via 3 copy statements\nof about 50000, 30000, and 5000 rows each into 3 different tables. The\ncopy statements execute in a small fraction of the minute in which\nthey occur. I don't have timings handy, but no more than a couple of\nseconds.\n\nAll fact tables are partitioned over time. Data comes into the db\nalready aggregated by minute. I keep 2 hours of minute scale data in\na table partitioned by hour. Once per hour, the previous hour of data\nis aggregated up into an hour scale table. I keep 2 days of hour\nscale data in a table partitioned by day. Once per day, that gets\naggregated up into a day scale table that is partitioned by month. We\nkeep 2 months of day scale data. Once per month, that gets aggregated\nup into a month scale table and that data is kept indefinitely, at the\nmoment, but may eventually be limited to 3 years or so. All old data\ngets removed by dropping older partitions. There are no updates at\nall.\n\nMost reporting is done from the 2 hours of minute scale data and 2\nmonths of day scale data tables, which are 4 million and 47 million\nrows, respectively. I'm not sure the partitioning gets us much, other\nthan making removal of old data much more efficient, since queries are\nusually over the most recent 60 minutes and most recent 30 days, so\ntend to involve both partitions to some degree in every query except\nin the last minute and last day of each time period. We haven't put a\nlot of effort into tuning the queries since the dataset was MUCH\nsmaller, so there is likely some work to be done just in tuning the\nsystem as it stands, but queries are definitely taking longer than\nwe'd like them to, and we expect the volume of data coming into the db\nto grow in coming months. 
Ideally, I'd also like to be keeping a much\nlonger series of minute scale data, since that is the data most useful\nfor diagnosing problems in the run time system that is generating the\ndata, though we may still limit queries on that data set to the last 2\nhours.\n\nI inherited the hardware and have changed absolutely nothing to date.\n\nCurrent hardware -\nLooks like I've got 4 of these on the host:\n# cat /proc/cpuinfo\nprocessor\t: 0\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 15\nmodel name\t: Intel(R) Xeon(R) CPU 5110 @ 1.60GHz\nstepping\t: 6\ncpu MHz\t\t: 1600.002\ncache size\t: 4096 KB\nphysical id\t: 0\nsiblings\t: 2\ncore id\t\t: 0\ncpu cores\t: 2\napicid\t\t: 0\ninitial apicid\t: 0\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 10\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat\npse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm\nconstant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl\nvmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow\nbogomips\t: 3192.31\nclflush size\t: 64\ncache_alignment\t: 64\naddress sizes\t: 36 bits physical, 48 bits virtual\n\nso that's 8 cores total\n\n8 GB of RAM, but it is capable of handling 128GB and I get no\nresistance when I suggest going to 64GB of RAM.\n\n6 internal drives on battery backed raid (I don't know what RAID level\n- is there a way to discover this?), all in a single filesystem, so\nWAL and data are on the same filesystem. I don't believe that we are\ntaking advantage of the battery backed controller, since I only see\nthis in /etc/fstab:\n\nUUID=12dcd71d-8aec-4253-815c-b4883195eeb8 / ext3\n defaults 1 1\n\nBut inserts are happening so rapidly that I don't imagine that getting\nrid of fsync is going to change performance of the reporting queries\ntoo dramatically.\n\nTotal available storage is 438GB. Database currently occupies 91GB on disk.\n\nSo my question is, what would be the recommended storage solution,\ngiven what I've said above? And at what kind of price points? I have\nno idea at what price point I'll start to get resistance at the\nmoment. It could be $10K, it could be 5 times that. I really hope it\nwon't be considerably less than that.\n", "msg_date": "Tue, 29 Jun 2010 14:00:43 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "ideal storage configuration" }, { "msg_contents": "Samuel Gendler <[email protected]> wrote:\n \n> queries are definitely taking longer than we'd like them to\n \n> Database currently occupies 91GB on disk.\n \n> I get no resistance when I suggest going to 64GB of RAM.\n \nOne thing that jumps out at me is that with a 91GB database, and no\npushback on buying 64GB of RAM, it may be possible to get enough RAM\nto keep the *active portion* of the database entirely in RAM. (By\n\"active portion\" I mean that part which is repeated accessed to run\nthese queries.) If you can do that, then your bottleneck is almost\ncertainly going to be CPU, so you want fast ones. I hear that the\nnewest Intel chips do really well on PostgreSQL benchmarks. You\nwant the highest speed cores you can get, with fast access to fast\nRAM, even if it means fewer cores. (Someone please jump in with\ndetails.)\n \nGiven your insert-only, partitioned data, and your heavy reporting,\nI would definitely try to get to what I describe above; your disk\nsystem might not be where you need to spend the money. 
Of course, I\nwould still get a good RAID controller with BBU cache; I just don't\nthink you need to worry a whole lot about boosting your spindle\ncount.\n \n-Kevin\n", "msg_date": "Tue, 29 Jun 2010 16:30:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ideal storage configuration" }, { "msg_contents": "On Tue, 29 Jun 2010, Samuel Gendler wrote:\n> The copy statements execute in a small fraction of the minute in which \n> they occur.\n\nI'm going to ask a silly question here. If the system is already coping \nquite well with the load, then why are you changing it?\n\n> All old data gets removed by dropping older partitions. There are no \n> updates at all.\n\nThat's good.\n\n> 6 internal drives on battery backed raid (I don't know what RAID level\n> - is there a way to discover this?), all in a single filesystem, so\n> WAL and data are on the same filesystem. I don't believe that we are\n> taking advantage of the battery backed controller, since I only see\n> this in /etc/fstab:\n>\n> UUID=12dcd71d-8aec-4253-815c-b4883195eeb8 / ext3\n> defaults 1 1\n\nThat doesn't have anything to do with whether or not the controller has a \nBBU cache. If the controller does have a BBU cache, then your writes will \nreturn quicker - and nothing else.\n\n> But inserts are happening so rapidly that I don't imagine that getting\n> rid of fsync is going to change performance of the reporting queries\n> too dramatically.\n\nDon't get rid of fsync, unless you want to lose your data. Especially with \nyour workload of large transactions, you do not need the benefit of \nreducing the transaction latency, and even that benefit is not present if \nyou have a BBU cache.\n\nIt seems like your current disc array is coping quite well with the write \ntraffic. If you really want to boost your read speeds for your reporting \nqueries, then increase the amount of RAM, as Kevin said, and see if you \ncan fit the active portion of the database into RAM.\n\nMatthew\n\n-- \n Nog: Look! They've made me into an ensign!\n O'Brien: I didn't know things were going so badly.\n Nog: Frightening, isn't it?\n", "msg_date": "Wed, 30 Jun 2010 10:56:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ideal storage configuration" }, { "msg_contents": "Samuel Gendler wrote:\n> 6 internal drives on battery backed raid (I don't know what RAID level\n> - is there a way to discover this?), all in a single filesystem, so\n> WAL and data are on the same filesystem. I don't believe that we are\n> taking advantage of the battery backed controller, since I only see\n> this in /etc/fstab:\n>\n> UUID=12dcd71d-8aec-4253-815c-b4883195eeb8 / ext3\n> defaults 1 1\n> \n\nThat doesn't tell you anything about whether the card's write cache is \nbeing used or not. What you need to do is install the monitoring tools \nprovided by the manufacturer of the card, which will let you check both \nthe RAID level and how the cache is setup. I can't give any more \nspecific instructions because you left out a vital piece of system \ninformation: what RAID card you actually have.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 30 Jun 2010 14:52:09 +0100", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ideal storage configuration" } ]
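The rollup scheme described at the top of this thread maps onto 8.x-style inheritance partitioning. The sketch below uses invented table, column, and partition names; only the shape (minute-scale facts partitioned by hour, retention enforced by dropping children) is taken from the post:

    CREATE TABLE metrics_minute (
        ts     timestamptz      NOT NULL,
        metric integer          NOT NULL,
        value  double precision
    );

    -- One child table per hour; the CHECK constraint lets the planner
    -- skip children outside the queried window when constraint_exclusion
    -- is enabled.
    CREATE TABLE metrics_minute_2010063000 (
        CHECK (ts >= '2010-06-30 00:00+00' AND ts < '2010-06-30 01:00+00')
    ) INHERITS (metrics_minute);

    -- COPY targets the current child directly, and expiring an hour of
    -- minute-scale data is a catalog operation instead of a bulk DELETE:
    -- DROP TABLE metrics_minute_2010062922;   (hypothetical partition past retention)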
[ { "msg_contents": "\nPlease tell me What is the best way to optimize this query\n\nselect\ns.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr\nas svcdescr from vwsubsmin s inner join packages p on s.svcno=p.pkgno inner\njoin\naccount a on a.actno=s.actno inner join ssgdom d on a.domno=d.domno inner\njoin (select subsno from\ngetexpiringsubs($1,cast($2 as integer),cast($3 as double precision), $4) as\n(subsno int,expirydt timestamp without time zone,balcpt double precision))\nas e on s.subsno=e.subsno where s.status <=15 and d.domno=$5 order by\nd.domname,s.expirydt,a.actname \n-- \nView this message in context: http://old.nabble.com/What-is-the-best-way-to-optimize-this-query-tp29041515p29041515.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 30 Jun 2010 22:19:14 -0700 (PDT)", "msg_from": "Srikanth Kata <[email protected]>", "msg_from_op": true, "msg_subject": "What is the best way to optimize this query" }, { "msg_contents": "Dear Sri,\n\nPlease post at least the Explain Analyze output . There is a nice posting\nguideline\nalso regarding on how to post query optimization questions.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nOn Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata <[email protected]>wrote:\n\n>\n> Please tell me What is the best way to optimize this query\n>\n> select\n> s.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr\n> as svcdescr from vwsubsmin s inner join packages p on s.svcno=p.pkgno inner\n> join\n> account a on a.actno=s.actno inner join ssgdom d on a.domno=d.domno inner\n> join (select subsno from\n> getexpiringsubs($1,cast($2 as integer),cast($3 as double precision), $4) as\n> (subsno int,expirydt timestamp without time zone,balcpt double precision))\n> as e on s.subsno=e.subsno where s.status <=15 and d.domno=$5 order by\n> d.domname,s.expirydt,a.actname\n> --\n> View this message in context:\n> http://old.nabble.com/What-is-the-best-way-to-optimize-this-query-tp29041515p29041515.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nDear Sri,Please post at least  the Explain Analyze output . 
There is a nice posting guidelinealso regarding on how to post query optimization questions.http://wiki.postgresql.org/wiki/SlowQueryQuestions\nOn Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata <[email protected]> wrote:\n\nPlease tell me What is the best way to optimize this query\n\nselect\ns.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr\nas svcdescr from vwsubsmin s inner join packages p on s.svcno=p.pkgno inner\njoin\naccount a on a.actno=s.actno inner join ssgdom d on a.domno=d.domno inner\njoin (select subsno from\ngetexpiringsubs($1,cast($2 as integer),cast($3 as double precision), $4) as\n(subsno int,expirydt timestamp without time zone,balcpt double precision))\nas e on s.subsno=e.subsno where s.status <=15 and d.domno=$5 order by\nd.domname,s.expirydt,a.actname\n--\nView this message in context: http://old.nabble.com/What-is-the-best-way-to-optimize-this-query-tp29041515p29041515.html\n\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 1 Jul 2010 18:10:18 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to optimize this query" }, { "msg_contents": "On 1 July 2010 06:19, Srikanth Kata <[email protected]> wrote:\n>\n> Please tell me What is the best way to optimize this query\n>\n> select\n> s.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr\n> as svcdescr from vwsubsmin s inner join packages p on s.svcno=p.pkgno inner\n> join\n> account a on a.actno=s.actno inner join ssgdom d on a.domno=d.domno inner\n> join (select subsno from\n> getexpiringsubs($1,cast($2 as integer),cast($3 as double precision), $4) as\n> (subsno int,expirydt timestamp without time zone,balcpt double precision))\n> as e on s.subsno=e.subsno where s.status <=15 and d.domno=$5 order by\n> d.domname,s.expirydt,a.actname\n> --\n> View this message in context: http://old.nabble.com/What-is-the-best-way-to-optimize-this-query-tp29041515p29041515.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n\nMight help if the query were a bit more readable too:\n\nselect\n\ts.*,\n\ta.actid,\n\ta.phone,\n\td.domid,\n\td.domname,\n\td.domno,\n\ta.actno,\n\ta.actname,\n\tp.descr as svcdescr\nfrom\n\tvwsubsmin s\ninner join\n\tpackages p\n\ton s.svcno=p.pkgno inner\njoin\n\taccount a\n\ton a.actno=s.actno\ninner join\n\tssgdom d on a.domno=d.domno\ninner join\n\t(select\n\t\tsubsno\n\tfrom\n\t\tgetexpiringsubs(\n\t\t\t$1,\n\t\t\tcast($2 as integer),\n\t\t\tcast($3 as double precision),\n\t\t\t$4\n\t\t\t) as\n\t\t\t(subsno int,\n\t\t\texpirydt timestamp without time zone,\n\t\t\tbalcpt double precision)\n\t) as e\n\ton s.subsno=e.subsno\nwhere\n\ts.status <=15\nand\n\td.domno=$5\norder by\n\td.domname,\n\ts.expirydt,\n\ta.actname;\n\nAnd it would also help if the table names, column names and aliases\nwere more self-documenting.\n\nAs Rajesh said, an EXPLAIN ANALYZE output is needed, as we don't yet\nknow where your indexes are.\n\nRegards\n\nThom\n", "msg_date": "Thu, 1 Jul 2010 13:51:24 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to optimize this query" } ]
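As both replies point out, tuning advice needs the actual plan and the existing indexes. Assuming the table names in the post are real tables, the catalog query below lists their indexes, and the plan itself comes from running the statement with literal values in place of $1..$5:

    -- Indexes currently defined on the joined tables:
    SELECT tablename, indexname, indexdef
      FROM pg_indexes
     WHERE tablename IN ('packages', 'account', 'ssgdom');

    -- EXPLAIN ANALYZE <the full statement from the post, with real
    -- parameter values substituted>, posted per the wiki guideline.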
[ { "msg_contents": "Hi,\n\nthis is not really a performance question , sorry if its bit irrelevant\nto be posted here. We have a development environment and we want\nto optimize the non-database parts of the application. The problem is\nthat subsequent run of queries are execute very fast and makes the\nperformance analysis a trivial problem. We want that the subsequent runs\nof query should take similar times as the first run so that we can work\non the optimizing the calling patterns to the database.\n\nregds\nRajesh Kumar Mallah.\n\nHi,this is not really a performance question , sorry if its bit irrelevant to be posted here. We have a development environment and we want to optimize the non-database parts of the application. The problem is \nthat subsequent run of  queries are execute very fast and makes the performance analysis a trivial problem. We want that the subsequent runs of query should take similar times as the first run so that we can work \non the optimizing the calling patterns to the database.regdsRajesh Kumar Mallah.", "msg_date": "Thu, 1 Jul 2010 15:11:10 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "how to (temporarily) disable/minimize benefits of disk block cache or\n\tpostgresql shared buffer" }, { "msg_contents": "On 01/07/10 17:41, Rajesh Kumar Mallah wrote:\n> Hi,\n> \n> this is not really a performance question , sorry if its bit irrelevant\n> to be posted here. We have a development environment and we want\n> to optimize the non-database parts of the application. The problem is\n> that subsequent run of queries are execute very fast and makes the\n> performance analysis a trivial problem. We want that the subsequent runs\n> of query should take similar times as the first run so that we can work\n> on the optimizing the calling patterns to the database.\n\nYou can get rid of PostgreSQL's caches in shared_buffers by restarting\nthe PostgreSQL server. I don't know if there's any more convenient way.\nAlternately, just set a really minimal shared_buffers that's just enough\nfor your connections so there's not much room for cached data.\n\nIf you are running a Linux server (as you didn't mention what you're\nrunning on) you can drop the OS disk cache quite easily:\n\n http://linux-mm.org/Drop_Caches\n http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n\nAFAIK for most other platforms you have to use a tool that gobbles\nmemory to force caches out. On Windows, most of those garbage tools that\nclaim to \"free\" memory do this - it's about the only time you'd ever\nwant to use one, since they do such horrid things to performance.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 02 Jul 2010 00:37:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to (temporarily) disable/minimize benefits of disk\n\tblock cache or \tpostgresql shared buffer" }, { "msg_contents": "On Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer\n<[email protected]>wrote:\n\n> On 01/07/10 17:41, Rajesh Kumar Mallah wrote:\n> > Hi,\n> >\n> > this is not really a performance question , sorry if its bit irrelevant\n> > to be posted here. We have a development environment and we want\n> > to optimize the non-database parts of the application. The problem is\n> > that subsequent run of queries are execute very fast and makes the\n> > performance analysis a trivial problem. 
We want that the subsequent runs\n> > of query should take similar times as the first run so that we can work\n> > on the optimizing the calling patterns to the database.\n>\n> You can get rid of PostgreSQL's caches in shared_buffers by restarting\n> the PostgreSQL server. I don't know if there's any more convenient way.\n> Alternately, just set a really minimal shared_buffers that's just enough\n> for your connections so there's not much room for cached data.\n>\n> I had set it to 128kb\nit does not really work , i even tried your next suggestion. I am in\nvirtualized\nenvironment particularly OpenVz. where echo 3 > /proc/sys/vm/drop_caches\ndoes not work inside the virtual container, i did it in the hardware node\nbut still does not give desired result.\nregds\nRajesh Kumar Mallah.\n\n\n\n> If you are running a Linux server (as you didn't mention what you're\n> running on) you can drop the OS disk cache quite easily:\n>\n> http://linux-mm.org/Drop_Caches\n> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n>\n> AFAIK for most other platforms you have to use a tool that gobbles\n> memory to force caches out. On Windows, most of those garbage tools that\n> claim to \"free\" memory do this - it's about the only time you'd ever\n> want to use one, since they do such horrid things to performance.\n>\n> --\n> Craig Ringer\n>\n\nOn Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer <[email protected]> wrote:\nOn 01/07/10 17:41, Rajesh Kumar Mallah wrote:\n> Hi,\n>\n> this is not really a performance question , sorry if its bit irrelevant\n> to be posted here. We have a development environment and we want\n> to optimize the non-database parts of the application. The problem is\n> that subsequent run of  queries are execute very fast and makes the\n> performance analysis a trivial problem. We want that the subsequent runs\n> of query should take similar times as the first run so that we can work\n> on the optimizing the calling patterns to the database.\n\nYou can get rid of PostgreSQL's caches in shared_buffers by restarting\nthe PostgreSQL server. I don't know if there's any more convenient way.\nAlternately, just set a really minimal shared_buffers that's just enough\nfor your connections so there's not much room for cached data.\nI had set it to 128kbit does not really work , i even tried your next suggestion. I am in virtualized environment particularly OpenVz. where echo 3 > /proc/sys/vm/drop_cachesdoes not work inside the virtual container, i did it in the hardware node \nbut still does not give desired result.regdsRajesh Kumar Mallah. \nIf you are running a Linux server (as you didn't mention what you're\nrunning on) you can drop the OS disk cache quite easily:\n\n  http://linux-mm.org/Drop_Caches\n  http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n\nAFAIK for most other platforms you have to use a tool that gobbles\nmemory to force caches out. On Windows, most of those garbage tools that\nclaim to \"free\" memory do this - it's about the only time you'd ever\nwant to use one, since they do such horrid things to performance.\n\n--\nCraig Ringer", "msg_date": "Thu, 1 Jul 2010 23:29:20 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to (temporarily) disable/minimize benefits of disk\n\tblock cache or postgresql shared buffer" }, { "msg_contents": "On 02/07/10 01:59, Rajesh Kumar Mallah wrote:\n\n> I had set it to 128kb\n> it does not really work , i even tried your next suggestion. 
I am in\n> virtualized\n> environment particularly OpenVz. where echo 3 > /proc/sys/vm/drop_caches\n> does not work inside the virtual container, i did it in the hardware node\n> but still does not give desired result.\n\nYeah, if you're in a weird virtualized environment like that you're\nlikely to have problems, because caching can be done at multiple levels.\nIn the case of OpenVZ, it's hard to know what the \"guest\" and what the\n\"host\" even is sometimes, and I wouldn't trust it to respect things like\nthe Linux VM cache management.\n\nYou might have to fall back on the classic method: a program that tries\nto allocate as much RAM as it can. On Linux this is EXTREMELY unsafe\nunless you ensure you have vm overcommit disabled (see the postgresql\ndocs) because by default Linux systems will never fail a memory\nallocation - instead they'll go looking for a big process to kill to\nfree some memory. In theory this should be your memory gobbler program,\nbut in reality the OOM killer isn't so predictable.\n\nSo: try turning vm overcommit off, then writing (or finding) a simple\nprogram that keeps on malloc()ing memory until an allocation call fails.\nThat should force any caches out, freeing you for another cold run.\n\nNote that this method won't just force out the obvious caches like\npostgresql data files. It also forces out things like caches of running\nbinaries. Things will grind to an absolute crawl for a minute or two\nbefore resuming normal speed, because *everything* has to come back from\ndisk at once. The same is true of using /proc/sys/vm/drop_caches to drop\nall caches.\n\nI guess, in the end, nothing really subtitutes for a good reboot.\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n", "msg_date": "Fri, 02 Jul 2010 11:00:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to (temporarily) disable/minimize benefits of disk\n\tblock cache or postgresql shared buffer" }, { "msg_contents": "Dear Criag,\n\nThanks for thinking about it.I do not understand why u feel OpenVz is weird.\nat the most its not very popular. But lets not get into that debate as its\nnot\nthe proper forum. From your reply i understand that there is not a easy and\nclean way of doing it. Since performance related profiling requires multiple\niterations it is not feasible to reboot the machine. I think i will try to\nprofile\nmy code using new and unique input parameters in each iteration, this shall\nroughly serve my purpose.\n\nOn Fri, Jul 2, 2010 at 8:30 AM, Craig Ringer <[email protected]>wrote:\n\n> On 02/07/10 01:59, Rajesh Kumar Mallah wrote:\n>\n> > I had set it to 128kb\n> > it does not really work , i even tried your next suggestion. I am in\n> > virtualized\n> > environment particularly OpenVz. where echo 3 > /proc/sys/vm/drop_caches\n> > does not work inside the virtual container, i did it in the hardware node\n> > but still does not give desired result.\n>\n> Yeah, if you're in a weird virtualized environment like that you're\n> likely to have problems, because caching can be done at multiple levels.\n> In the case of OpenVZ, it's hard to know what the \"guest\" and what the\n> \"host\" even is sometimes, and I wouldn't trust it to respect things like\n> the Linux VM cache management.\n>\n> You might have to fall back on the classic method: a program that tries\n> to allocate as much RAM as it can. 
On Linux this is EXTREMELY unsafe\n> unless you ensure you have vm overcommit disabled (see the postgresql\n> docs) because by default Linux systems will never fail a memory\n> allocation - instead they'll go looking for a big process to kill to\n> free some memory. In theory this should be your memory gobbler program,\n> but in reality the OOM killer isn't so predictable.\n>\n> So: try turning vm overcommit off, then writing (or finding) a simple\n> program that keeps on malloc()ing memory until an allocation call fails.\n> That should force any caches out, freeing you for another cold run.\n>\n> Note that this method won't just force out the obvious caches like\n> postgresql data files. It also forces out things like caches of running\n> binaries. Things will grind to an absolute crawl for a minute or two\n> before resuming normal speed, because *everything* has to come back from\n> disk at once. The same is true of using /proc/sys/vm/drop_caches to drop\n> all caches.\n>\n> I guess, in the end, nothing really subtitutes for a good reboot.\n>\n> --\n> Craig Ringer\n>\n> Tech-related writing: http://soapyfrogs.blogspot.com/\n>\n\nDear Criag,Thanks for thinking about it.I do not understand why u feel OpenVz is weird.at the most its not very popular. But lets not get into that debate as its notthe proper forum. From your reply i understand that there is not a easy and\nclean way of doing it. Since performance related profiling requires multipleiterations it is not feasible to reboot the machine. I think i will try to profilemy code using new and unique input parameters in each iteration, this shall\nroughly serve my purpose.  On Fri, Jul 2, 2010 at 8:30 AM, Craig Ringer <[email protected]> wrote:\nOn 02/07/10 01:59, Rajesh Kumar Mallah wrote:\n\n> I had set it to 128kb\n> it does not really work , i even tried your next suggestion. I am in\n> virtualized\n> environment particularly OpenVz. where echo 3 > /proc/sys/vm/drop_caches\n> does not work inside the virtual container, i did it in the hardware node\n> but still does not give desired result.\n\nYeah, if you're in a weird virtualized environment like that you're\nlikely to have problems, because caching can be done at multiple levels.\nIn the case of OpenVZ, it's hard to know what the \"guest\" and what the\n\"host\" even is sometimes, and I wouldn't trust it to respect things like\nthe Linux VM cache management.\n\nYou might have to fall back on the classic method: a program that tries\nto allocate as much RAM as it can. On Linux this is EXTREMELY unsafe\nunless you ensure you have vm overcommit disabled (see the postgresql\ndocs) because by default Linux systems will never fail a memory\nallocation - instead they'll go looking for a big process to kill to\nfree some memory. In theory this should be your memory gobbler program,\nbut in reality the OOM killer isn't so predictable.\n\nSo: try turning vm overcommit off, then writing (or finding) a simple\nprogram that keeps on malloc()ing memory until an allocation call fails.\nThat should force any caches out, freeing you for another cold run.\n\nNote that this method won't just force out the obvious caches like\npostgresql data files. It also forces out things like caches of running\nbinaries. Things will grind to an absolute crawl for a minute or two\nbefore resuming normal speed, because *everything* has to come back from\ndisk at once. 
The same is true of using /proc/sys/vm/drop_caches to drop\nall caches.\n\nI guess, in the end, nothing really subtitutes for a good reboot.\n\n--\nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/", "msg_date": "Sat, 3 Jul 2010 00:52:33 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to (temporarily) disable/minimize benefits of disk\n\tblock cache or postgresql shared buffer" }, { "msg_contents": "> On Fri, Jul 2, 2010 at 8:30 AM, Craig Ringer <[email protected]>wrote:\n>> Yeah, if you're in a weird virtualized environment like that you're\n>> likely to have problems...\n\nOn Sat, 3 Jul 2010, Rajesh Kumar Mallah wrote:\n> Thanks for thinking about it.I do not understand why u feel OpenVz is weird.\n> at the most its not very popular.\n\nIt's not OpenVz that is wierd, but virtualisation in general. If you are \nrunning in a virtual machine, then all sorts of things will not run as \nwell as expected.\n\nMatthew\n\n-- \n Contrary to popular belief, Unix is user friendly. It just happens to be\n very selective about who its friends are. -- Kyle Hearn\n", "msg_date": "Mon, 5 Jul 2010 10:55:07 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to (temporarily) disable/minimize benefits of disk\n\tblock cache or postgresql shared buffer" } ]
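One thing that can be checked from SQL even inside a container, where dropping the OS page cache is not possible, is how much of a relation is already sitting in shared_buffers before a supposedly cold run. The rough sketch below assumes the contrib/pg_buffercache module has been installed in the database (an extra assumption, not something mentioned in the thread), and it says nothing about the OS page cache, which is the layer discussed above.

-- Rough sketch, assuming contrib/pg_buffercache is installed here.
-- Counts how many 8 kB buffers of each relation are currently cached in
-- shared_buffers, so you can tell whether a "first run" was really cold.
SELECT c.relname,
       count(*)     AS buffers,
       count(*) * 8 AS approx_kb
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;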
[ { "msg_contents": "Greetings,\n\nwe are running a few databases of currently 200GB (growing) in total for \ndata warehousing:\n- new data via INSERTs for (up to) millions of rows per day; sometimes \nwith UPDATEs\n- most data in a single table (=> 10 to 100s of millions of rows)\n- queries SELECT subsets of this table via an index\n- for effective parallelisation, queries create (potentially large) \nnon-temporary tables which are deleted at the end of the query => lots \nof simple INSERTs and SELECTs during queries\n- large transactions: they may contain millions of INSERTs/UPDATEs\n- running version PostgreSQL 8.4.2\n\nWe are moving all this to a larger system - the hardware is available, \ntherefore fixed:\n- Sun X4600 (16 cores, 64GB)\n- external SAS JBOD with 24 2,5\" slots:\n - 18x SAS 10k 146GB drives\n - 2x SAS 10k 73GB drives\n - 4x Intel SLC 32GB SATA SSD\n- JBOD connected to Adaptec SAS HBA with BBU\n- Internal storage via on-board RAID HBA:\n - 2x 73GB SAS 10k for OS (RAID1)\n - 2x Intel SLC 32GB SATA SSD for ZIL (RAID1) (?)\n- OS will be Solaris 10 to have ZFS as filesystem (and dtrace)\n- 10GigE towards client tier (currently, another X4600 with 32cores and \n64GB)\n\nWhat would be the optimal storage/ZFS layout for this? I checked \nsolarisinternals.com and some PostgreSQL resources and came to the \nfollowing concept - asking for your comments:\n- run the JBOD without HW-RAID, but let all redundancy be done by ZFS \nfor maximum flexibility\n- create separate ZFS pools for tablespaces (data, index, temp) and WAL \non separate devices (LUNs):\n- use the 4 SSDs in the JBOD as Level-2 ARC cache (can I use a single \ncache for all pools?) w/o redundancy\n- use the 2 SSDs connected to the on-board HBA as RAID1 for ZFS ZIL\n\nPotential issues that I see:\n- the ZFS ZIL will not benefit from a BBU (as it is connected to the \nbackplane, driven by the onboard-RAID), and might be too small (32GB for \n~2TB of data with lots of writes)?\n- the pools on the JBOD might have the wrong size for the tablespaces - \nlike: using the 2 73GB drives as RAID 1 for temp might become too small, \nbut adding a 146GB drive might not be a good idea?\n- with 20 spindles, does it make sense at all to use dedicated devices \nfor the tabelspaces, or will the load be distributed well enough across \nthe spindles anyway?\n\nthanks for any comments & suggestions,\n\n Joachim\n\n", "msg_date": "Thu, 01 Jul 2010 15:14:32 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "optimal ZFS filesystem on JBOD layout" }, { "msg_contents": "Joachim Worringen wrote:\n> Potential issues that I see:\n> - the ZFS ZIL will not benefit from a BBU (as it is connected to the \n> backplane, driven by the onboard-RAID), and might be too small (32GB \n> for ~2TB of data with lots of writes)?\n\nThis is a somewhat unpredictable setup. The conservative approach would \nbe to break two disks out of the larger array for the ZIL, running \nthrough the battery-backed cache, rather than using the SSD drives for \nthat. The best way IMHO to use SSD for PostgreSQL is to put your large \nindexes on it, so that even if the drive does the wrong thing when you \nwrite and the index gets corrupted you can always rebuild them rather \nthan suffer data loss. 
Also, index writes really benefit from being on \nsomething with low seek times, moreso than the ZIL or WAL.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 01 Jul 2010 15:15:16 +0100", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimal ZFS filesystem on JBOD layout" } ]
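To make the suggestion above concrete on the PostgreSQL side: putting the large indexes on the SSDs is just a matter of creating a tablespace on a filesystem (or ZFS dataset) backed by those drives and placing the indexes there. The path, tablespace name and index names below are purely illustrative, not taken from an actual layout.

-- Sketch only: '/ssdpool/pg_idx' stands for a hypothetical ZFS dataset
-- living on the SSDs; the index and table names are likewise made up.
CREATE TABLESPACE ssd_idx LOCATION '/ssdpool/pg_idx';

-- Move an existing large index onto the SSDs (it can always be rebuilt
-- with REINDEX if the drive misbehaves, unlike table or WAL data):
ALTER INDEX some_big_index SET TABLESPACE ssd_idx;

-- Or place new indexes there directly:
CREATE INDEX some_table_col_idx ON some_table (col) TABLESPACE ssd_idx;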
[ { "msg_contents": "Hi,\n\nI have quite a simple query but a lot of data and the SELECT query is\ntoo slow. I will be really grateful for any advice on this.\n\n--------------------------------------------------\nThe background info:\n\nI work on a speech search engine which differs from text search in\nhaving more words (hypotheses) on the same position and each\nhypothesis has some weight (probability) of occurrence.\n\nWhen a word 'hello' appears in a document 'lecture_1', there is a row\nin the table hyps (see below) which contains an array of all positions\nof word 'hello' in the document 'lecture_1' and for each position it\ncontains a weight as well.\n\nI need the positions to be able to search for phrases. However, here I\nsimplified the query as much as I could without a significant\nreduction in speed.\n\nI know there is tsearch extension which could be more appropriate for\nthis but I didn't try that yet. The size of my data will be the same\nwhich seems to be the issue in my case. But maybe I am wrong and with\ntsearch it will be much faster. What do you think?\n\n--------------------------------------------------\nPreconditions:\n\nFirst I cleared the disk cache:\n sync; sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'\n\nThen run the postgresql deamon and with psql client I connected to my\ndatabase. The first thing I did then was executing the SELECT query\ndescribed below. It took about 4.5 seconds. If I rerun it, it takes\nless than 2 miliseconds, but it is because of the cache. I need to\noptimize the first-run.\n\n--------------------------------------------------\nHardware:\n\nlaptop ASUS, CPU dual core T2300 1.66GHz, 1.5G RAM\n\n--------------------------------------------------\nVersion:\n\nPostgreSQL 8.4.4 on i686-pc-linux-gnu, compiled by GCC gcc (Ubuntu\n4.4.1-4ubuntu9) 4.4.1, 32-bit\n\ncompiled from sources, only --prefix=... argument given to ./configure\n\n--------------------------------------------------\nSchema:\n\nCREATE TABLE hyps (\n\tdocid INT,\n\twordid INT,\n\tpositions INT[],\n\tweights REAL[],\n\tlength INT,\n\ttotal_weight REAL\n);\nCOPY hyps FROM '/home/miso/exp/speech_search/postgresql/sqlcopy/all_weights_clustered.sqlcopy';\nCREATE INDEX hyps_wordid_index ON hyps USING hash (wordid);\nCREATE INDEX hyps_docid_index ON hyps USING hash (docid);\n\nshared_buffers = 300MB ...this is the only thing I changed in the config\n\nI tried that also with btree indices instead of hash and surprisingly\nthe SELECT query was a bit faster. 
I would expect hash to be faster.\n\nThe index on 'docid' column is there because I need to be able to\nsearch also in a particular document or in a set of documents.\n--------------------------------------------------\nTable info:\n\n- rows = 5490156\n- average length of positions vectors = 19.5\n- total number of items in positions vectors = 107444304\n- positions and weights in one row have the same number of items, but\nfor each row the number may differ.\n- table data are loaded only once (using COPY) and are not modified anymore\n- there are 369 various docid and 161460 various wordid\n- VACUUM was executed after COPY of data\n\n--------------------------------------------------\nQuery:\n\nEXPLAIN ANALYZE SELECT h1.docid\nFROM hyps AS h1\nWHERE h1.wordid=65658;\n\n Bitmap Heap Scan on hyps h1 (cost=10.97..677.09 rows=171 width=4)\n(actual time=62.106..4416.864 rows=343 loops=1)\n Recheck Cond: (wordid = 65658)\n -> Bitmap Index Scan on hyps_wordid_index (cost=0.00..10.92\nrows=171 width=0) (actual time=42.969..42.969 rows=343 loops=1)\n Index Cond: (wordid = 65658)\n Total runtime: 4432.015 ms\n\nThe result has 343 rows and there are 9294 items in positions vectors in total.\n\n--------------------------------------------------\nComparison with Lucene:\n\nIf I run the same query in Lucene search engine, it takes 0.105\nseconds on the same data which is quite a huge difference.\n\n--------------------------------------------------\nSynthetic data set:\n\nIf you want to try it yourself, here is a script which generates the\ndata for COPY command. I don't know whether it is possible to send\nattachments here, so I put the script inline. Just save it as\ncreate_synthetic_data.pl and run it by 'perl\ncreate_synthetic_data.pl'. With these synthetic data the SELECT query\ntimes are around 2.5 seconds. You can try the SELECT query with\n'wordid' equal 1, 2, 3, ...10000.\n\n\n#!/usr/bin/perl\n# Create synthetic data for PostgreSQL COPY.\n\n$rows = 5490156;\n$docs = 369;\n$words = 161460;\n$docid = 0;\n$wordid = 0;\n\nfor ($row=0; $row<$rows; $row++) {\n\n\tmy $sep = \"\";\n\tmy $positions = \"\";\n\tmy $weights = \"\";\n\tmy $total_weight = 0;\n\tmy $items = int(rand(39))+1;\n\n\tif ($row % int($rows/$docs) == 0) {\n\t\t$docid++;\n\t\t$wordid = 0;\n\t}\n\t$wordid++;\n\n\tfor ($i=0; $i<$items; $i++) {\n\t\t$position = int(rand(20000));\n\t\t$weight = rand(1);\n\t\t$positions .= $sep.$position;\n\t\t$weights .= $sep.sprintf(\"%.3f\", $weight);\n\t\t$total_weight += $weight;\n\t\t$sep = \",\";\n\t}\n\tprint \"$docid\\t$wordid\\t{$positions}\\t{$weights}\\t$items\\t$total_weight\\n\";\n}\n\n\nIf you need any other info, I will gladly provide it.\n\nThank You for Your time.\nMiso Fapso\n", "msg_date": "Fri, 2 Jul 2010 00:34:53 +0200", "msg_from": "Michal Fapso <[email protected]>", "msg_from_op": true, "msg_subject": "big data - slow select (speech search)" }, { "msg_contents": "I forgot to mention one thing. If you want to generate data using the\nperl script, do this:\n\n perl create_synthetic_data.pl > synthetic_data.sqlcopy\n\nand then after you create the 'hyps' table, use the COPY command with\nthe generated file:\n\n COPY hyps FROM '/the/full/path/synthetic_data.sqlcopy';\n\nBest regards,\nMiso Fapso\n\n\nOn 2 July 2010 00:34, Michal Fapso <[email protected]> wrote:\n> Hi,\n>\n> I have quite a simple query but a lot of data and the SELECT query is\n> too slow. 
I will be really grateful for any advice on this.\n>\n> --------------------------------------------------\n> The background info:\n>\n> I work on a speech search engine which differs from text search in\n> having more words (hypotheses) on the same position and each\n> hypothesis has some weight (probability) of occurrence.\n>\n> When a word 'hello' appears in a document 'lecture_1', there is a row\n> in the table hyps (see below) which contains an array of all positions\n> of word 'hello' in the document 'lecture_1' and for each position it\n> contains a weight as well.\n>\n> I need the positions to be able to search for phrases. However, here I\n> simplified the query as much as I could without a significant\n> reduction in speed.\n>\n> I know there is tsearch extension which could be more appropriate for\n> this but I didn't try that yet. The size of my data will be the same\n> which seems to be the issue in my case. But maybe I am wrong and with\n> tsearch it will be much faster. What do you think?\n>\n> --------------------------------------------------\n> Preconditions:\n>\n> First I cleared the disk cache:\n>  sync; sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'\n>\n> Then run the postgresql deamon and with psql client I connected to my\n> database. The first thing I did then was executing the SELECT query\n> described below. It took about 4.5 seconds. If I rerun it, it takes\n> less than 2 miliseconds, but it is because of the cache. I need to\n> optimize the first-run.\n>\n> --------------------------------------------------\n> Hardware:\n>\n> laptop ASUS, CPU dual core T2300 1.66GHz, 1.5G RAM\n>\n> --------------------------------------------------\n> Version:\n>\n> PostgreSQL 8.4.4 on i686-pc-linux-gnu, compiled by GCC gcc (Ubuntu\n> 4.4.1-4ubuntu9) 4.4.1, 32-bit\n>\n> compiled from sources, only --prefix=... argument given to ./configure\n>\n> --------------------------------------------------\n> Schema:\n>\n> CREATE TABLE hyps (\n>        docid INT,\n>        wordid INT,\n>        positions INT[],\n>        weights REAL[],\n>        length INT,\n>        total_weight REAL\n> );\n> COPY hyps FROM '/home/miso/exp/speech_search/postgresql/sqlcopy/all_weights_clustered.sqlcopy';\n> CREATE INDEX hyps_wordid_index ON hyps USING hash (wordid);\n> CREATE INDEX hyps_docid_index ON hyps USING hash (docid);\n>\n> shared_buffers = 300MB ...this is the only thing I changed in the config\n>\n> I tried that also with btree indices instead of hash and surprisingly\n> the SELECT query was a bit faster. 
I would expect hash to be faster.\n>\n> The index on 'docid' column is there because I need to be able to\n> search also in a particular document or in a set of documents.\n> --------------------------------------------------\n> Table info:\n>\n> - rows = 5490156\n> - average length of positions vectors = 19.5\n> - total number of items in positions vectors = 107444304\n> - positions and weights in one row have the same number of items, but\n> for each row the number may differ.\n> - table data are loaded only once (using COPY) and are not modified anymore\n> - there are 369 various docid and 161460 various wordid\n> - VACUUM was executed after COPY of data\n>\n> --------------------------------------------------\n> Query:\n>\n> EXPLAIN ANALYZE SELECT h1.docid\n> FROM hyps AS h1\n> WHERE h1.wordid=65658;\n>\n>  Bitmap Heap Scan on hyps h1  (cost=10.97..677.09 rows=171 width=4)\n> (actual time=62.106..4416.864 rows=343 loops=1)\n>   Recheck Cond: (wordid = 65658)\n>   ->  Bitmap Index Scan on hyps_wordid_index  (cost=0.00..10.92\n> rows=171 width=0) (actual time=42.969..42.969 rows=343 loops=1)\n>         Index Cond: (wordid = 65658)\n>  Total runtime: 4432.015 ms\n>\n> The result has 343 rows and there are 9294 items in positions vectors in total.\n>\n> --------------------------------------------------\n> Comparison with Lucene:\n>\n> If I run the same query in Lucene search engine, it takes 0.105\n> seconds on the same data which is quite a huge difference.\n>\n> --------------------------------------------------\n> Synthetic data set:\n>\n> If you want to try it yourself, here is a script which generates the\n> data for COPY command. I don't know whether it is possible to send\n> attachments here, so I put the script inline. Just save it as\n> create_synthetic_data.pl and run it by 'perl\n> create_synthetic_data.pl'. With these synthetic data the SELECT query\n> times are around 2.5 seconds. You can try the SELECT query with\n> 'wordid' equal 1, 2, 3, ...10000.\n>\n>\n> #!/usr/bin/perl\n> # Create synthetic data for PostgreSQL COPY.\n>\n> $rows = 5490156;\n> $docs = 369;\n> $words = 161460;\n> $docid = 0;\n> $wordid = 0;\n>\n> for ($row=0; $row<$rows; $row++) {\n>\n>        my $sep          = \"\";\n>        my $positions    = \"\";\n>        my $weights      = \"\";\n>        my $total_weight = 0;\n>        my $items        = int(rand(39))+1;\n>\n>        if ($row % int($rows/$docs) == 0) {\n>                $docid++;\n>                $wordid = 0;\n>        }\n>        $wordid++;\n>\n>        for ($i=0; $i<$items; $i++) {\n>                $position      = int(rand(20000));\n>                $weight        = rand(1);\n>                $positions    .= $sep.$position;\n>                $weights      .= $sep.sprintf(\"%.3f\", $weight);\n>                $total_weight += $weight;\n>                $sep           = \",\";\n>        }\n>        print \"$docid\\t$wordid\\t{$positions}\\t{$weights}\\t$items\\t$total_weight\\n\";\n> }\n>\n>\n> If you need any other info, I will gladly provide it.\n>\n> Thank You for Your time.\n> Miso Fapso\n>\n", "msg_date": "Fri, 2 Jul 2010 07:23:21 +0200", "msg_from": "Michal Fapso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big data - slow select (speech search)" }, { "msg_contents": "On Thu, Jul 1, 2010 at 6:34 PM, Michal Fapso <[email protected]> wrote:\n> It took about 4.5 seconds. If I rerun it, it takes\n> less than 2 miliseconds, but it is because of the cache. 
I need to\n> optimize the first-run.\n>\n> laptop ASUS, CPU dual core T2300 1.66GHz, 1.5G RAM\n>\n> EXPLAIN ANALYZE SELECT h1.docid\n> FROM hyps AS h1\n> WHERE h1.wordid=65658;\n>\n>  Bitmap Heap Scan on hyps h1  (cost=10.97..677.09 rows=171 width=4)\n> (actual time=62.106..4416.864 rows=343 loops=1)\n>   Recheck Cond: (wordid = 65658)\n>   ->  Bitmap Index Scan on hyps_wordid_index  (cost=0.00..10.92\n> rows=171 width=0) (actual time=42.969..42.969 rows=343 loops=1)\n>         Index Cond: (wordid = 65658)\n>  Total runtime: 4432.015 ms\n>\n> If I run the same query in Lucene search engine, it takes 0.105\n> seconds on the same data which is quite a huge difference.\n\nSo PostgreSQL is reading 343 rows from disk in 4432 ms, or about 12\nms/row. I'm not an expert on seek times, but that might not really be\nthat unreasonable, considering that those rows may be scattered all\nover the index and thus it may be basically random I/O. Have you\ntried clustering hyps on hyps_wordid_index? If you had a more\nsophisticated disk subsystem you could try increasing\neffective_io_concurrency but that's not going to help with only one\nspindle.\n\nIf you run the same query in Lucene and it takes only 0.105 s, then\nLucene is obviously doing a lot less I/O. I doubt that any amount of\ntuning of your existing schema is going to produce that kind of result\non PostgreSQL. Using the full-text search stuff, or a gin index of\nsome kind, might get you closer, but it's hard to beat a\nspecial-purpose engine that implements exactly the right algorithm for\nyour use case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Mon, 5 Jul 2010 20:25:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big data - slow select (speech search)" }, { "msg_contents": "Hi Robert,\n\nthank you for your help. I tried to cluster the table on\nhyps_wordid_index and the query execution time dropped from 4.43 to\n0.19 seconds which is not that far from Lucene's performance of 0.10\nsecond.\n\nThanks a lot!\nMiso Fapso\n\nOn 6 July 2010 02:25, Robert Haas <[email protected]> wrote:\n> On Thu, Jul 1, 2010 at 6:34 PM, Michal Fapso <[email protected]> wrote:\n>> It took about 4.5 seconds. If I rerun it, it takes\n>> less than 2 miliseconds, but it is because of the cache. I need to\n>> optimize the first-run.\n>>\n>> laptop ASUS, CPU dual core T2300 1.66GHz, 1.5G RAM\n>>\n>> EXPLAIN ANALYZE SELECT h1.docid\n>> FROM hyps AS h1\n>> WHERE h1.wordid=65658;\n>>\n>>  Bitmap Heap Scan on hyps h1  (cost=10.97..677.09 rows=171 width=4)\n>> (actual time=62.106..4416.864 rows=343 loops=1)\n>>   Recheck Cond: (wordid = 65658)\n>>   ->  Bitmap Index Scan on hyps_wordid_index  (cost=0.00..10.92\n>> rows=171 width=0) (actual time=42.969..42.969 rows=343 loops=1)\n>>         Index Cond: (wordid = 65658)\n>>  Total runtime: 4432.015 ms\n>>\n>> If I run the same query in Lucene search engine, it takes 0.105\n>> seconds on the same data which is quite a huge difference.\n>\n> So PostgreSQL is reading 343 rows from disk in 4432 ms, or about 12\n> ms/row.  I'm not an expert on seek times, but that might not really be\n> that unreasonable, considering that those rows may be scattered all\n> over the index and thus it may be basically random I/O.  Have you\n> tried clustering hyps on hyps_wordid_index?  
If you had a more\n> sophisticated disk subsystem you could try increasing\n> effective_io_concurrency but that's not going to help with only one\n> spindle.\n>\n> If you run the same query in Lucene and it takes only 0.105 s, then\n> Lucene is obviously doing a lot less I/O.  I doubt that any amount of\n> tuning of your existing schema is going to produce that kind of result\n> on PostgreSQL.  Using the full-text search stuff, or a gin index of\n> some kind, might get you closer, but it's hard to beat a\n> special-purpose engine that implements exactly the right algorithm for\n> your use case.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n", "msg_date": "Wed, 7 Jul 2010 15:31:29 +0200", "msg_from": "Michal Fapso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big data - slow select (speech search)" }, { "msg_contents": "On Wed, Jul 7, 2010 at 9:31 AM, Michal Fapso <[email protected]> wrote:\n> thank you for your help. I tried to cluster the table on\n> hyps_wordid_index and the query execution time dropped from 4.43 to\n> 0.19 seconds which is not that far from Lucene's performance of 0.10\n> second.\n\nDang. Nice!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 7 Jul 2010 16:49:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big data - slow select (speech search)" } ]
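The change that took the query from ~4.4 s down to 0.19 s is the physical reordering Robert suggested: after clustering, the ~343 heap rows for one wordid sit on a few adjacent pages instead of being scattered across the table. Roughly, it comes down to the two statements below; note that CLUSTER cannot use a hash index, so this assumes the btree variant of hyps_wordid_index that was also tried.

-- CLUSTER rewrites the whole table in index order and takes an exclusive
-- lock while doing so; the ordering is not maintained for rows inserted
-- later, so it needs to be repeated after large data loads.
CLUSTER hyps USING hyps_wordid_index;
ANALYZE hyps;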
[ { "msg_contents": "I have a long stored procedure (over 3,000 lines). Originally, it would take\nabout 44ms to run the whole query. After lots and lots of tweaking, Postgres\nnow runs the entire thing and gathers my results in just 15.2ms, which is\nvery impressive given the hardware this is running on. Now, I used to return\nthe results unsorted to my C++ backend and then sort them there using my\ncustom sort order which provides prioritized, weighted random ordering with\n4 different priority fields and 3 different weighting fields within 3 of\nthose 4 priority fields. Needless to say, the sorting is quite complex. I\nwanted to cut down on the amount of data being delivered to my C++ backend,\nso I am using the stored procedure to insert a summary of my results\ndirectly into the database, which is far more efficient than dumping it all\nto the C++ backend (including stuff that isn't really needed there) and then\ndumping it all back to Postgres via INSERTS later in the execution path. The\nproblem is that I want the results sorted in this custom order before they\nare stored in the database. (By sorted, I mean I want to include a field\nthat lists a numerical order for the set of results.) Thus, I used to dump\neverything to the C++ program, perform the sorting, then INSERT back to\nPostgres. This was obviously not all that efficient. Now, the sorting in C++\ntook <1ms to accomplish. When I re-wrote the sorting in pl/pgsql using a\ncouple of additional stored procedures, I discovered it is taking 15.2ms to\nperform the sort of the records within Postgres. This almost cancels out all\nof the prior optimizations I previously performed:\n\nT:20100702001841+0903010 TID:0x43945940 INFO:NOTICE: Sorting group ...\n<snip>\n...\n</snip>\nT:20100702001841+0917683 TID:0x43945940 INFO:NOTICE: Sorting 1 rows in\npriority 1... <-- Last sort item\nT:20100702001841+0918292 TID:0x43945940 INFO:NOTICE:\n\n918,292 - 903,010 = 15,282 us = 15.282 ms\n\nSo, the bottom line is, I need a faster way to do this sorting.\n\nWhat options are at my disposal for improving the speed and efficiency of\nthis sorting? Which is the easiest to implement? What are the drawbacks of\neach different method?\n\n\nThanks in advance for your insights.\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nI have a long stored procedure (over 3,000 lines). Originally, it would take about 44ms to run the whole query. After lots and lots of tweaking, Postgres now runs the entire thing and gathers my results in just 15.2ms, which is very impressive given the hardware this is running on. Now, I used to return the results unsorted to my C++ backend and then sort them there using my custom sort order which provides prioritized, weighted random ordering with 4 different priority fields and 3 different weighting fields within 3 of those 4 priority fields. Needless to say, the sorting is quite complex. 
I wanted to cut down on the amount of data being delivered to my C++ backend, so I am using the stored procedure to insert a summary of my results directly into the database, which is far more efficient than dumping it all to the C++ backend (including stuff that isn't really needed there) and then dumping it all back to Postgres via INSERTS later in the execution path. The problem is that I want the results sorted in this custom order before they are stored in the database. (By sorted, I mean I want to include a field that lists a numerical order for the set of results.) Thus, I used to dump everything to the C++ program, perform the sorting, then INSERT back to Postgres. This was obviously not all that efficient. Now, the sorting in C++ took <1ms to accomplish. When I re-wrote the sorting in pl/pgsql using a couple of additional stored procedures, I discovered it is taking 15.2ms to perform the sort of the records within Postgres. This almost cancels out all of the prior optimizations I previously performed:\nT:20100702001841+0903010 TID:0x43945940 INFO:NOTICE:  Sorting group ...<snip>...</snip>T:20100702001841+0917683 TID:0x43945940 INFO:NOTICE:  Sorting 1 rows in priority 1... <-- Last sort item\nT:20100702001841+0918292 TID:0x43945940 INFO:NOTICE:    918,292 - 903,010 = 15,282 us = 15.282 msSo, the bottom line is, I need a faster way to do this sorting. \nWhat options are at my disposal for improving the speed and efficiency of this sorting? Which is the easiest to implement? What are the drawbacks of each different method?\nThanks in advance for your insights.-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Thu, 1 Jul 2010 20:46:13 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Highly Efficient Custom Sorting" }, { "msg_contents": "On 02/07/10 08:46, Eliot Gable wrote:\n\n> So, the bottom line is, I need a faster way to do this sorting. \n\nYou haven't showed us how you're doing it at the moment, so it's awfully\nhard to comment usefully on possible approaches.\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n", "msg_date": "Fri, 02 Jul 2010 11:04:46 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> On 02/07/10 08:46, Eliot Gable wrote:\n>> So, the bottom line is, I need a faster way to do this sorting. \n\n> You haven't showed us how you're doing it at the moment, so it's awfully\n> hard to comment usefully on possible approaches.\n\nI'm guessing from tea leaves, but the impression I got from Eliot's\ndescription is that he's using plpgsql functions as sort comparators.\nIt's not surprising that that sucks performance-wise compared to having\nthe equivalent logic in C/C++ functions used as comparators on the\nclient side. plpgsql is no speed demon. 
Best fix might be to code the\ncomparators as C functions on the server side.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jul 2010 00:08:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting " }, { "msg_contents": "On Thu, Jul 1, 2010 at 8:46 PM, Eliot Gable\n<[email protected]> wrote:\n> I have a long stored procedure (over 3,000 lines). Originally, it would take\n> about 44ms to run the whole query. After lots and lots of tweaking, Postgres\n> now runs the entire thing and gathers my results in just 15.2ms, which is\n> very impressive given the hardware this is running on. Now, I used to return\n> the results unsorted to my C++ backend and then sort them there using my\n> custom sort order which provides prioritized, weighted random ordering with\n> 4 different priority fields and 3 different weighting fields within 3 of\n> those 4 priority fields. Needless to say, the sorting is quite complex. I\n> wanted to cut down on the amount of data being delivered to my C++ backend,\n> so I am using the stored procedure to insert a summary of my results\n> directly into the database, which is far more efficient than dumping it all\n> to the C++ backend (including stuff that isn't really needed there) and then\n> dumping it all back to Postgres via INSERTS later in the execution path. The\n> problem is that I want the results sorted in this custom order before they\n> are stored in the database. (By sorted, I mean I want to include a field\n> that lists a numerical order for the set of results.) Thus, I used to dump\n> everything to the C++ program, perform the sorting, then INSERT back to\n> Postgres. This was obviously not all that efficient. Now, the sorting in C++\n> took <1ms to accomplish. When I re-wrote the sorting in pl/pgsql using a\n> couple of additional stored procedures, I discovered it is taking 15.2ms to\n> perform the sort of the records within Postgres. This almost cancels out all\n> of the prior optimizations I previously performed:\n> T:20100702001841+0903010 TID:0x43945940 INFO:NOTICE:  Sorting group ...\n> <snip>\n> ...\n> </snip>\n\nwhat are you sorting and how are you sorting it?\n\nmerlin\n", "msg_date": "Fri, 2 Jul 2010 09:59:36 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Yes, I have two pl/pgsql functions. They take a prepared set of data (just\nthe row id of the original results, plus the particular priority and weight\nfields) and they return the same set of data with an extra field called\n\"order\" which contains a numerical order to apply when sorting the rows. One\nfunction uses the priority information to break everything into priority\ngroups, then calls the other function for each priority group. Each time it\ngets results back from the inner function, it returns that set of results.\nWhen it has looped through all priority groups, then it returns the full\nbuilt-up set of results back to the calling function.\n\nThe pl/pgsql functions implementing the sort are as optimized as they are\nlikely to get. I don't want to waste my time trying to further optimize\npl/pgsql functions that are never going to be as fast and efficient as I\nneed. I would rather spend that time re-writing it in C and get sorting back\nto <1ms.\n\nI guess the real question is, is a generic C sorting function my only real\nalternative? 
Is there anything else that would allow me to sort things\nfaster than pl/pgsql functions? For example, if I used pl/perl, would I be\nable to expect considerably better performance for sorting than using\npl/pgsql? What about other supported languages? If I can get close to 1ms\nsorting performance without resorting to C, it would save me much time and\nfrustration.\n\nOn Fri, Jul 2, 2010 at 12:08 AM, Tom Lane <[email protected]> wrote:\n\n> Craig Ringer <[email protected]> writes:\n> > On 02/07/10 08:46, Eliot Gable wrote:\n> >> So, the bottom line is, I need a faster way to do this sorting.\n>\n> > You haven't showed us how you're doing it at the moment, so it's awfully\n> > hard to comment usefully on possible approaches.\n>\n> I'm guessing from tea leaves, but the impression I got from Eliot's\n> description is that he's using plpgsql functions as sort comparators.\n> It's not surprising that that sucks performance-wise compared to having\n> the equivalent logic in C/C++ functions used as comparators on the\n> client side. plpgsql is no speed demon. Best fix might be to code the\n> comparators as C functions on the server side.\n>\n> regards, tom lane\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nYes, I have two pl/pgsql functions. They take a prepared set of data (just the row id of the original results, plus the particular priority and weight fields) and they return the same set of data with an extra field called \"order\" which contains a numerical order to apply when sorting the rows. One function uses the priority information to break everything into priority groups, then calls the other function for each priority group. Each time it gets results back from the inner function, it returns that set of results. When it has looped through all priority groups, then it returns the full built-up set of results back to the calling function. \nThe pl/pgsql functions implementing the sort are as optimized as they are likely to get. I don't want to waste my time trying to further optimize pl/pgsql functions that are never going to be as fast and efficient as I need. I would rather spend that time re-writing it in C and get sorting back to <1ms. \nI guess the real question is, is a generic C sorting function my only real alternative? Is there anything else that would allow me to sort things faster than pl/pgsql functions? For example, if I used pl/perl, would I be able to expect considerably better performance for sorting than using pl/pgsql? What about other supported languages? If I can get close to 1ms sorting performance without resorting to C, it would save me much time and frustration. 
\nOn Fri, Jul 2, 2010 at 12:08 AM, Tom Lane <[email protected]> wrote:\nCraig Ringer <[email protected]> writes:\n> On 02/07/10 08:46, Eliot Gable wrote:\n>> So, the bottom line is, I need a faster way to do this sorting.\n\n> You haven't showed us how you're doing it at the moment, so it's awfully\n> hard to comment usefully on possible approaches.\n\nI'm guessing from tea leaves, but the impression I got from Eliot's\ndescription is that he's using plpgsql functions as sort comparators.\nIt's not surprising that that sucks performance-wise compared to having\nthe equivalent logic in C/C++ functions used as comparators on the\nclient side.  plpgsql is no speed demon.  Best fix might be to code the\ncomparators as C functions on the server side.\n\n                        regards, tom lane\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Fri, 2 Jul 2010 09:59:44 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "> On Fri, Jul 2, 2010 at 12:08 AM, Tom Lane <[email protected]> wrote:\n>> I'm guessing from tea leaves, but the impression I got from Eliot's\n>> description is that he's using plpgsql functions as sort comparators.\n>> It's not surprising that that sucks performance-wise compared to having\n>> the equivalent logic in C/C++ functions used as comparators on the\n>> client side. plpgsql is no speed demon. Best fix might be to code the\n>> comparators as C functions on the server side.\n\nOn Fri, 2 Jul 2010, Eliot Gable wrote:\n> I guess the real question is, is a generic C sorting function my only real\n> alternative?\n\nSounds to me like you are not really listening. You don't need to code an \nentire sorting algorithm in C, as Postgres already has a pretty good one \nof those. All you need to do is implement a comparator of some kind. \nInserting C functions into Postgres is pretty easy, especially on the \nlevel of comparators.\n\nMatthew\n\n-- \n For those of you who are into writing programs that are as obscure and\n complicated as possible, there are opportunities for... real fun here\n -- Computer Science Lecturer\n", "msg_date": "Fri, 2 Jul 2010 15:50:55 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Fri, Jul 2, 2010 at 10:50 AM, Matthew Wakeling <[email protected]> wrote:\n>> On Fri, Jul 2, 2010 at 12:08 AM, Tom Lane <[email protected]> wrote:\n>>>\n>>> I'm guessing from tea leaves, but the impression I got from Eliot's\n>>> description is that he's using plpgsql functions as sort comparators.\n>>> It's not surprising that that sucks performance-wise compared to having\n>>> the equivalent logic in C/C++ functions used as comparators on the\n>>> client side.  plpgsql is no speed demon.  Best fix might be to code the\n>>> comparators as C functions on the server side.\n>\n> On Fri, 2 Jul 2010, Eliot Gable wrote:\n>>\n>> I guess the real question is, is a generic C sorting function my only real\n>> alternative?\n>\n> Sounds to me like you are not really listening. 
You don't need to code an\n> entire sorting algorithm in C, as Postgres already has a pretty good one of\n> those. All you need to do is implement a comparator of some kind. Inserting\n> C functions into Postgres is pretty easy, especially on the level of\n> comparators.\n\nin recent versions of postgres you rarely if ever even have to do that\n-- row types are comparable w/o any extra work, as are arrays. If\nEliot would just give a little more deal of WHAT he is trying to sort\nand HOW he is currently doing it, i suspect his problem will be\ntrivially solved :-).\n\nmerlin\n", "msg_date": "Fri, 2 Jul 2010 10:56:46 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On 7/2/10 6:59 AM, Eliot Gable wrote:\n> Yes, I have two pl/pgsql functions. They take a prepared set of data\n> (just the row id of the original results, plus the particular priority\n> and weight fields) and they return the same set of data with an extra\n> field called \"order\" which contains a numerical order to apply when\n> sorting the rows. One function uses the priority information to break\n> everything into priority groups, then calls the other function for each\n> priority group. Each time it gets results back from the inner function,\n> it returns that set of results. When it has looped through all priority\n> groups, then it returns the full built-up set of results back to the\n> calling function.\n>\n> The pl/pgsql functions implementing the sort are as optimized as they\n> are likely to get. I don't want to waste my time trying to further\n> optimize pl/pgsql functions that are never going to be as fast and\n> efficient as I need. I would rather spend that time re-writing it in C\n> and get sorting back to <1ms.\n>\n> I guess the real question is, is a generic C sorting function my only\n> real alternative? Is there anything else that would allow me to sort\n> things faster than pl/pgsql functions? For example, if I used pl/perl,\n> would I be able to expect considerably better performance for sorting\n> than using pl/pgsql? What about other supported languages? If I can get\n> close to 1ms sorting performance without resorting to C, it would save\n> me much time and frustration.\n\nTry coding it in perl on the server. It is MUCH easier to code, and you don't have to link anything or learn the esoteric details of the Postgres/C API.\n\nPerl itself is written in C, and some of it's operations are extremely fast. Depending on the size and complexity of your data structures, Perl code may be just as fast as code you could write in C.\n\nEven if it turns out to be slower than you like, it will give you a way to package up your sort functionality into a function call, so if you later find you need to replace the Perl function with a C function, the rest of your application won't change.\n\nCraig\n", "msg_date": "Fri, 02 Jul 2010 09:36:23 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On 03/07/10 00:36, Craig James wrote:\n\n> Perl itself is written in C, and some of it's operations are extremely\n> fast.\n\nThe same is true of PL/PgSQL, though ;-)\n\nThe main advantage of PL/Perl is that it doesn't rely on the SPI to do\neverything. 
It's interpreted not compiled, but it has a much faster\napproach to interpretation than PL/PgSQL.\n\nReally, the right choice depends on exactly what the OP is doing and\nhow, which they're not saying.\n\nWhere's the code?\n\n--\nCraig Ringer\n", "msg_date": "Sat, 03 Jul 2010 09:44:38 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Well, I re-wrote the algorithm in Perl. However, it did not solve the speed\nissue. Running time now is a whopping 240+ ms instead of the 31.8ms I was\ngetting before (15.2 of which is sorting). Here is the Perl code on the\nsorting. I won't post the pl/pgsql code, because this is far more clear (in\nmy opinion) on what the algorithm does:\n\nDROP TYPE IF EXISTS glbtype CASCADE;\nCREATE TYPE glbtype AS (\nid INTEGER,\n\"group\" TEXT,\npriority INTEGER,\nweight INTEGER\n);\n\nDROP TYPE IF EXISTS glbtype_result CASCADE;\nCREATE TYPE glbtype_result AS (\nid INTEGER,\npriority INTEGER,\nweight INTEGER,\n\"order\" BIGINT\n);\n\nCREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF\nglbtype_result AS\n$BODY$\n# Input is an array of a composite type\nmy ($input) = @_;\nmy %groups;\n$input =~ s/^{|}$//g;\n$input =~ s/[)(]//g;\nmy @rows;\nmy $count = 0;\nwhile ($input && $count < 10000) {\nmy ($id, $group, $prio, $weight, @rest) = split(/,/, $input);\npush(@rows, {id => $id, group => $group, priority => $prio, weight =>\n$weight});\n$count++;\n$input = join(',', @rest);\n}\n\nif(scalar @rows < 1) {\nelog(NOTICE, ' No rows sent for sorting.');\nreturn undef;\n} else {\nelog(NOTICE, ' '.(scalar @rows).' rows sent for sorting.');\n}\n\nforeach $rw (@rows) {\nif($rw->{group} && $rw->{priority} && $rw->{weight}) {\npush( @{ $groups{$rw->{group}}{$rw->{priority}} }, $rw);\nelog(NOTICE, ' Pushing '.$rw->{group}.' with prio ('.$rw->{priority}.'),\nweight ('.$rw->{weight}.') onto array.');\n} else {\nelog(NOTICE, ' Invalid sort row: Group ('.$rw->{group}.'), Prio\n('.$rw->{priority}.'), Weight ('.$rw->{weight}.')');\n}\n}\n\nforeach $group (keys %groups) {\nelog(NOTICE, ' Sorting group '.$group.'...');\nforeach $prio (keys %{$groups{$group}}) {\nmy @rows = @{ $groups{$group}{$prio} };\nelog(NOTICE, ' Sorting '.(scalar @rows).' rows in priority\n'.$prio.'...');\nmy @zeros;\nmy @nonzeros;\nmy $total_weight = 0;\nmy $row_order = 1;\nfor($row_id = 0; $row_id < scalar @rows; $row_id++) {\nmy $row = $rows[$row_id];\n$total_weight += $row->{weight};\nelog(NOTICE, ' Total Weight ('.$total_weight.')');\nif($row->{weight} == 0) {\npush(@zeros, $row);\n} else {\npush(@nonzeros, $row);\n}\n}\nmy @first_order = (@zeros, @nonzeros);\nundef(@zeros);\nundef(@nonzeros);\nwhile(scalar @first_order) {\nelog(NOTICE, ' '.(scalar @first_order).' 
items remaining ...');\nmy $rand = int(rand($total_weight));\nelog(NOTICE, ' Random weight ('.$rand.')');\nmy $running_weight = 0;\nfor($row_id = 0; $row_id < scalar @first_order; $row_id++) {\nmy $row = $first_order[$row_id];\n$running_weight += $row->{weight};\nelog(NOTICE, ' Running weight ('.$running_weight.') Current Weight\n('.$row->{weight}.')');\nif($running_weight >= $rand) {\nelog(NOTICE, ' : Priority ('.($row->{priority}).') Weight\n('.($row->{weight}).')');\nreturn_next(\n{ id => int($row->{id}),\n priority => int($row->{priority}),\n weight => int($row->{weight}),\n order => int($row_order) }\n);\n$row_order++;\nsplice(@first_order, $row_id, 1);\n# Recalculate total weight\n$total_weight = 0;\nforeach $row (@first_order) {\n$total_weight += $row->{weight};\n}\nelog(NOTICE, ' : Remaining Weight ('.$total_weight.')');\nbreak;\n}\n}\n}\n}\n}\nreturn undef;\n$BODY$\nLANGUAGE plperl VOLATILE;\n\n5 rows sent for sorting.\nPushing GROUP_7 with prio (1), weight (0) onto array.\nPushing GROUP_7 with prio (1), weight (5) onto array.\nPushing GROUP_8 with prio (1), weight (1) onto array.\nPushing GROUP_8 with prio (1), weight (5) onto array.\nPushing GROUP_8 with prio (1), weight (5) onto array.\nSorting group GROUP_7...\nSorting 2 rows in priority 1...\nTotal Weight (0)\nTotal Weight (5)\n2 items remaining ...\nRandom weight (0)\nRunning weight (0) Current Weight (0)\n: Priority (1) Weight (0)\n: Remaining Weight (5)\n1 items remaining ...\nRandom weight (0)\nRunning weight (5) Current Weight (5)\n: Priority (1) Weight (5)\n: Remaining Weight (0)\nSorting group GROUP_8...\nSorting 3 rows in priority 1...\nTotal Weight (1)\nTotal Weight (6)\nTotal Weight (11)\n3 items remaining ...\nRandom weight (8)\nRunning weight (1) Current Weight (1)\nRunning weight (6) Current Weight (5)\nRunning weight (11) Current Weight (5)\n: Priority (1) Weight (5)\n: Remaining Weight (6)\n2 items remaining ...\nRandom weight (2)\nRunning weight (1) Current Weight (1)\nRunning weight (6) Current Weight (5)\n: Priority (1) Weight (5)\n: Remaining Weight (1)\n1 items remaining ...\nRandom weight (0)\nRunning weight (1) Current Weight (1)\n: Priority (1) Weight (1)\n: Remaining Weight (0)\n\n2 rows sent for sorting.\nPushing GROUP_1 with prio (1), weight (0) onto array.\nPushing GROUP_1 with prio (2), weight (4) onto array.\nSorting group GROUP_1...\nSorting 1 rows in priority 1...\nTotal Weight (0)\n1 items remaining ...\nRandom weight (0)\nRunning weight (0) Current Weight (0)\n: Priority (1) Weight (0)\n: Remaining Weight (0)\nSorting 1 rows in priority 2...\nTotal Weight (4)\n1 items remaining ...\nRandom weight (2)\nRunning weight (4) Current Weight (4)\n: Priority (2) Weight (4)\n: Remaining Weight (0)\n\nTotal runtime: 244.101 ms\n\n\nOn Fri, Jul 2, 2010 at 9:44 PM, Craig Ringer <[email protected]>wrote:\n\n> On 03/07/10 00:36, Craig James wrote:\n>\n> > Perl itself is written in C, and some of it's operations are extremely\n> > fast.\n>\n> The same is true of PL/PgSQL, though ;-)\n>\n> The main advantage of PL/Perl is that it doesn't rely on the SPI to do\n> everything. 
It's interpreted not compiled, but it has a much faster\n> approach to interpretation than PL/PgSQL.\n>\n> Really, the right choice depends on exactly what the OP is doing and\n> how, which they're not saying.\n>\n> Where's the code?\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nWell, I re-wrote the algorithm in Perl. However, it did not solve the speed issue. Running time now is a whopping 240+ ms instead of the 31.8ms I was getting before (15.2 of which is sorting). Here is the Perl code on the sorting. I won't post the pl/pgsql code, because this is far more clear (in my opinion) on what the algorithm does:\nDROP TYPE IF EXISTS glbtype CASCADE;CREATE TYPE glbtype AS ( id INTEGER, \"group\" TEXT,\n priority INTEGER, weight INTEGER);DROP TYPE IF EXISTS glbtype_result CASCADE;\nCREATE TYPE glbtype_result AS ( id INTEGER, priority INTEGER, weight INTEGER,\n \"order\" BIGINT);CREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF glbtype_result AS\n$BODY$ # Input is an array of a composite type my ($input) = @_;\n my %groups; $input =~ s/^{|}$//g; $input =~ s/[)(]//g;\n my @rows; my $count = 0; while ($input && $count < 10000) {\n my ($id, $group, $prio, $weight, @rest) = split(/,/, $input); push(@rows, {id => $id, group => $group, priority => $prio, weight => $weight});\n $count++; $input = join(',', @rest); }\n if(scalar @rows < 1) { elog(NOTICE, '  No rows sent for sorting.');\n return undef; } else { elog(NOTICE, '  '.(scalar @rows).' rows sent for sorting.');\n } foreach $rw (@rows) { if($rw->{group} && $rw->{priority} && $rw->{weight}) {\n push( @{ $groups{$rw->{group}}{$rw->{priority}} }, $rw); elog(NOTICE, '  Pushing '.$rw->{group}.' with prio ('.$rw->{priority}.'), weight ('.$rw->{weight}.') onto array.');\n } else { elog(NOTICE, '  Invalid sort row: Group ('.$rw->{group}.'), Prio ('.$rw->{priority}.'), Weight ('.$rw->{weight}.')');\n } } foreach $group (keys %groups) {\n elog(NOTICE, '  Sorting group '.$group.'...'); foreach $prio (keys %{$groups{$group}}) {\n my @rows = @{ $groups{$group}{$prio} }; elog(NOTICE, '    Sorting '.(scalar @rows).' rows in priority '.$prio.'...');\n my @zeros; my @nonzeros; my $total_weight = 0;\n my $row_order = 1; for($row_id = 0; $row_id < scalar @rows; $row_id++) {\n my $row = $rows[$row_id]; $total_weight += $row->{weight}; elog(NOTICE, '    Total Weight ('.$total_weight.')');\n if($row->{weight} == 0) { push(@zeros, $row); } else {\n push(@nonzeros, $row); } }\n my @first_order = (@zeros, @nonzeros); undef(@zeros); undef(@nonzeros);\n while(scalar @first_order) { elog(NOTICE, '      '.(scalar @first_order).' 
items remaining ...');\n my $rand = int(rand($total_weight)); elog(NOTICE, '      Random weight ('.$rand.')');\n my $running_weight = 0; for($row_id = 0; $row_id < scalar @first_order; $row_id++) {\n my $row = $first_order[$row_id]; $running_weight += $row->{weight};\n elog(NOTICE, '      Running weight ('.$running_weight.') Current Weight ('.$row->{weight}.')'); if($running_weight >= $rand) {\n elog(NOTICE, '        : Priority ('.($row->{priority}).') Weight ('.($row->{weight}).')'); return_next(\n { id => int($row->{id}),  priority => int($row->{priority}),\n  weight => int($row->{weight}),  order => int($row_order) }\n ); $row_order++; splice(@first_order, $row_id, 1);\n # Recalculate total weight $total_weight = 0; foreach $row (@first_order) {\n $total_weight += $row->{weight}; } elog(NOTICE, '        : Remaining Weight ('.$total_weight.')');\n break; } }\n } } }\n return undef;$BODY$LANGUAGE plperl VOLATILE;5 rows sent for sorting.Pushing GROUP_7 with prio (1), weight (0) onto array.\nPushing GROUP_7 with prio (1), weight (5) onto array.Pushing GROUP_8 with prio (1), weight (1) onto array.Pushing GROUP_8 with prio (1), weight (5) onto array.Pushing GROUP_8 with prio (1), weight (5) onto array.\nSorting group GROUP_7...Sorting 2 rows in priority 1...Total Weight (0)Total Weight (5)2 items remaining ...Random weight (0)Running weight (0) Current Weight (0)\n: Priority (1) Weight (0): Remaining Weight (5)1 items remaining ...Random weight (0)Running weight (5) Current Weight (5): Priority (1) Weight (5): Remaining Weight (0)\nSorting group GROUP_8...Sorting 3 rows in priority 1...Total Weight (1)Total Weight (6)Total Weight (11)3 items remaining ...Random weight (8)\nRunning weight (1) Current Weight (1)Running weight (6) Current Weight (5)Running weight (11) Current Weight (5): Priority (1) Weight (5): Remaining Weight (6)2 items remaining ...\nRandom weight (2)Running weight (1) Current Weight (1)Running weight (6) Current Weight (5): Priority (1) Weight (5): Remaining Weight (1)1 items remaining ...\nRandom weight (0)Running weight (1) Current Weight (1): Priority (1) Weight (1): Remaining Weight (0)2 rows sent for sorting.Pushing GROUP_1 with prio (1), weight (0) onto array.\nPushing GROUP_1 with prio (2), weight (4) onto array.Sorting group GROUP_1...Sorting 1 rows in priority 1...Total Weight (0)1 items remaining ...Random weight (0)\nRunning weight (0) Current Weight (0): Priority (1) Weight (0): Remaining Weight (0)Sorting 1 rows in priority 2...Total Weight (4)1 items remaining ...\nRandom weight (2)Running weight (4) Current Weight (4): Priority (2) Weight (4): Remaining Weight (0)Total runtime: 244.101 ms\nOn Fri, Jul 2, 2010 at 9:44 PM, Craig Ringer <[email protected]> wrote:\nOn 03/07/10 00:36, Craig James wrote:\n\n> Perl itself is written in C, and some of it's operations are extremely\n> fast.\n\nThe same is true of PL/PgSQL, though ;-)\n\nThe main advantage of PL/Perl is that it doesn't rely on the SPI to do\neverything. 
It's interpreted not compiled, but it has a much faster\napproach to interpretation than PL/PgSQL.\n\nReally, the right choice depends on exactly what the OP is doing and\nhow, which they're not saying.\n\nWhere's the code?\n\n--\nCraig Ringer\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Fri, 2 Jul 2010 23:17:47 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Fri, Jul 2, 2010 at 11:17 PM, Eliot Gable\n<[email protected]> wrote:\n> Well, I re-wrote the algorithm in Perl. However, it did not solve the speed\n> issue. Running time now is a whopping 240+ ms instead of the 31.8ms I was\n> getting before (15.2 of which is sorting). Here is the Perl code on the\n> sorting. I won't post the pl/pgsql code, because this is far more clear (in\n> my opinion) on what the algorithm does:\n> DROP TYPE IF EXISTS glbtype CASCADE;\n> CREATE TYPE glbtype AS (\n> id INTEGER,\n> \"group\" TEXT,\n> priority INTEGER,\n> weight INTEGER\n> );\n> DROP TYPE IF EXISTS glbtype_result CASCADE;\n> CREATE TYPE glbtype_result AS (\n> id INTEGER,\n> priority INTEGER,\n> weight INTEGER,\n> \"order\" BIGINT\n> );\n> CREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF\n> glbtype_result AS\n\nok, I didn't take the time to read your implementation and completely\nunderstand it, but it looks like you're looking at a N^2 sorting at\nbest.\n\nYou probably want to do something like this (it might not be quite\nright, you need to explain what each of your input array fields is\nsupposed to represent):\nCREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF\nglbtype_result AS\n$$\n with g as (select unnest(glbtype) as t)\n select array(select ((t).id, (t).priority) (t).weight), 0)::glbtype_result\n from g order by (t).group, (t).priority, random() * (t).weight);\n$$ language sql;\n\n(not sure what \"order\" is, is that the rownum, can't that just be\ninferred from the array position?)\n\nmerlin\n", "msg_date": "Sat, 3 Jul 2010 14:08:35 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Read RFC 2782 on random weighted load balancing of SRV records inside DNS.\nThat algorithm is what I need implemented, but with an extension. I have\ngroups of records I need to have the algorithm applied to where each group\nis treated separately from the others. I understand the operational\ncomplexity of what I'm doing. It is more like N^3, or more precisely G*P*W\nwhere G is the number of groups, P the number of priorities per group, and W\nthe number of different weights per priority. But, the complexity of the\nalgorithm means nothing in terms of performance or run time because it will\nonly ever deal with very small sets of records (maybe 20 rows of data,\ntops). Even if the algorithm were N^4, it wouldn't matter with that few\nrecords. 
But, more importantly, there are constraints in how the data is\nsub-divided. Primarily, G < P < W. Further, G and P are groupings which\nsubdivide the entire set of data and the groups do not have overlapping\ndata. So, maybe it's more like N^2.2 or something. But, regardless, we're\nonly talking about 20 rows, tops.\n\nThe issue is how efficiently the languages can deal with arrays. In Perl, I\nhave to parse a string into an array of data, then break it up into sub\narrays inside associative arrays just to work with the input. I also have to\nsplice the array to remove elements, which I don't think is very efficient.\nAny way I could come up with of removing elements involved rebuilding the\nentire array. The same thing goes for pl/pgsql. Dealing with arrays there is\nalso not very efficient. I do a lot of constructing of arrays from sets of\ndata using myvar = array(select blah);. While pl/pgsql was considerably\nfaster than Perl, it cannot come close to what I did in C++ using a hash of\na hash of a linked list. The two hash tables provide my groupings and the\nlinked list gives me something that is highly efficient for removing\nelements as I pick them.\n\nI've looked through the documentation on how to re-write this in C, but I\ncannot seem to find good documentation on working with the input array\n(which is an array of a complex type). I also don't see good documentation\nfor working with the complex type. I found stuff that talks about\nconstructing a complex type in C and returning it. However, I'm not sure how\nto take an input complex type and deconstruct it into something I can work\nwith in C. Also, the memory context management stuff is not entirely clear.\nSpecifically, how do I go about preserving the pointers to the data that I\nallocate in multi-call memory context so that they still point to the data\non the next call to the function for the next result row? Am I supposed to\nset up some global variables to do that, or am I supposed to take a\ndifferent approach? If I need to use global variables, then how do I deal\nwith concurrency?\n\nOn Sat, Jul 3, 2010 at 2:08 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Jul 2, 2010 at 11:17 PM, Eliot Gable\n> <[email protected] <egable%[email protected]>>\n> wrote:\n> > Well, I re-wrote the algorithm in Perl. However, it did not solve the\n> speed\n> > issue. Running time now is a whopping 240+ ms instead of the 31.8ms I was\n> > getting before (15.2 of which is sorting). Here is the Perl code on the\n> > sorting. 
I won't post the pl/pgsql code, because this is far more clear\n> (in\n> > my opinion) on what the algorithm does:\n> > DROP TYPE IF EXISTS glbtype CASCADE;\n> > CREATE TYPE glbtype AS (\n> > id INTEGER,\n> > \"group\" TEXT,\n> > priority INTEGER,\n> > weight INTEGER\n> > );\n> > DROP TYPE IF EXISTS glbtype_result CASCADE;\n> > CREATE TYPE glbtype_result AS (\n> > id INTEGER,\n> > priority INTEGER,\n> > weight INTEGER,\n> > \"order\" BIGINT\n> > );\n> > CREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS\n> SETOF\n> > glbtype_result AS\n>\n> ok, I didn't take the time to read your implementation and completely\n> understand it, but it looks like you're looking at a N^2 sorting at\n> best.\n>\n> You probably want to do something like this (it might not be quite\n> right, you need to explain what each of your input array fields is\n> supposed to represent):\n> CREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF\n> glbtype_result AS\n> $$\n> with g as (select unnest(glbtype) as t)\n> select array(select ((t).id, (t).priority) (t).weight),\n> 0)::glbtype_result\n> from g order by (t).group, (t).priority, random() * (t).weight);\n> $$ language sql;\n>\n> (not sure what \"order\" is, is that the rownum, can't that just be\n> inferred from the array position?)\n>\n> merlin\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nRead RFC 2782 on random weighted load balancing of SRV records inside DNS. That algorithm is what I need implemented, but with an extension. I have groups of records I need to have the algorithm applied to where each group is treated separately from the others. I understand the operational complexity of what I'm doing. It is more like N^3, or more precisely G*P*W where G is the number of groups, P the number of priorities per group, and W the number of different weights per priority. But, the complexity of the algorithm means nothing in terms of performance  or run time because it will only ever deal with very small sets of records (maybe 20 rows of data, tops). Even if the algorithm were N^4, it wouldn't matter with that few records. But, more importantly, there are constraints in how the data is sub-divided. Primarily, G < P < W. Further, G and P are groupings which subdivide the entire set of data and the groups do not have overlapping data. So, maybe it's more like N^2.2 or something. But, regardless, we're only talking about 20 rows, tops. \nThe issue is how efficiently the languages can deal with arrays. In Perl, I have to parse a string into an array of data, then break it up into sub arrays inside associative arrays just to work with the input. I also have to splice the array to remove elements, which I don't think is very efficient. Any way I could come up with of removing elements involved rebuilding the entire array. The same thing goes for pl/pgsql. Dealing with arrays there is also not very efficient. I do a lot of constructing of arrays from sets of data using myvar = array(select blah);. While pl/pgsql was considerably faster than Perl, it cannot come close to what I did in C++ using a hash of a hash of a linked list. 
The two hash tables provide my groupings and the linked list gives me something that is highly efficient for removing elements as I pick them.\nI've looked through the documentation on how to re-write this in C, but I cannot seem to find good documentation on working with the input array (which is an array of a complex type). I also don't see good documentation for working with the complex type. I found stuff that talks about constructing a complex type in C and returning it. However, I'm not sure how to take an input complex type and deconstruct it into something I can work with in C. Also, the memory context management stuff is not entirely clear. Specifically, how do I go about preserving the pointers to the data that I allocate in multi-call memory context so that they still point to the data on the next call to the function for the next result row? Am I supposed to set up some global variables to do that, or am I supposed to take a different approach? If I need to use global variables, then how do I deal with concurrency?\nOn Sat, Jul 3, 2010 at 2:08 PM, Merlin Moncure <[email protected]> wrote:\nOn Fri, Jul 2, 2010 at 11:17 PM, Eliot Gable\n<[email protected]> wrote:\n> Well, I re-wrote the algorithm in Perl. However, it did not solve the speed\n> issue. Running time now is a whopping 240+ ms instead of the 31.8ms I was\n> getting before (15.2 of which is sorting). Here is the Perl code on the\n> sorting. I won't post the pl/pgsql code, because this is far more clear (in\n> my opinion) on what the algorithm does:\n> DROP TYPE IF EXISTS glbtype CASCADE;\n> CREATE TYPE glbtype AS (\n> id INTEGER,\n> \"group\" TEXT,\n> priority INTEGER,\n> weight INTEGER\n> );\n> DROP TYPE IF EXISTS glbtype_result CASCADE;\n> CREATE TYPE glbtype_result AS (\n> id INTEGER,\n> priority INTEGER,\n> weight INTEGER,\n> \"order\" BIGINT\n> );\n> CREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF\n> glbtype_result AS\n\nok, I didn't take the time to read your implementation and completely\nunderstand it, but it looks like you're looking at a N^2 sorting at\nbest.\n\nYou probably want to do something like this (it might not be quite\nright, you need to explain what each of your input array fields is\nsupposed to represent):\nCREATE OR REPLACE FUNCTION GroupedRandomWeightedLB(glbtype[]) RETURNS SETOF\nglbtype_result AS\n$$\n  with g as (select unnest(glbtype) as t)\n    select array(select ((t).id, (t).priority) (t).weight), 0)::glbtype_result\n      from g order by (t).group, (t).priority, random() * (t).weight);\n$$ language sql;\n\n(not sure what \"order\" is, is that the rownum, can't that just be\ninferred from the array position?)\n\nmerlin\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Sat, 3 Jul 2010 16:17:56 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Sat, Jul 3, 2010 at 4:17 PM, Eliot Gable\n<[email protected]> wrote:\n> Read RFC 2782 on random weighted load balancing of SRV records inside DNS.\n> That algorithm is what I need implemented, but with an extension. 
I have\n> groups of records I need to have the algorithm applied to where each group\n> is treated separately from the others. I understand the operational\n> complexity of what I'm doing. It is more like N^3, or more precisely G*P*W\n> where G is the number of groups, P the number of priorities per group, and W\n> the number of different weights per priority. But, the complexity of the\n> algorithm means nothing in terms of performance  or run time because it will\n> only ever deal with very small sets of records (maybe 20 rows of data,\n> tops). Even if the algorithm were N^4, it wouldn't matter with that few\n> records. But, more importantly, there are constraints in how the data is\n> sub-divided. Primarily, G < P < W. Further, G and P are groupings which\n> subdivide the entire set of data and the groups do not have overlapping\n> data. So, maybe it's more like N^2.2 or something. But, regardless, we're\n> only talking about 20 rows, tops.\n> The issue is how efficiently the languages can deal with arrays. In Perl, I\n> have to parse a string into an array of data, then break it up into sub\n> arrays inside associative arrays just to work with the input. I also have to\n> splice the array to remove elements, which I don't think is very efficient.\n> Any way I could come up with of removing elements involved rebuilding the\n> entire array. The same thing goes for pl/pgsql. Dealing with arrays there is\n> also not very efficient. I do a lot of constructing of arrays from sets of\n> data using myvar = array(select blah);. While pl/pgsql was considerably\n> faster than Perl, it cannot come close to what I did in C++ using a hash of\n> a hash of a linked list. The two hash tables provide my groupings and the\n> linked list gives me something that is highly efficient for removing\n> elements as I pick them.\n> I've looked through the documentation on how to re-write this in C, but I\n> cannot seem to find good documentation on working with the input array\n> (which is an array of a complex type). I also don't see good documentation\n> for working with the complex type. I found stuff that talks about\n> constructing a complex type in C and returning it. However, I'm not sure how\n> to take an input complex type and deconstruct it into something I can work\n> with in C. Also, the memory context management stuff is not entirely clear.\n> Specifically, how do I go about preserving the pointers to the data that I\n> allocate in multi-call memory context so that they still point to the data\n> on the next call to the function for the next result row? Am I supposed to\n> set up some global variables to do that, or am I supposed to take a\n> different approach? If I need to use global variables, then how do I deal\n> with concurrency?\n\nplease stop top posting.\n\nWhat about my suggestion doesn't work for your requirements? (btw,\nlet me disagree with my peers and state pl/perl is lousy for this type\nof job, only sql/and pl/sql can interact with postgresql variables\nnatively for the most part).\n\nmerlin\n", "msg_date": "Sat, 3 Jul 2010 18:53:46 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Excerpts from Merlin Moncure's message of sáb jul 03 18:53:46 -0400 2010:\n\n> What about my suggestion doesn't work for your requirements? 
(btw,\n> let me disagree with my peers and state pl/perl is lousy for this type\n> of job, only sql/and pl/sql can interact with postgresql variables\n> natively for the most part).\n\nIIRC the other reason pl/perl sucks for this kind of thing is that it\nforces a subtransaction to be created before the function call, which is\nexpensive. (I might be misremembering and what actually causes a\nsubtransaction is a SPI call inside a PL/Perl function, which wouldn't\napply here.)\n", "msg_date": "Sat, 03 Jul 2010 22:47:51 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Sat, Jul 3, 2010 at 4:17 PM, Eliot Gable\n<[email protected]> wrote:\n> Read RFC 2782 on random weighted load balancing of SRV records inside DNS.\n\nIt may be asking a bit much to expect people here to read an RFC to\nfigure out how to help you solve this problem, but...\n\n> I've looked through the documentation on how to re-write this in C, but I\n> cannot seem to find good documentation on working with the input array\n> (which is an array of a complex type). I also don't see good documentation\n> for working with the complex type. I found stuff that talks about\n> constructing a complex type in C and returning it. However, I'm not sure how\n> to take an input complex type and deconstruct it into something I can work\n> with in C. Also, the memory context management stuff is not entirely clear.\n\n...there's no question that writing things in C is a lot more work,\nand takes some getting used to. Still, it's fast, so maybe worth it,\nespecially since you already know C++, and will therefore mostly just\nneed to learn the PostgreSQL coding conventions. The best thing to do\nis probably to look at some of the existing examples within the\nbackend code. Most of the datatype code is in src/backend/utils/adt.\nYou might want to look at arrayfuncs.c (perhaps array_ref() or\narray_map()); and also rowtypes.c (perhaps record_cmp()).\n\n> Specifically, how do I go about preserving the pointers to the data that I\n> allocate in multi-call memory context so that they still point to the data\n> on the next call to the function for the next result row? Am I supposed to\n> set up some global variables to do that, or am I supposed to take a\n> different approach? If I need to use global variables, then how do I deal\n> with concurrency?\n\nGlobal variables would be a bad idea, not so much because of\nconcurrency as because they won't get cleaned up properly. 
Again, the\nbest thing to do is to look at existing examples, like array_unnest()\nin src/backend/utils/adt/arrayfuncs.c; the short answer is that you\nprobably want to compute all your results on the first call and stash\nthem in the FuncCallContext (funcctx->user_fctx); and then on\nsubsequent calls just return one row per call.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 6 Jul 2010 15:01:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Tue, Jul 6, 2010 at 3:01 PM, Robert Haas <[email protected]> wrote:\n\n> On Sat, Jul 3, 2010 at 4:17 PM, Eliot Gable\n> <[email protected] <egable%[email protected]>>\n> wrote:\n> > Read RFC 2782 on random weighted load balancing of SRV records inside\n> DNS.\n>\n> It may be asking a bit much to expect people here to read an RFC to\n> figure out how to help you solve this problem, but...\n>\n>\nYeah, I was not actually expecting them to read the whole RFC. The section\non random weighted load balancing is only a few paragraphs and describes\njust the algorithm I am trying to implement as efficiently as possible:\n\n Priority\n The priority of this target host. A client MUST attempt to\n contact the target host with the lowest-numbered priority it can\n reach; target hosts with the same priority SHOULD be tried in an\n order defined by the weight field. The range is 0-65535. This\n is a 16 bit unsigned integer in network byte order.\n\n Weight\n A server selection mechanism. The weight field specifies a\n relative weight for entries with the same priority. Larger\n weights SHOULD be given a proportionately higher probability of\n being selected. The range of this number is 0-65535. This is a\n 16 bit unsigned integer in network byte order. Domain\n administrators SHOULD use Weight 0 when there isn't any server\n selection to do, to make the RR easier to read for humans (less\n noisy). In the presence of records containing weights greater\n than 0, records with weight 0 should have a very small chance of\n being selected.\n\n In the absence of a protocol whose specification calls for the\n use of other weighting information, a client arranges the SRV\n RRs of the same Priority in the order in which target hosts,\n specified by the SRV RRs, will be contacted. The following\n algorithm SHOULD be used to order the SRV RRs of the same\n priority:\n\n To select a target to be contacted next, arrange all SRV RRs\n (that have not been ordered yet) in any order, except that all\n those with weight 0 are placed at the beginning of the list.\n\n Compute the sum of the weights of those RRs, and with each RR\n associate the running sum in the selected order. Then choose a\n uniform random number between 0 and the sum computed\n (inclusive), and select the RR whose running sum value is the\n first in the selected order which is greater than or equal to\n the random number selected. The target host specified in the\n selected SRV RR is the next one to be contacted by the client.\n Remove this SRV RR from the set of the unordered SRV RRs and\n apply the described algorithm to the unordered SRV RRs to select\n the next target host. Continue the ordering process until there\n are no unordered SRV RRs. 
This process is repeated for each\n Priority.\n\nThe difference between this description and my implementation is that I have\nadded a \"group\" field to the mix so that this algorithm is applied to each\ngroup independently of the others. Also, my input data has an \"id\" field\nwhich must be present on the same rows of the output and is used to map the\noutput back to my original input.\n\n\n> ...there's no question that writing things in C is a lot more work,\n> and takes some getting used to. Still, it's fast, so maybe worth it,\n> especially since you already know C++, and will therefore mostly just\n> need to learn the PostgreSQL coding conventions. The best thing to do\n> is probably to look at some of the existing examples within the\n> backend code. Most of the datatype code is in src/backend/utils/adt.\n> You might want to look at arrayfuncs.c (perhaps array_ref() or\n> array_map()); and also rowtypes.c (perhaps record_cmp()).\n>\n>\nI did actually find the arrayfuncs.c file and start looking through it for\nexamples. I'm just not entirely clear on what is going on in some of those\nfunctions -- what is necessary to keep in order to extract my data and get\nit represented in C structures and what I can remove. I was hoping there was\nsome sort of documentation on how to work with input arrays for extracting\nthe data and getting it converted. In any event, I have spent several hours\nreverse engineering how that stuff works, and I think I am pretty close to\nbeing able to get my data into a C structure that I can work with.\n\n\n> > Specifically, how do I go about preserving the pointers to the data that\n> I\n> > allocate in multi-call memory context so that they still point to the\n> data\n> > on the next call to the function for the next result row? Am I supposed\n> to\n> > set up some global variables to do that, or am I supposed to take a\n> > different approach? If I need to use global variables, then how do I deal\n> > with concurrency?\n>\n> Global variables would be a bad idea, not so much because of\n> concurrency as because they won't get cleaned up properly. Again, the\n> best thing to do is to look at existing examples, like array_unnest()\n> in src/backend/utils/adt/arrayfuncs.c; the short answer is that you\n> probably want to compute all your results on the first call and stash\n> them in the FuncCallContext (funcctx->user_fctx); and then on\n> subsequent calls just return one row per call.\n>\n\nThanks for suggesting array_unnest(). I think that will actually prove more\nuseful to me than the other example I'm using for extracting my data from an\narray. I was actually planning on computing the order on the first call and\nstoring it in a linked list which gets returned one item at a time until all\nrows have been returned. Also, I found a code example using Google that\nshowed someone storing data across function calls using that pointer. I used\ntheir example to produce this:\n\n<snip>\n if(SRF_IS_FIRSTCALL()) {\n funcctx = SRF_FIRSTCALL_INIT();\n\n /* This is where we stick or sorted data for returning later */\n funcctx->user_fctx =\nMemoryContextAlloc(funcctx->multi_call_memory_ctx, sizeof(sort_data));\n oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n data = (sort_data*) funcctx->user_fctx;\n</snip>\n\nI have a structure set up that is typedef'd to \"sort_data\" which stores\npointers to various things that I need to survive across the calls. 
Since\nthis seems to be what you are suggesting, I assume this is the correct\napproach.\n\n\n-- \nEliot Gable\n\nOn Tue, Jul 6, 2010 at 3:01 PM, Robert Haas <[email protected]> wrote:\nOn Sat, Jul 3, 2010 at 4:17 PM, Eliot Gable\n<[email protected]> wrote:\n> Read RFC 2782 on random weighted load balancing of SRV records inside DNS.\n\nIt may be asking a bit much to expect people here to read an RFC to\nfigure out how to help you solve this problem, but...\nYeah, I was not actually expecting them to read the whole RFC. The section on random weighted load balancing is only a few paragraphs and describes just the algorithm I am trying to implement as efficiently as possible:\n Priority The priority of this target host. A client MUST attempt to contact the target host with the lowest-numbered priority it can reach; target hosts with the same priority SHOULD be tried in an\n order defined by the weight field. The range is 0-65535. This is a 16 bit unsigned integer in network byte order. Weight A server selection mechanism. The weight field specifies a\n relative weight for entries with the same priority. Larger weights SHOULD be given a proportionately higher probability of being selected. The range of this number is 0-65535. This is a 16 bit unsigned integer in network byte order. Domain\n administrators SHOULD use Weight 0 when there isn't any server selection to do, to make the RR easier to read for humans (less noisy). In the presence of records containing weights greater\n than 0, records with weight 0 should have a very small chance of being selected. In the absence of a protocol whose specification calls for the use of other weighting information, a client arranges the SRV\n RRs of the same Priority in the order in which target hosts, specified by the SRV RRs, will be contacted. The following algorithm SHOULD be used to order the SRV RRs of the same priority:\n To select a target to be contacted next, arrange all SRV RRs (that have not been ordered yet) in any order, except that all those with weight 0 are placed at the beginning of the list.\n Compute the sum of the weights of those RRs, and with each RR associate the running sum in the selected order. Then choose a uniform random number between 0 and the sum computed (inclusive), and select the RR whose running sum value is the\n first in the selected order which is greater than or equal to the random number selected. The target host specified in the selected SRV RR is the next one to be contacted by the client. Remove this SRV RR from the set of the unordered SRV RRs and\n apply the described algorithm to the unordered SRV RRs to select the next target host. Continue the ordering process until there are no unordered SRV RRs. This process is repeated for each\n Priority.The difference between this description and my implementation is that I have added a \"group\" field to the mix so that this algorithm is applied to each group independently of the others. Also, my input data has an \"id\" field which must be present on the same rows of the output and is used to map the output back to my original input. \n \n...there's no question that writing things in C is a lot more work,\nand takes some getting used to.  Still, it's fast, so maybe worth it,\nespecially since you already know C++, and will therefore mostly just\nneed to learn the PostgreSQL coding conventions.  The best thing to do\nis probably to look at some of the existing examples within the\nbackend code.  
Most of the datatype code is in src/backend/utils/adt.\nYou might want to look at arrayfuncs.c (perhaps array_ref() or\narray_map()); and also rowtypes.c (perhaps record_cmp()).\nI did actually find the arrayfuncs.c file and start looking through it for examples. I'm just not entirely clear on what is going on in some of those functions -- what is necessary to keep in order to extract my data and get it represented in C structures and what I can remove. I was hoping there was some sort of documentation on how to work with input arrays for extracting the data and getting it converted. In any event, I have spent several hours reverse engineering how that stuff works, and I think I am pretty close to being able to get my data into a C structure that I can work with. \n \n> Specifically, how do I go about preserving the pointers to the data that I\n> allocate in multi-call memory context so that they still point to the data\n> on the next call to the function for the next result row? Am I supposed to\n> set up some global variables to do that, or am I supposed to take a\n> different approach? If I need to use global variables, then how do I deal\n> with concurrency?\n\nGlobal variables would be a bad idea, not so much because of\nconcurrency as because they won't get cleaned up properly.  Again, the\nbest thing to do is to look at existing examples, like array_unnest()\nin src/backend/utils/adt/arrayfuncs.c; the short answer is that you\nprobably want to compute all your results on the first call and stash\nthem in the FuncCallContext (funcctx->user_fctx); and then on\nsubsequent calls just return one row per call.Thanks for suggesting array_unnest(). I think that will actually prove more useful to me than the other example I'm using for extracting my data from an array. I was actually planning on computing the order on the first call and storing it in a linked list which gets returned one item at a time until all rows have been returned. Also, I found a code example using Google that showed someone storing data across function calls using that pointer. I used their example to produce this:\n<snip>    if(SRF_IS_FIRSTCALL()) {        funcctx = SRF_FIRSTCALL_INIT();        /* This is where we stick or sorted data for returning later */        funcctx->user_fctx = MemoryContextAlloc(funcctx->multi_call_memory_ctx, sizeof(sort_data));\n        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);        data = (sort_data*) funcctx->user_fctx;</snip>I have a structure set up that is typedef'd to \"sort_data\" which stores pointers to various things that I need to survive across the calls. Since this seems to be what you are suggesting, I assume this is the correct approach. \n-- Eliot Gable", "msg_date": "Tue, 6 Jul 2010 15:42:14 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On 07/06/2010 12:42 PM, Eliot Gable wrote:\n> Thanks for suggesting array_unnest(). I think that will actually prove\n> more useful to me than the other example I'm using for extracting my\n> data from an array. I was actually planning on computing the order on\n> the first call and storing it in a linked list which gets returned one\n> item at a time until all rows have been returned. Also, I found a code\n> example using Google that showed someone storing data across function\n> calls using that pointer. 
I used their example to produce this:\n> \n> <snip>\n> if(SRF_IS_FIRSTCALL()) {\n> funcctx = SRF_FIRSTCALL_INIT();\n> \n> /* This is where we stick or sorted data for returning later */\n> funcctx->user_fctx =\n> MemoryContextAlloc(funcctx->multi_call_memory_ctx, sizeof(sort_data));\n> oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n> data = (sort_data*) funcctx->user_fctx;\n> </snip>\n> \n> I have a structure set up that is typedef'd to \"sort_data\" which stores\n> pointers to various things that I need to survive across the calls.\n> Since this seems to be what you are suggesting, I assume this is the\n> correct approach.\n\nThis approach works, but you could also use the SFRM_Materialize mode\nand calculate the entire result set in one go. That tends to be simpler.\nSee, for example crosstab_hash() in contrib/tablefunc for an example.\n\nFWIW, there are also some good examples of array handling in PL/R, e.g.\npg_array_get_r() in pg_conversion.c\n\nHTH,\n\nJoe\n\n-- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & Support", "msg_date": "Tue, 06 Jul 2010 13:00:33 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Tue, Jul 6, 2010 at 4:00 PM, Joe Conway <[email protected]> wrote:\n\n>\n>\n> This approach works, but you could also use the SFRM_Materialize mode\n> and calculate the entire result set in one go. That tends to be simpler.\n> See, for example crosstab_hash() in contrib/tablefunc for an example.\n>\n> FWIW, there are also some good examples of array handling in PL/R, e.g.\n> pg_array_get_r() in pg_conversion.c\n>\n>\n Thanks. That looks like less code and probably will be slightly more\nefficient.\n\nOn Tue, Jul 6, 2010 at 4:00 PM, Joe Conway <[email protected]> wrote:\n\n\nThis approach works, but you could also use the SFRM_Materialize mode\nand calculate the entire result set in one go. That tends to be simpler.\nSee, for example crosstab_hash() in contrib/tablefunc for an example.\n\nFWIW, there are also some good examples of array handling in PL/R, e.g.\npg_array_get_r() in pg_conversion.c\n Thanks. That looks like less code and probably will be slightly more efficient.", "msg_date": "Tue, 6 Jul 2010 16:17:47 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "On Tue, Jul 6, 2010 at 4:17 PM, Eliot Gable <\[email protected] <egable%[email protected]>>wrote:\n\n>\n> On Tue, Jul 6, 2010 at 4:00 PM, Joe Conway <[email protected]> wrote:\n>\n>>\n>>\n>> This approach works, but you could also use the SFRM_Materialize mode\n>> and calculate the entire result set in one go. That tends to be simpler.\n>> See, for example crosstab_hash() in contrib/tablefunc for an example.\n>>\n>> FWIW, there are also some good examples of array handling in PL/R, e.g.\n>> pg_array_get_r() in pg_conversion.c\n>>\n>>\n> Thanks. That looks like less code and probably will be slightly more\n> efficient.\n>\n\nI just got my first test of the new C-based function compiled and loaded\ninto the server. 
The first time it is called, I see it correctly print the\npriority of each of the five rows of the array that I passed to it:\n\nGot priority 1.\nGot priority 1.\nGot priority 1.\nGot priority 1.\nGot priority 1.\nCONTEXT: ERROR\nCODE: XX000\nMESSAGE: cache lookup failed for type 7602245\n---------------------------------------------------------------------------\n\nI assume this \"cache lookup\" error is because I am not actually returning\nany results (or even NULL) at the end of the function call. If it means\nsomething else, please let me know.\n\nDo I need to somehow force the server to unload and then re-load this .so\nfile each time I build a new version of it? If so, how do I do that? Can I\njust re-run the \"create or replace function\" SQL code again to make that\nhappen? In every other system I have dealt with where I build a module, I\nhave some way to unload the module and force it to load again; but I don't\nsee a mention of that in the PostgreSQL documentation.\n\nThanks again to everyone who has provided feedback.\n\n-- \nEliot Gable\n\nOn Tue, Jul 6, 2010 at 4:17 PM, Eliot Gable <[email protected]> wrote:\nOn Tue, Jul 6, 2010 at 4:00 PM, Joe Conway <[email protected]> wrote:\n\n\nThis approach works, but you could also use the SFRM_Materialize mode\nand calculate the entire result set in one go. That tends to be simpler.\nSee, for example crosstab_hash() in contrib/tablefunc for an example.\n\nFWIW, there are also some good examples of array handling in PL/R, e.g.\npg_array_get_r() in pg_conversion.c\n Thanks. That looks like less code and probably will be slightly more efficient.\nI just got my first test of the new C-based function compiled and\nloaded into the server. The first time it is called, I see it correctly\nprint the priority of each of the five rows of the array that I passed\nto it:\n\nGot priority 1.\nGot priority 1.\nGot priority 1.\nGot priority 1.\nGot priority 1.\nCONTEXT: ERROR\nCODE: XX000\nMESSAGE: cache lookup failed for type 7602245\n---------------------------------------------------------------------------\n\nI assume this \"cache lookup\" error is because I am not actually\nreturning any results (or even NULL) at the end of the function call.\nIf it means something else, please let me know.\n\nDo I need to somehow force the server to unload and then re-load this\n.so file each time I build a new version of it? If so, how do I do\nthat? Can I just re-run the \"create or replace function\" SQL code again\nto make that happen? In every other system I have dealt with where I\nbuild a module, I have some way to unload the module and force it to\nload again; but I don't see a mention of that in the PostgreSQL\ndocumentation.\n\nThanks again to everyone who has provided feedback.-- Eliot Gable", "msg_date": "Tue, 6 Jul 2010 17:53:04 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Eliot Gable <[email protected]> writes:\n> Do I need to somehow force the server to unload and then re-load this .so\n> file each time I build a new version of it? If so, how do I do that?\n\nStart a new database session.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jul 2010 18:21:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting " }, { "msg_contents": "Thanks again for all the input and suggestions from people. 
I have this\nsorting algorithm re-implemented in C now and it is somewhere <2ms to run it\nnow; though it is difficult to get a more accurate measure. There may be\nsome additional optimizations I can come up with, but for now, this will\nwork very well compared to the alternative methods.\n\nOn Tue, Jul 6, 2010 at 6:21 PM, Tom Lane <[email protected]> wrote:\n\n> Eliot Gable <[email protected]<egable%[email protected]>>\n> writes:\n> > Do I need to somehow force the server to unload and then re-load this .so\n> > file each time I build a new version of it? If so, how do I do that?\n>\n> Start a new database session.\n>\n> regards, tom lane\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nThanks again for all the input and suggestions from people. I have this sorting algorithm re-implemented in C now and it is somewhere <2ms to run it now; though it is difficult to get a more accurate measure. There may be some additional optimizations I can come up with, but for now, this will work very well compared to the alternative methods.\nOn Tue, Jul 6, 2010 at 6:21 PM, Tom Lane <[email protected]> wrote:\nEliot Gable <[email protected]> writes:\n> Do I need to somehow force the server to unload and then re-load this .so\n> file each time I build a new version of it? If so, how do I do that?\n\nStart a new database session.\n\n                        regards, tom lane\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Wed, 7 Jul 2010 15:23:12 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Highly Efficient Custom Sorting" }, { "msg_contents": "Hi Eliot,\n\nWould you mind posting your code for reference. It is nice to\nhave working examples when trying to figure out how it all fits\ntogether.\n\nRegards,\nKen\n\nOn Wed, Jul 07, 2010 at 03:23:12PM -0400, Eliot Gable wrote:\n> Thanks again for all the input and suggestions from people. I have this\n> sorting algorithm re-implemented in C now and it is somewhere <2ms to run it\n> now; though it is difficult to get a more accurate measure. There may be\n> some additional optimizations I can come up with, but for now, this will\n> work very well compared to the alternative methods.\n> \n> On Tue, Jul 6, 2010 at 6:21 PM, Tom Lane <[email protected]> wrote:\n> \n> > Eliot Gable <[email protected]<egable%[email protected]>>\n> > writes:\n> > > Do I need to somehow force the server to unload and then re-load this .so\n> > > file each time I build a new version of it? If so, how do I do that?\n> >\n> > Start a new database session.\n> >\n> > regards, tom lane\n> >\n> \n> \n> \n> -- \n> Eliot Gable\n> \n> \"We do not inherit the Earth from our ancestors: we borrow it from our\n> children.\" ~David Brower\n> \n> \"I decided the words were too conservative for me. 
We're not borrowing from\n> our children, we're stealing from them--and it's not even considered to be a\n> crime.\" ~David Brower\n> \n> \"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\n> live to eat.) ~Marcus Tullius Cicero\n", "msg_date": "Wed, 7 Jul 2010 14:42:36 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Highly Efficient Custom Sorting" } ]
[ { "msg_contents": "Hi,\n\n \n\nWe are using postgresql-8.4.0 on 64-bit Linux machine (open-SUSE 11.x).\nIt's a master/slave deployment & slony-2.0.4.rc2 is used for DB\nreplication (from master to slave). \n\n \n\nAt times we have observed that postgres stops responding for several\nminutes, even couldn't fetch the number of entries in a particular\ntable. One such instance happens when we execute the following steps:\n\n- Add few lakh entries (~20) to table X on the master DB.\n\n- After addition, slony starts replication on the slave DB. It\ntakes several minutes (~25 mins) for replication to finish.\n\n- During this time (while replication is in progress), sometimes\npostgres stops responding, i.e. we couldn't even fetch the number of\nentries in any table (X, Y, etc).\n\n \n\nCan you please let us know what could the reason for such a behavior and\nhow it can be fixed/improved.\n\n \n\nPlease let us know if any information is required wrt hardware\ndetails/configurations etc.\n\n \n\nRegards,\n\nSachin\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nWe are using postgresql-8.4.0 on 64-bit Linux machine (open-SUSE 11.x).\nIt’s a master/slave deployment & slony-2.0.4.rc2 is used for DB\nreplication (from master to slave). \n \nAt times we have observed that postgres stops responding for several minutes,\neven couldn’t fetch the number of entries in a particular table. One such\ninstance happens when we execute the following steps:\n-        \nAdd\nfew lakh entries (~20) to table X on the master DB.\n-        \nAfter\naddition, slony starts replication on the slave DB. It takes several minutes\n(~25 mins) for replication to finish.\n-        \nDuring\nthis time (while replication is in progress), sometimes postgres stops responding, i.e. we\ncouldn’t even fetch the number of entries in any table (X, Y, etc).\n \nCan you please let us know what could the\nreason for such a behavior and how it can be fixed/improved.\n \nPlease let us know if any information is\nrequired wrt hardware details/configurations etc.\n \nRegards,\nSachin", "msg_date": "Fri, 2 Jul 2010 11:10:13 +0530", "msg_from": "\"Sachin Kumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues with postgresql-8.4.0: Query gets stuck sometimes" }, { "msg_contents": "On Fri, Jul 2, 2010 at 1:40 AM, Sachin Kumar <[email protected]> wrote:\n> Hi,\n>\n> We are using postgresql-8.4.0 on 64-bit Linux machine (open-SUSE 11.x). It’s\n> a master/slave deployment & slony-2.0.4.rc2 is used for DB replication (from\n> master to slave).\n\nYou should really be running 8.4.4, not 8.4.0, as there are quite a\nfew bug fixes since 8.4.0 was released.\n\nslony 2.0.4 is latest, and I'm not sure I trust it completely just\nyet, and am still running 1.2.latest myself. At least move forward\nfrom 2.0.4.rc2 to 2.0.4 release.\n\n> At times we have observed that postgres stops responding for several\n> minutes, even couldn’t fetch the number of entries in a particular table.\n\nNote that retrieving the number of entries in a table is not a cheap\noperation in pgsql. Try something cheaper like \"select * from\nsometable limit 1;\" and see if that responds. If that seems to hang,\nopen another session and see what select * from pg_statistic has to\nsay about waiting queries.\n\n> One such instance happens when we execute the following steps:\n>\n> -         Add few lakh entries (~20) to table X on the master DB.\n\nNote that most westerner's don't know what a lakh is. 
(100k I believe?)\n\n> -         After addition, slony starts replication on the slave DB. It takes\n> several minutes (~25 mins) for replication to finish.\n>\n> -         During this time (while replication is in progress), sometimes\n> postgres stops responding, i.e. we couldn’t even fetch the number of entries\n> in any table (X, Y, etc).\n\nI have seen some issues pop up during subscription of large sets like\nthis. Most of the time you're just outrunning your IO subsystem.\nOccasionally a nasty interaction between slony, autovacuum, and user\nqueries causes a problem.\n\n> Can you please let us know what could the reason for such a behavior and how\n> it can be fixed/improved.\n\nYou'll need to see what's happening on your end. If pg_statistic says\nyour simple select * from X limit 1 is waiting, we'll go from there.\nIf it returns but bigger queries take a long time you've got a\ndifferent issue and probably need to monitor your IO subsystem with\nthings like iostat, vmstat, iotop, etc.\n\n> Please let us know if any information is required wrt hardware\n> details/configurations etc.\n\nAlways useful to have.\n", "msg_date": "Fri, 2 Jul 2010 01:50:58 -0400", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with postgresql-8.4.0: Query gets\n\tstuck sometimes" }, { "msg_contents": "On Fri, Jul 2, 2010 at 1:40 AM, Sachin Kumar <[email protected]> wrote:\n> At times we have observed that postgres stops responding for several\n> minutes, even couldn’t fetch the number of entries in a particular table.\n> One such instance happens when we execute the following steps:\n\nSounds sort of like a checkpoint spike.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Mon, 5 Jul 2010 20:34:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with postgresql-8.4.0: Query gets\n\tstuck sometimes" } ]
[ { "msg_contents": "Hello,\n\nI try to make a query run quicker but I don't really know how to give \nhints to the planner.\n\nWe are using postgresql 8.4.3 64bit on ubuntu 9.10 server. The hardware \nis a 10 SAS drive (15k) on a single RAID 10 array with 8Go RAM.\nQueries come from J2EE application (OLAP cube), but running them in \npg_admin perform the same way.\n\nI made a short example that shows what I think is the problem. The real \nquery is much longer but with only one join it already cause problems.\n\nHere is the short example :\n\nselect rfoadv_8.rfoadvsup as c8,\n sum(dwhinv.dwhinvqte) as m0\nfrom\n dwhinv as dwhinv,\n rfoadv as rfoadv_8\nwhere (dwhinv.dwhinv___rforefide = 'HPLUS'\n and (dwhinv.dwhinv___rfodomide = 'PMSI' and \ndwhinv.dwhinv___rfoindrvs = '1' and \ndwhinv.dwhinv___rfoindide='recN3_BB_reel') )\n and dwhinv.dwhinv_p2rfodstide = rfoadv_8.rfoadvinf\n and rfoadv_8.rfoadvsup = 'ACTI'\ngroup by rfoadv_8.rfoadvsup\n\ndwhinv is a table with almost 6.000.000 records\nrfoadv is a view with 800.000 records\nrfoadv is based on rfoade which is 50.000 records\n\nHere is the explain analyse :\nGroupAggregate (cost=0.00..16.56 rows=1 width=13) (actual \ntime=2028.452..2028.453 rows=1 loops=1)\n -> Nested Loop (cost=0.00..16.54 rows=1 width=13) (actual \ntime=0.391..1947.432 rows=42664 loops=1)\n Join Filter: (((ade2.rfoadegch)::text >= (ade1.rfoadegch)::text) \nAND ((ade2.rfoadedrt)::text <= (ade1.rfoadedrt)::text))\n -> Nested Loop (cost=0.00..12.54 rows=1 width=214) (actual \ntime=0.304..533.281 rows=114350 loops=1)\n -> Index Scan using dwhinv_rdi_idx on dwhinv \n(cost=0.00..4.87 rows=1 width=12) (actual time=0.227..16.827 rows=6360 \nloops=1)\n Index Cond: (((dwhinv___rforefide)::text = \n'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n(dwhinv___rfoindrvs = 1))\n -> Index Scan using rfoade_dsi_idx on rfoade ade2 \n(cost=0.00..7.63 rows=3 width=213) (actual time=0.007..0.037 rows=18 \nloops=6360)\n Index Cond: ((ade2.rfoade_i_rfodstide)::text = \n(dwhinv.dwhinv_p2rfodstide)::text)\n -> Index Scan using rfoade_pk on rfoade ade1 (cost=0.00..3.98 \nrows=1 width=213) (actual time=0.008..0.009 rows=0 loops=114350)\n Index Cond: (((ade1.rfoade___rforefide)::text = \n(ade2.rfoade___rforefide)::text) AND ((ade1.rfoade_i_rfodstide)::text = \n'ACTI'::text) AND ((ade1.rfoade___rfovdeide)::text = \n(ade2.rfoade___rfovdeide)::text) AND (ade1.rfoadervs = ade2.rfoadervs))\n\nWe can see that the planner think that accessing dwhinv with the \ndwhinv_rdi_idx index will return 1 row, but in fact there are 6360. So \nthe nested loop is not done with 1 loop but 6360. With only one Join, \nthe query runs in about 1.5 sec which is not really long, but with 8 \njoin, the same mistake is repeated 8 times, the query runs in 30-60 sec. \nI try to disable nested loop, hash join and merge join are done instead \nof nested loops, example query runs in 0.2 - 0.5 sec, and the real query \nno more that 1 sec ! 
Which is great.\n\nHere is the execution plan with nested loop off:\n\nGroupAggregate (cost=12.56..2453.94 rows=1 width=13) (actual \ntime=817.306..817.307 rows=1 loops=1)\n -> Hash Join (cost=12.56..2453.93 rows=1 width=13) (actual \ntime=42.583..720.746 rows=42664 loops=1)\n Hash Cond: (((ade2.rfoade___rforefide)::text = \n(ade1.rfoade___rforefide)::text) AND ((ade2.rfoade___rfovdeide)::text = \n(ade1.rfoade___rfovdeide)::text) AND (ade2.rfoadervs = ade1.rfoadervs))\n Join Filter: (((ade2.rfoadegch)::text >= (ade1.rfoadegch)::text) \nAND ((ade2.rfoadedrt)::text <= (ade1.rfoadedrt)::text))\n -> Hash Join (cost=4.88..2446.21 rows=1 width=214) (actual \ntime=42.168..411.962 rows=114350 loops=1)\n Hash Cond: ((ade2.rfoade_i_rfodstide)::text = \n(dwhinv.dwhinv_p2rfodstide)::text)\n -> Seq Scan on rfoade ade2 (cost=0.00..2262.05 \nrows=47805 width=213) (actual time=0.057..78.988 rows=47805 loops=1)\n -> Hash (cost=4.87..4.87 rows=1 width=12) (actual \ntime=41.632..41.632 rows=6360 loops=1)\n -> Index Scan using dwhinv_rdi_idx on dwhinv \n(cost=0.00..4.87 rows=1 width=12) (actual time=0.232..28.199 rows=6360 \nloops=1)\n Index Cond: (((dwhinv___rforefide)::text = \n'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n(dwhinv___rfoindrvs = 1))\n -> Hash (cost=7.63..7.63 rows=3 width=213) (actual \ntime=0.347..0.347 rows=11 loops=1)\n -> Index Scan using rfoade_dsi_idx on rfoade ade1 \n(cost=0.00..7.63 rows=3 width=213) (actual time=0.095..0.307 rows=11 \nloops=1)\n Index Cond: ((rfoade_i_rfodstide)::text = 'ACTI'::text)\n\nEven if dwhinv row estimation is wrong, the query is quicker\n\n\nSo after looking at dwhinv_rdi_idx statistics, I found that \ndwhinv___rfoindide related stats wasn't good, so I try \"ALTER TABLE \ndwhinv ALTER dwhinv_p2rfodstide SET STATISTICS 2000\" and launch an \nvaccum analyse to gather more impressive stats. Stats are better but \nquery plan is the same and query is not optimised. So I try reindex on \nDWHINV as a last chance, but it changes nothing !\n\nMaybe I'm wrong with the interpretation of the plan but I don't really \nthink so because with no nested loops this query is really fast ! I do \nnot plan to disable nested loop on the whole database because sometimes, \nnested loops are greats !\n\nNow I'm stuck ! I don't know how to make the planner understand there \nare 6000 rows. Or maybe the 3 column index is a bad idea... ?!\n\nThanks\n\n-- \nHOSTIN Damien - Equipe R&D\nSoci�t� Ax�ge\nwww.axege.com\n\n\n\n", "msg_date": "Fri, 02 Jul 2010 10:48:59 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query with planner row strange estimation" }, { "msg_contents": "Hello,\n\nBefore the week end I tried to change the index, but even with the \nmono-column index on differents columns, the estimated number of rows \nfrom dwhinv is 1.\n\nAnyone have a suggestion, what can I check ?\n\n\nthx\n\n\ndamien hostin a �crit :\n> Hello,\n>\n> I try to make a query run quicker but I don't really know how to give \n> hints to the planner.\n>\n> We are using postgresql 8.4.3 64bit on ubuntu 9.10 server. The \n> hardware is a 10 SAS drive (15k) on a single RAID 10 array with 8Go RAM.\n> Queries come from J2EE application (OLAP cube), but running them in \n> pg_admin perform the same way.\n>\n> I made a short example that shows what I think is the problem. 
The \n> real query is much longer but with only one join it already cause \n> problems.\n>\n> Here is the short example :\n>\n> select rfoadv_8.rfoadvsup as c8,\n> sum(dwhinv.dwhinvqte) as m0\n> from\n> dwhinv as dwhinv,\n> rfoadv as rfoadv_8\n> where (dwhinv.dwhinv___rforefide = 'HPLUS'\n> and (dwhinv.dwhinv___rfodomide = 'PMSI' and \n> dwhinv.dwhinv___rfoindrvs = '1' and \n> dwhinv.dwhinv___rfoindide='recN3_BB_reel') )\n> and dwhinv.dwhinv_p2rfodstide = rfoadv_8.rfoadvinf\n> and rfoadv_8.rfoadvsup = 'ACTI'\n> group by rfoadv_8.rfoadvsup\n>\n> dwhinv is a table with almost 6.000.000 records\n> rfoadv is a view with 800.000 records\n> rfoadv is based on rfoade which is 50.000 records\n>\n> Here is the explain analyse :\n> GroupAggregate (cost=0.00..16.56 rows=1 width=13) (actual \n> time=2028.452..2028.453 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..16.54 rows=1 width=13) (actual \n> time=0.391..1947.432 rows=42664 loops=1)\n> Join Filter: (((ade2.rfoadegch)::text >= \n> (ade1.rfoadegch)::text) AND ((ade2.rfoadedrt)::text <= \n> (ade1.rfoadedrt)::text))\n> -> Nested Loop (cost=0.00..12.54 rows=1 width=214) (actual \n> time=0.304..533.281 rows=114350 loops=1)\n> -> Index Scan using dwhinv_rdi_idx on dwhinv \n> (cost=0.00..4.87 rows=1 width=12) (actual time=0.227..16.827 rows=6360 \n> loops=1)\n> Index Cond: (((dwhinv___rforefide)::text = \n> 'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n> ((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n> (dwhinv___rfoindrvs = 1))\n> -> Index Scan using rfoade_dsi_idx on rfoade ade2 \n> (cost=0.00..7.63 rows=3 width=213) (actual time=0.007..0.037 rows=18 \n> loops=6360)\n> Index Cond: ((ade2.rfoade_i_rfodstide)::text = \n> (dwhinv.dwhinv_p2rfodstide)::text)\n> -> Index Scan using rfoade_pk on rfoade ade1 (cost=0.00..3.98 \n> rows=1 width=213) (actual time=0.008..0.009 rows=0 loops=114350)\n> Index Cond: (((ade1.rfoade___rforefide)::text = \n> (ade2.rfoade___rforefide)::text) AND ((ade1.rfoade_i_rfodstide)::text \n> = 'ACTI'::text) AND ((ade1.rfoade___rfovdeide)::text = \n> (ade2.rfoade___rfovdeide)::text) AND (ade1.rfoadervs = ade2.rfoadervs))\n>\n> We can see that the planner think that accessing dwhinv with the \n> dwhinv_rdi_idx index will return 1 row, but in fact there are 6360. So \n> the nested loop is not done with 1 loop but 6360. With only one Join, \n> the query runs in about 1.5 sec which is not really long, but with 8 \n> join, the same mistake is repeated 8 times, the query runs in 30-60 \n> sec. I try to disable nested loop, hash join and merge join are done \n> instead of nested loops, example query runs in 0.2 - 0.5 sec, and the \n> real query no more that 1 sec ! 
Which is great.\n>\n> Here is the execution plan with nested loop off:\n>\n> GroupAggregate (cost=12.56..2453.94 rows=1 width=13) (actual \n> time=817.306..817.307 rows=1 loops=1)\n> -> Hash Join (cost=12.56..2453.93 rows=1 width=13) (actual \n> time=42.583..720.746 rows=42664 loops=1)\n> Hash Cond: (((ade2.rfoade___rforefide)::text = \n> (ade1.rfoade___rforefide)::text) AND ((ade2.rfoade___rfovdeide)::text \n> = (ade1.rfoade___rfovdeide)::text) AND (ade2.rfoadervs = ade1.rfoadervs))\n> Join Filter: (((ade2.rfoadegch)::text >= \n> (ade1.rfoadegch)::text) AND ((ade2.rfoadedrt)::text <= \n> (ade1.rfoadedrt)::text))\n> -> Hash Join (cost=4.88..2446.21 rows=1 width=214) (actual \n> time=42.168..411.962 rows=114350 loops=1)\n> Hash Cond: ((ade2.rfoade_i_rfodstide)::text = \n> (dwhinv.dwhinv_p2rfodstide)::text)\n> -> Seq Scan on rfoade ade2 (cost=0.00..2262.05 \n> rows=47805 width=213) (actual time=0.057..78.988 rows=47805 loops=1)\n> -> Hash (cost=4.87..4.87 rows=1 width=12) (actual \n> time=41.632..41.632 rows=6360 loops=1)\n> -> Index Scan using dwhinv_rdi_idx on dwhinv \n> (cost=0.00..4.87 rows=1 width=12) (actual time=0.232..28.199 rows=6360 \n> loops=1)\n> Index Cond: (((dwhinv___rforefide)::text = \n> 'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n> ((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n> (dwhinv___rfoindrvs = 1))\n> -> Hash (cost=7.63..7.63 rows=3 width=213) (actual \n> time=0.347..0.347 rows=11 loops=1)\n> -> Index Scan using rfoade_dsi_idx on rfoade ade1 \n> (cost=0.00..7.63 rows=3 width=213) (actual time=0.095..0.307 rows=11 \n> loops=1)\n> Index Cond: ((rfoade_i_rfodstide)::text = \n> 'ACTI'::text)\n>\n> Even if dwhinv row estimation is wrong, the query is quicker\n>\n>\n> So after looking at dwhinv_rdi_idx statistics, I found that \n> dwhinv___rfoindide related stats wasn't good, so I try \"ALTER TABLE \n> dwhinv ALTER dwhinv_p2rfodstide SET STATISTICS 2000\" and launch an \n> vaccum analyse to gather more impressive stats. Stats are better but \n> query plan is the same and query is not optimised. So I try reindex on \n> DWHINV as a last chance, but it changes nothing !\n>\n> Maybe I'm wrong with the interpretation of the plan but I don't really \n> think so because with no nested loops this query is really fast ! I do \n> not plan to disable nested loop on the whole database because \n> sometimes, nested loops are greats !\n>\n> Now I'm stuck ! I don't know how to make the planner understand there \n> are 6000 rows. Or maybe the 3 column index is a bad idea... ?!\n>\n> Thanks\n>\n\n\n-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSoci�t� Ax�ge\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com\n\n\n\n", "msg_date": "Mon, 05 Jul 2010 09:28:03 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "Hello,\n\nPostgresql configuration was default. So I take a look at pgtune which \nhelp me start a bit of tuning. I thought that the planner mistake could \ncome from the default low memory configuration. But after applying new \nparameters, nothing has changed. 
The query is still low, the execution \nplan is still using nested loops where hashjoin/hashmerge seems a lot \nbetter.\n\nHere are the postgresql.conf parameters I changed using pgtune advises, \nall other are defaults.\n(The hardware is a 10 SAS drive (15k) on a single RAID 10 array with 8Go \nRAM, with 2 opteron dual core 64bit (I can't remember the exact model))\n\n# generated for 100 connection and 6G RAM with datawarehouse type\n#\ndefault_statistics_target = 100\nmaintenance_work_mem = 768MB\n#constraint_exclusion = on\n#checkpoint_completion_target = 0.9\neffective_cache_size = 4608MB\nwork_mem = 30MB\nwal_buffers = 32MB\ncheckpoint_segments = 64\nshared_buffers = 1536MB\n\nSome information that I may have forgotten.\nSELECT version();\n\"PostgreSQL 8.4.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real \n(Ubuntu 4.4.1-4ubuntu8) 4.4.1, 64-bit\"\n\n\nand here is a link with the full request explain analyse \nhttp://explain.depesz.com/s/Yx0\n\n\nI will try the same query with the same data on another server, with \n\"PostgreSQL 8.3.11 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4 \n(Ubuntu 4.2.4-1ubuntu3)\".\n\n\ndamien hostin a �crit :\n> Hello,\n>\n> Before the week end I tried to change the index, but even with the \n> mono-column index on differents columns, the estimated number of rows \n> from dwhinv is 1.\n>\n> Anyone have a suggestion, what can I check ?\n>\n>\n> thx\n>\n>\n> damien hostin a �crit :\n>> Hello,\n>>\n>> I try to make a query run quicker but I don't really know how to give \n>> hints to the planner.\n>>\n>> We are using postgresql 8.4.3 64bit on ubuntu 9.10 server. The \n>> hardware is a 10 SAS drive (15k) on a single RAID 10 array with 8Go RAM.\n>> Queries come from J2EE application (OLAP cube), but running them in \n>> pg_admin perform the same way.\n>>\n>> I made a short example that shows what I think is the problem. 
The \n>> real query is much longer but with only one join it already cause \n>> problems.\n>>\n>> Here is the short example :\n>>\n>> select rfoadv_8.rfoadvsup as c8,\n>> sum(dwhinv.dwhinvqte) as m0\n>> from\n>> dwhinv as dwhinv,\n>> rfoadv as rfoadv_8\n>> where (dwhinv.dwhinv___rforefide = 'HPLUS'\n>> and (dwhinv.dwhinv___rfodomide = 'PMSI' and \n>> dwhinv.dwhinv___rfoindrvs = '1' and \n>> dwhinv.dwhinv___rfoindide='recN3_BB_reel') )\n>> and dwhinv.dwhinv_p2rfodstide = rfoadv_8.rfoadvinf\n>> and rfoadv_8.rfoadvsup = 'ACTI'\n>> group by rfoadv_8.rfoadvsup\n>>\n>> dwhinv is a table with almost 6.000.000 records\n>> rfoadv is a view with 800.000 records\n>> rfoadv is based on rfoade which is 50.000 records\n>>\n>> Here is the explain analyse :\n>> GroupAggregate (cost=0.00..16.56 rows=1 width=13) (actual \n>> time=2028.452..2028.453 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..16.54 rows=1 width=13) (actual \n>> time=0.391..1947.432 rows=42664 loops=1)\n>> Join Filter: (((ade2.rfoadegch)::text >= \n>> (ade1.rfoadegch)::text) AND ((ade2.rfoadedrt)::text <= \n>> (ade1.rfoadedrt)::text))\n>> -> Nested Loop (cost=0.00..12.54 rows=1 width=214) (actual \n>> time=0.304..533.281 rows=114350 loops=1)\n>> -> Index Scan using dwhinv_rdi_idx on dwhinv \n>> (cost=0.00..4.87 rows=1 width=12) (actual time=0.227..16.827 \n>> rows=6360 loops=1)\n>> Index Cond: (((dwhinv___rforefide)::text = \n>> 'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n>> ((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n>> (dwhinv___rfoindrvs = 1))\n>> -> Index Scan using rfoade_dsi_idx on rfoade ade2 \n>> (cost=0.00..7.63 rows=3 width=213) (actual time=0.007..0.037 rows=18 \n>> loops=6360)\n>> Index Cond: ((ade2.rfoade_i_rfodstide)::text = \n>> (dwhinv.dwhinv_p2rfodstide)::text)\n>> -> Index Scan using rfoade_pk on rfoade ade1 \n>> (cost=0.00..3.98 rows=1 width=213) (actual time=0.008..0.009 rows=0 \n>> loops=114350)\n>> Index Cond: (((ade1.rfoade___rforefide)::text = \n>> (ade2.rfoade___rforefide)::text) AND ((ade1.rfoade_i_rfodstide)::text \n>> = 'ACTI'::text) AND ((ade1.rfoade___rfovdeide)::text = \n>> (ade2.rfoade___rfovdeide)::text) AND (ade1.rfoadervs = ade2.rfoadervs))\n>>\n>> We can see that the planner think that accessing dwhinv with the \n>> dwhinv_rdi_idx index will return 1 row, but in fact there are 6360. \n>> So the nested loop is not done with 1 loop but 6360. With only one \n>> Join, the query runs in about 1.5 sec which is not really long, but \n>> with 8 join, the same mistake is repeated 8 times, the query runs in \n>> 30-60 sec. I try to disable nested loop, hash join and merge join are \n>> done instead of nested loops, example query runs in 0.2 - 0.5 sec, \n>> and the real query no more that 1 sec ! 
Which is great.\n>>\n>> Here is the execution plan with nested loop off:\n>>\n>> GroupAggregate (cost=12.56..2453.94 rows=1 width=13) (actual \n>> time=817.306..817.307 rows=1 loops=1)\n>> -> Hash Join (cost=12.56..2453.93 rows=1 width=13) (actual \n>> time=42.583..720.746 rows=42664 loops=1)\n>> Hash Cond: (((ade2.rfoade___rforefide)::text = \n>> (ade1.rfoade___rforefide)::text) AND ((ade2.rfoade___rfovdeide)::text \n>> = (ade1.rfoade___rfovdeide)::text) AND (ade2.rfoadervs = \n>> ade1.rfoadervs))\n>> Join Filter: (((ade2.rfoadegch)::text >= \n>> (ade1.rfoadegch)::text) AND ((ade2.rfoadedrt)::text <= \n>> (ade1.rfoadedrt)::text))\n>> -> Hash Join (cost=4.88..2446.21 rows=1 width=214) (actual \n>> time=42.168..411.962 rows=114350 loops=1)\n>> Hash Cond: ((ade2.rfoade_i_rfodstide)::text = \n>> (dwhinv.dwhinv_p2rfodstide)::text)\n>> -> Seq Scan on rfoade ade2 (cost=0.00..2262.05 \n>> rows=47805 width=213) (actual time=0.057..78.988 rows=47805 loops=1)\n>> -> Hash (cost=4.87..4.87 rows=1 width=12) (actual \n>> time=41.632..41.632 rows=6360 loops=1)\n>> -> Index Scan using dwhinv_rdi_idx on dwhinv \n>> (cost=0.00..4.87 rows=1 width=12) (actual time=0.232..28.199 \n>> rows=6360 loops=1)\n>> Index Cond: (((dwhinv___rforefide)::text = \n>> 'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n>> ((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n>> (dwhinv___rfoindrvs = 1))\n>> -> Hash (cost=7.63..7.63 rows=3 width=213) (actual \n>> time=0.347..0.347 rows=11 loops=1)\n>> -> Index Scan using rfoade_dsi_idx on rfoade ade1 \n>> (cost=0.00..7.63 rows=3 width=213) (actual time=0.095..0.307 rows=11 \n>> loops=1)\n>> Index Cond: ((rfoade_i_rfodstide)::text = \n>> 'ACTI'::text)\n>>\n>> Even if dwhinv row estimation is wrong, the query is quicker\n>>\n>>\n>> So after looking at dwhinv_rdi_idx statistics, I found that \n>> dwhinv___rfoindide related stats wasn't good, so I try \"ALTER TABLE \n>> dwhinv ALTER dwhinv_p2rfodstide SET STATISTICS 2000\" and launch an \n>> vaccum analyse to gather more impressive stats. Stats are better but \n>> query plan is the same and query is not optimised. So I try reindex \n>> on DWHINV as a last chance, but it changes nothing !\n>>\n>> Maybe I'm wrong with the interpretation of the plan but I don't \n>> really think so because with no nested loops this query is really \n>> fast ! I do not plan to disable nested loop on the whole database \n>> because sometimes, nested loops are greats !\n>>\n>> Now I'm stuck ! I don't know how to make the planner understand there \n>> are 6000 rows. Or maybe the 3 column index is a bad idea... ?!\n>>\n>> Thanks\n>>\n>\n>\n\n\n-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSoci�t� Ax�ge\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com\n\n\n\n", "msg_date": "Tue, 06 Jul 2010 14:48:04 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "Hello again,\n\nAt last, I check the same query with the same data on my desktop \ncomputer. Just after loading the data, the queries were slow, I launch a \nvaccum analyse which collect good stats on the main table, the query \nbecame quick (~200ms). Now 1classic sata disk computer is faster than \nour little monster server !!\n\nI compare the volume between the two database. On my desktop computer, \nthe table dwinv has 12000 row with 6000 implicated in my query. The dev \nserver has 6000000 rows with only 6000 implicated in the query. 
I check \nthe repartition of the column I am using in this query and actually, \nonly the 6000 rows implicated in the query are using column with non \nnull values. I put statistics target on this columns at 10000 which make \nthe analyse take half the table as sample for stats. This way I get some \nvalues for these columns. But the execution plan is still mistaking. \n(plan : http://explain.depesz.com/s/LKW)\n\nI try to compare with desktop plan, but it seems to have nothing \ncomparable. I though I would find something like \"access on dwhinv with \n6000 estimated rows\", but it does the following : \nhttp://explain.depesz.com/s/kbn\n\nI don't understand \"rows=0\" in :\nIndex Scan using dwhinv_dig_idx on dwhinv (cost=0.00..25.91 rows=1 \nwidth=80) (actual time=0.009..0.010 rows=0 loops=120)\n\n * Index Cond: ((dwhinv.dwhinv___rsadigide)::text =\n (adi2.rsaadi_i_rsadigide)::text)\n * Filter: (((dwhinv.dwhinv___rforefide)::text = 'HPLUS'::text) AND\n (dwhinv.dwhinv___rfoindrvs = 1) AND\n ((dwhinv.dwhinv___rfodomide)::text = 'PMSI'::text) AND\n ((dwhinv.dwhinv___rfoindide)::text = 'recN3_BB_reel'::text))\n\nI also managed to make the query run 10x faster with SQL92 join syntax \ninstead of old \"from table1, table where table1.col1=table2.col1\". This \nway the query takes 3sec instead of 30sec. But again, without nested \nloops, 200ms !\n\nI will try later with new mondrian release and a better balanced fact \ntable.\n\n\nThanks anyway__\n\n\ndamien hostin a �crit :\n> Hello,\n>\n> Postgresql configuration was default. So I take a look at pgtune which \n> help me start a bit of tuning. I thought that the planner mistake \n> could come from the default low memory configuration. But after \n> applying new parameters, nothing has changed. The query is still low, \n> the execution plan is still using nested loops where \n> hashjoin/hashmerge seems a lot better.\n>\n> Here are the postgresql.conf parameters I changed using pgtune \n> advises, all other are defaults.\n> (The hardware is a 10 SAS drive (15k) on a single RAID 10 array with \n> 8Go RAM, with 2 opteron dual core 64bit (I can't remember the exact \n> model))\n>\n> # generated for 100 connection and 6G RAM with datawarehouse type\n> #\n> default_statistics_target = 100\n> maintenance_work_mem = 768MB\n> #constraint_exclusion = on\n> #checkpoint_completion_target = 0.9\n> effective_cache_size = 4608MB\n> work_mem = 30MB\n> wal_buffers = 32MB\n> checkpoint_segments = 64\n> shared_buffers = 1536MB\n>\n> Some information that I may have forgotten.\n> SELECT version();\n> \"PostgreSQL 8.4.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real \n> (Ubuntu 4.4.1-4ubuntu8) 4.4.1, 64-bit\"\n>\n>\n> and here is a link with the full request explain analyse \n> http://explain.depesz.com/s/Yx0\n>\n>\n> I will try the same query with the same data on another server, with \n> \"PostgreSQL 8.3.11 on i486-pc-linux-gnu, compiled by GCC cc (GCC) \n> 4.2.4 (Ubuntu 4.2.4-1ubuntu3)\".\n>\n>\n> damien hostin a �crit :\n>> Hello,\n>>\n>> Before the week end I tried to change the index, but even with the \n>> mono-column index on differents columns, the estimated number of rows \n>> from dwhinv is 1.\n>>\n>> Anyone have a suggestion, what can I check ?\n>>\n>>\n>> thx\n>>\n>>\n>> damien hostin a �crit :\n>>> Hello,\n>>>\n>>> I try to make a query run quicker but I don't really know how to \n>>> give hints to the planner.\n>>>\n>>> We are using postgresql 8.4.3 64bit on ubuntu 9.10 server. 
The \n>>> hardware is a 10 SAS drive (15k) on a single RAID 10 array with 8Go \n>>> RAM.\n>>> Queries come from J2EE application (OLAP cube), but running them in \n>>> pg_admin perform the same way.\n>>>\n>>> I made a short example that shows what I think is the problem. The \n>>> real query is much longer but with only one join it already cause \n>>> problems.\n>>>\n>>> Here is the short example :\n>>>\n>>> select rfoadv_8.rfoadvsup as c8,\n>>> sum(dwhinv.dwhinvqte) as m0\n>>> from\n>>> dwhinv as dwhinv,\n>>> rfoadv as rfoadv_8\n>>> where (dwhinv.dwhinv___rforefide = 'HPLUS'\n>>> and (dwhinv.dwhinv___rfodomide = 'PMSI' and \n>>> dwhinv.dwhinv___rfoindrvs = '1' and \n>>> dwhinv.dwhinv___rfoindide='recN3_BB_reel') )\n>>> and dwhinv.dwhinv_p2rfodstide = rfoadv_8.rfoadvinf\n>>> and rfoadv_8.rfoadvsup = 'ACTI'\n>>> group by rfoadv_8.rfoadvsup\n>>>\n>>> dwhinv is a table with almost 6.000.000 records\n>>> rfoadv is a view with 800.000 records\n>>> rfoadv is based on rfoade which is 50.000 records\n>>>\n>>> Here is the explain analyse :\n>>> GroupAggregate (cost=0.00..16.56 rows=1 width=13) (actual \n>>> time=2028.452..2028.453 rows=1 loops=1)\n>>> -> Nested Loop (cost=0.00..16.54 rows=1 width=13) (actual \n>>> time=0.391..1947.432 rows=42664 loops=1)\n>>> Join Filter: (((ade2.rfoadegch)::text >= \n>>> (ade1.rfoadegch)::text) AND ((ade2.rfoadedrt)::text <= \n>>> (ade1.rfoadedrt)::text))\n>>> -> Nested Loop (cost=0.00..12.54 rows=1 width=214) (actual \n>>> time=0.304..533.281 rows=114350 loops=1)\n>>> -> Index Scan using dwhinv_rdi_idx on dwhinv \n>>> (cost=0.00..4.87 rows=1 width=12) (actual time=0.227..16.827 \n>>> rows=6360 loops=1)\n>>> Index Cond: (((dwhinv___rforefide)::text = \n>>> 'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n>>> ((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n>>> (dwhinv___rfoindrvs = 1))\n>>> -> Index Scan using rfoade_dsi_idx on rfoade ade2 \n>>> (cost=0.00..7.63 rows=3 width=213) (actual time=0.007..0.037 rows=18 \n>>> loops=6360)\n>>> Index Cond: ((ade2.rfoade_i_rfodstide)::text = \n>>> (dwhinv.dwhinv_p2rfodstide)::text)\n>>> -> Index Scan using rfoade_pk on rfoade ade1 \n>>> (cost=0.00..3.98 rows=1 width=213) (actual time=0.008..0.009 rows=0 \n>>> loops=114350)\n>>> Index Cond: (((ade1.rfoade___rforefide)::text = \n>>> (ade2.rfoade___rforefide)::text) AND \n>>> ((ade1.rfoade_i_rfodstide)::text = 'ACTI'::text) AND \n>>> ((ade1.rfoade___rfovdeide)::text = (ade2.rfoade___rfovdeide)::text) \n>>> AND (ade1.rfoadervs = ade2.rfoadervs))\n>>>\n>>> We can see that the planner think that accessing dwhinv with the \n>>> dwhinv_rdi_idx index will return 1 row, but in fact there are 6360. \n>>> So the nested loop is not done with 1 loop but 6360. With only one \n>>> Join, the query runs in about 1.5 sec which is not really long, but \n>>> with 8 join, the same mistake is repeated 8 times, the query runs in \n>>> 30-60 sec. I try to disable nested loop, hash join and merge join \n>>> are done instead of nested loops, example query runs in 0.2 - 0.5 \n>>> sec, and the real query no more that 1 sec ! 
Which is great.\n>>>\n>>> Here is the execution plan with nested loop off:\n>>>\n>>> GroupAggregate (cost=12.56..2453.94 rows=1 width=13) (actual \n>>> time=817.306..817.307 rows=1 loops=1)\n>>> -> Hash Join (cost=12.56..2453.93 rows=1 width=13) (actual \n>>> time=42.583..720.746 rows=42664 loops=1)\n>>> Hash Cond: (((ade2.rfoade___rforefide)::text = \n>>> (ade1.rfoade___rforefide)::text) AND \n>>> ((ade2.rfoade___rfovdeide)::text = (ade1.rfoade___rfovdeide)::text) \n>>> AND (ade2.rfoadervs = ade1.rfoadervs))\n>>> Join Filter: (((ade2.rfoadegch)::text >= \n>>> (ade1.rfoadegch)::text) AND ((ade2.rfoadedrt)::text <= \n>>> (ade1.rfoadedrt)::text))\n>>> -> Hash Join (cost=4.88..2446.21 rows=1 width=214) (actual \n>>> time=42.168..411.962 rows=114350 loops=1)\n>>> Hash Cond: ((ade2.rfoade_i_rfodstide)::text = \n>>> (dwhinv.dwhinv_p2rfodstide)::text)\n>>> -> Seq Scan on rfoade ade2 (cost=0.00..2262.05 \n>>> rows=47805 width=213) (actual time=0.057..78.988 rows=47805 loops=1)\n>>> -> Hash (cost=4.87..4.87 rows=1 width=12) (actual \n>>> time=41.632..41.632 rows=6360 loops=1)\n>>> -> Index Scan using dwhinv_rdi_idx on dwhinv \n>>> (cost=0.00..4.87 rows=1 width=12) (actual time=0.232..28.199 \n>>> rows=6360 loops=1)\n>>> Index Cond: (((dwhinv___rforefide)::text = \n>>> 'HPLUS'::text) AND ((dwhinv___rfodomide)::text = 'PMSI'::text) AND \n>>> ((dwhinv___rfoindide)::text = 'recN3_BB_reel'::text) AND \n>>> (dwhinv___rfoindrvs = 1))\n>>> -> Hash (cost=7.63..7.63 rows=3 width=213) (actual \n>>> time=0.347..0.347 rows=11 loops=1)\n>>> -> Index Scan using rfoade_dsi_idx on rfoade ade1 \n>>> (cost=0.00..7.63 rows=3 width=213) (actual time=0.095..0.307 rows=11 \n>>> loops=1)\n>>> Index Cond: ((rfoade_i_rfodstide)::text = \n>>> 'ACTI'::text)\n>>>\n>>> Even if dwhinv row estimation is wrong, the query is quicker\n>>>\n>>>\n>>> So after looking at dwhinv_rdi_idx statistics, I found that \n>>> dwhinv___rfoindide related stats wasn't good, so I try \"ALTER TABLE \n>>> dwhinv ALTER dwhinv_p2rfodstide SET STATISTICS 2000\" and launch an \n>>> vaccum analyse to gather more impressive stats. Stats are better but \n>>> query plan is the same and query is not optimised. So I try reindex \n>>> on DWHINV as a last chance, but it changes nothing !\n>>>\n>>> Maybe I'm wrong with the interpretation of the plan but I don't \n>>> really think so because with no nested loops this query is really \n>>> fast ! I do not plan to disable nested loop on the whole database \n>>> because sometimes, nested loops are greats !\n>>>\n>>> Now I'm stuck ! I don't know how to make the planner understand \n>>> there are 6000 rows. Or maybe the 3 column index is a bad idea... ?!\n>>>\n>>> Thanks\n>>>\n>>\n>>\n>\n>\n\n\n-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSoci�t� Ax�ge\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com\n\n\n\n", "msg_date": "Wed, 07 Jul 2010 16:39:38 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "On Wed, Jul 7, 2010 at 10:39 AM, damien hostin <[email protected]> wrote:\n> Hello again,\n>\n> At last, I check the same query with the same data on my desktop computer.\n> Just after loading the data, the queries were slow, I launch a vaccum\n> analyse which collect good stats on the main table, the query became quick\n> (~200ms). 
Now 1classic sata disk computer is faster than our little monster\n> server !!\n\nHave you tried running ANALYZE on the production server?\n\nYou might also want to try ALTER TABLE ... SET STATISTICS to a large\nvalue on some of the join columns involved in the query.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 8 Jul 2010 14:42:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "Robert Haas a �crit :\n> On Wed, Jul 7, 2010 at 10:39 AM, damien hostin <[email protected]> wrote:\n> \n>> Hello again,\n>>\n>> At last, I check the same query with the same data on my desktop computer.\n>> Just after loading the data, the queries were slow, I launch a vaccum\n>> analyse which collect good stats on the main table, the query became quick\n>> (~200ms). Now 1classic sata disk computer is faster than our little monster\n>> server !!\n>> \n>\n> Have you tried running ANALYZE on the production server?\n>\n> You might also want to try ALTER TABLE ... SET STATISTICS to a large\n> value on some of the join columns involved in the query.\n>\n> \nHello,\n\nBefore comparing the test case on the two machines, I run analyse on the \nwhole and look at pg_stats table to see if change occurs for the \ncolumns. but on the production server the stats never became as good as \non the desktop computer. I set statistic at 10000 on column used by the \njoin, run analyse which take a 3000000 row sample then look at the \nstats. The stats are not as good as on the desktop. Row number is nearly \nthe same but only 1 or 2 values are found.\n\nThe data are not balanced the same way on the two computer :\n- Desktop is 12000 rows with 6000 implicated in the query (50%),\n- \"Production\" (actually a dev/test server) is 6 million rows with 6000 \nimplicated in the query (0,1%).\nColumns used in the query are nullable, and in the 5994000 other rows \nthat are not implicated in the query these columns are null.\n\nI don't know if the statistic target is a % or a number of value to \nobtain, but event set at max (10000), it didn't managed to collect good \nstats (for this particular query).\nAs I don't know what more to do, my conclusion is that the data need to \nbe better balanced to allow the analyse gather better stats. But if \nthere is a way to improve the stats/query with this ugly balanced data, \nI'm open to it !\n\nI hope that in real production, data will never be loaded this way. If \nthis appened we will maybe set enable_nestloop to off, but I don't think \nit's a good solution, other query have a chance to get slower.\n\n\nThanks for helping\n\n-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSoci�t� Ax�ge\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com\n\n\n\n", "msg_date": "Fri, 09 Jul 2010 12:13:27 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "On Fri, Jul 9, 2010 at 6:13 AM, damien hostin <[email protected]> wrote:\n>> Have you tried running ANALYZE on the production server?\n>>\n>> You might also want to try ALTER TABLE ... 
SET STATISTICS to a large\n>> value on some of the join columns involved in the query.\n>\n> Hello,\n>\n> Before comparing the test case on the two machines, I run analyse on the\n> whole and look at pg_stats table to see if change occurs for the columns.\n> but on the production server the stats never became as good as on the\n> desktop computer. I set statistic at 10000 on column used by the join, run\n> analyse which take a 3000000 row sample then look at the stats. The stats\n> are not as good as on the desktop. Row number is nearly the same but only 1\n> or 2 values are found.\n>\n> The data are not balanced the same way on the two computer :\n> - Desktop is 12000 rows with 6000 implicated in the query (50%),\n> - \"Production\" (actually a dev/test server) is 6 million rows with 6000\n> implicated in the query (0,1%).\n> Columns used in the query are nullable, and in the 5994000 other rows that\n> are not implicated in the query these columns are null.\n>\n> I don't know if the statistic target is a % or a number of value to obtain,\n\nIt's a number of values to obtain.\n\n> but event set at max (10000), it didn't managed to collect good stats (for\n> this particular query).\n\nI think there's a cutoff where it won't collect values unless they\noccur significantly more often than the average frequency. I wonder\nif that might be biting you here: without the actual values in the MCV\ntable, the join selectivity estimates probably aren't too good.\n\n> As I don't know what more to do, my conclusion is that the data need to be\n> better balanced to allow the analyse gather better stats. But if there is a\n> way to improve the stats/query with this ugly balanced data, I'm open to it\n> !\n>\n> I hope that in real production, data will never be loaded this way. If this\n> appened we will maybe set enable_nestloop to off, but I don't think it's a\n> good solution, other query have a chance to get slower.\n\nYeah, that usually works out poorly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 9 Jul 2010 16:25:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "It's probably one of the cases when having HINTS in PostgreSQL may be\nvery helpful..\n\nSELECT /*+ enable_nestloop=off */ ... FROM ...\n\nwill just fix this query without impacting other queries and without\nadding any additional instructions into the application code..\n\nSo, why there is a such resistance to implement hints withing SQL\nqueries in PG?..\n\nRgds,\n-Dimitri\n\n\nOn 7/9/10, Robert Haas <[email protected]> wrote:\n> On Fri, Jul 9, 2010 at 6:13 AM, damien hostin <[email protected]>\n> wrote:\n>>> Have you tried running ANALYZE on the production server?\n>>>\n>>> You might also want to try ALTER TABLE ... SET STATISTICS to a large\n>>> value on some of the join columns involved in the query.\n>>\n>> Hello,\n>>\n>> Before comparing the test case on the two machines, I run analyse on the\n>> whole and look at pg_stats table to see if change occurs for the columns.\n>> but on the production server the stats never became as good as on the\n>> desktop computer. I set statistic at 10000 on column used by the join, run\n>> analyse which take a 3000000 row sample then look at the stats. The stats\n>> are not as good as on the desktop. 
Row number is nearly the same but only\n>> 1\n>> or 2 values are found.\n>>\n>> The data are not balanced the same way on the two computer :\n>> - Desktop is 12000 rows with 6000 implicated in the query (50%),\n>> - \"Production\" (actually a dev/test server) is 6 million rows with 6000\n>> implicated in the query (0,1%).\n>> Columns used in the query are nullable, and in the 5994000 other rows that\n>> are not implicated in the query these columns are null.\n>>\n>> I don't know if the statistic target is a % or a number of value to\n>> obtain,\n>\n> It's a number of values to obtain.\n>\n>> but event set at max (10000), it didn't managed to collect good stats (for\n>> this particular query).\n>\n> I think there's a cutoff where it won't collect values unless they\n> occur significantly more often than the average frequency. I wonder\n> if that might be biting you here: without the actual values in the MCV\n> table, the join selectivity estimates probably aren't too good.\n>\n>> As I don't know what more to do, my conclusion is that the data need to be\n>> better balanced to allow the analyse gather better stats. But if there is\n>> a\n>> way to improve the stats/query with this ugly balanced data, I'm open to\n>> it\n>> !\n>>\n>> I hope that in real production, data will never be loaded this way. If\n>> this\n>> appened we will maybe set enable_nestloop to off, but I don't think it's a\n>> good solution, other query have a chance to get slower.\n>\n> Yeah, that usually works out poorly.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 12 Jul 2010 09:17:32 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "\nDimitri a �crit :\n> It's probably one of the cases when having HINTS in PostgreSQL may be\n> very helpful..\n>\n> SELECT /*+ enable_nestloop=off */ ... FROM ...\n>\n> will just fix this query without impacting other queries and without\n> adding any additional instructions into the application code..\n>\n> So, why there is a such resistance to implement hints withing SQL\n> queries in PG?..\n>\n> Rgds,\n> -Dimitri\n>\n> \n+1.\nAnother typical case when it would be helpful is with setting the \ncursor_tuple_fraction GUC variable for a specific statement, without \nbeing obliged to issue 2 SET statements, one before the SELECT and the \nother after.\n\n> On 7/9/10, Robert Haas <[email protected]> wrote:\n> \n>> On Fri, Jul 9, 2010 at 6:13 AM, damien hostin <[email protected]>\n>> wrote:\n>> \n>>>> Have you tried running ANALYZE on the production server?\n>>>>\n>>>> You might also want to try ALTER TABLE ... SET STATISTICS to a large\n>>>> value on some of the join columns involved in the query.\n>>>> \n>>> Hello,\n>>>\n>>> Before comparing the test case on the two machines, I run analyse on the\n>>> whole and look at pg_stats table to see if change occurs for the columns.\n>>> but on the production server the stats never became as good as on the\n>>> desktop computer. I set statistic at 10000 on column used by the join, run\n>>> analyse which take a 3000000 row sample then look at the stats. The stats\n>>> are not as good as on the desktop. 
Row number is nearly the same but only\n>>> 1\n>>> or 2 values are found.\n>>>\n>>> The data are not balanced the same way on the two computer :\n>>> - Desktop is 12000 rows with 6000 implicated in the query (50%),\n>>> - \"Production\" (actually a dev/test server) is 6 million rows with 6000\n>>> implicated in the query (0,1%).\n>>> Columns used in the query are nullable, and in the 5994000 other rows that\n>>> are not implicated in the query these columns are null.\n>>>\n>>> I don't know if the statistic target is a % or a number of value to\n>>> obtain,\n>>> \n>> It's a number of values to obtain.\n>>\n>> \n>>> but event set at max (10000), it didn't managed to collect good stats (for\n>>> this particular query).\n>>> \n>> I think there's a cutoff where it won't collect values unless they\n>> occur significantly more often than the average frequency. I wonder\n>> if that might be biting you here: without the actual values in the MCV\n>> table, the join selectivity estimates probably aren't too good.\n>>\n>> \n>>> As I don't know what more to do, my conclusion is that the data need to be\n>>> better balanced to allow the analyse gather better stats. But if there is\n>>> a\n>>> way to improve the stats/query with this ugly balanced data, I'm open to\n>>> it\n>>> !\n>>>\n>>> I hope that in real production, data will never be loaded this way. If\n>>> this\n>>> appened we will maybe set enable_nestloop to off, but I don't think it's a\n>>> good solution, other query have a chance to get slower.\n>>> \n>> Yeah, that usually works out poorly.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise Postgres Company\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>> \nRegards.\nPhilippe Beaudoin.\n", "msg_date": "Mon, 12 Jul 2010 22:33:17 +0200", "msg_from": "phb07 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "phb07 a �crit :\n>\n> Dimitri a �crit :\n>> It's probably one of the cases when having HINTS in PostgreSQL may be\n>> very helpful..\n>>\n>> SELECT /*+ enable_nestloop=off */ ... FROM ...\n>>\n>> will just fix this query without impacting other queries and without\n>> adding any additional instructions into the application code..\n>>\n>> So, why there is a such resistance to implement hints withing SQL\n>> queries in PG?..\n>>\n>> Rgds,\n>> -Dimitri\n>>\n>> \n> +1.\n> Another typical case when it would be helpful is with setting the \n> cursor_tuple_fraction GUC variable for a specific statement, without \n> being obliged to issue 2 SET statements, one before the SELECT and the \n> other after.\n>\n>\nI remember that the \"dimension\" columns of the fact table have indexes \nlike with \"WHERE IS NOT NULL\" on the column indexed. Example:\n\nCREATE INDEX dwhinv_pd2_idx\n ON dwhinv\n USING btree\n (dwhinv_p2rfodstide)\nTABLESPACE tb_index\n WHERE dwhinv_p2rfodstide IS NOT NULL;\n\nIs the where clause being used to select the sample rows on which the \nstats will be calculated or just used to exclude values after collecting \nstat ? As I am writing I realize there's must be no link between a table \ncolumn stats and an index a the same column. 
(By the way, if I use IS \nNOT NULL on each column with such an index, it changes nothing)\n\n\nAbout the oracle-like hints, it does not really help, because the query \nis generated in an external jar that I should fork to include the \nmodification. I would prefer forcing a plan based on the query hashcode, \nbut this does not fix what makes the planner go wrong.\n\n-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSociété Axège\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com\n\n\n\n", "msg_date": "Tue, 13 Jul 2010 10:51:47 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with planner row strange estimation" }, { "msg_contents": "On Mon, Jul 12, 2010 at 4:33 PM, phb07 <[email protected]> wrote:\n>\n> Dimitri a écrit :\n>>\n>> It's probably one of the cases when having HINTS in PostgreSQL may be\n>> very helpful..\n>>\n>> SELECT /*+ enable_nestloop=off */ ... FROM ...\n>>\n>> will just fix this query without impacting other queries and without\n>> adding any additional instructions into the application code..\n>>\n>> So, why there is a such resistance to implement hints withing SQL\n>> queries in PG?..\n>>\n>\n> +1.\n> Another typical case when it would be helpful is with setting the\n> cursor_tuple_fraction GUC variable for a specific statement, without being\n> obliged to issue 2 SET statements, one before the SELECT and the other\n> after.\n\nWe've previously discussed adding a command something like:\n\nLET (variable = value, variable = value, ...) command\n\n...which would set those variables just for that one command.  But\nhonestly I'm not sure how much it'll help with query planner problems.\n Disabling nestloops altogether, even for one particular query, is\noften going to be a sledgehammer where you need a scalpel.  But then\nagain, a sledgehammer is better than no hammer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 22 Jul 2010 14:08:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with planner row strange estimation" } ]
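Until something like the LET syntax sketched above exists, the usual workaround for this thread's problem is to raise the statistics target on the columns behind the misestimate and, failing that, to scope the planner override to a single transaction with SET LOCAL. The snippet below is only an illustration reusing the table, column and query names quoted earlier in the thread; the statistics target of 1000 is an arbitrary choice.

-- Collect a larger sample for the join column behind the bad row estimate,
-- then re-analyse just that column.
ALTER TABLE dwhinv ALTER COLUMN dwhinv_p2rfodstide SET STATISTICS 1000;
ANALYZE dwhinv (dwhinv_p2rfodstide);

-- If the plan is still wrong, disable nested loops for this statement only.
-- SET LOCAL reverts automatically at COMMIT or ROLLBACK, so other queries
-- on the same connection keep the default planner settings.
BEGIN;
SET LOCAL enable_nestloop = off;
SELECT rfoadv_8.rfoadvsup AS c8,
       sum(dwhinv.dwhinvqte) AS m0
FROM   dwhinv,
       rfoadv AS rfoadv_8
WHERE  dwhinv.dwhinv___rforefide = 'HPLUS'
  AND  dwhinv.dwhinv___rfodomide = 'PMSI'
  AND  dwhinv.dwhinv___rfoindrvs = '1'
  AND  dwhinv.dwhinv___rfoindide = 'recN3_BB_reel'
  AND  dwhinv.dwhinv_p2rfodstide = rfoadv_8.rfoadvinf
  AND  rfoadv_8.rfoadvsup = 'ACTI'
GROUP BY rfoadv_8.rfoadvsup;
COMMIT;

This keeps the override confined to one transaction without touching any other query, although it still needs the wrapping statements the posters above would prefer to avoid, and disabling nested loops wholesale remains the sledgehammer Robert Haas warns about.
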
[ { "msg_contents": "Hi,\n\nWe are using dbt2 to check performance of postgresql 8.4 on Linux64 machine. When we increase \"TERMINALS PER WAREHOUSE\" TPM value increase rapidly but rampup time increase too , dbt2 estimated rampup time calculation do not work properly that’s why it run the test for wrong duration i.e.\n\n1.\nSettings :\n DATABASE CONNECTIONS: 50\n TERMINALS PER WAREHOUSE: 10\n SCALE FACTOR (WAREHOUSES): 200\n DURATION OF TEST (in sec): 7200\nResult : \n Response Time (s)\n Transaction % Average : 90th % Total Rollbacks %\n ------------ ----- --------------------- ----------- --------------- -----\n Delivery 3.96 0.285 : 0.023 26883 0 0.00\n New Order 45.26 0.360 : 0.010 307335 3082 1.01\n Order Status 3.98 0.238 : 0.003 27059 0 0.00\n Payment 42.82 0.233 : 0.003 290802 0 0.00\n Stock Level 3.97 0.245 : 0.002 26970 0 0.00\n ------------ ----- --------------------- ----------- --------------- -----\n \n 2508.36 new-order transactions per minute (NOTPM)\n 120.1 minute duration\n 0 total unknown errors\n 2000 second(s) ramping up\n\n2. \nSettings :\n DATABASE CONNECTIONS: 50\n TERMINALS PER WAREHOUSE: 40\n SCALE FACTOR (WAREHOUSES): 200\n DURATION OF TEST (in sec): 7200\nResult : \n Response Time (s)\n Transaction % Average : 90th % Total Rollbacks %\n ------------ ----- --------------------- ----------- --------------- -----\n Delivery 3.95 8.123 : 4.605 43672 0 0.00\n New Order 45.19 12.205 : 2.563 499356 4933 1.00\n Order Status 4.00 7.385 : 3.314 44175 0 0.00\n Payment 42.89 7.221 : 1.920 473912 0 0.00\n Stock Level 3.97 7.093 : 1.887 43868 0 0.00\n ------------ ----- --------------------- ----------- --------------- -----\n \n 7009.40 new-order transactions per minute (NOTPM)\n 69.8 minute duration\n 0 total unknown errors\n 8016 second(s) ramping up\n3.\nSettings :\n DATABASE CONNECTIONS: 50\n TERMINALS PER WAREHOUSE: 40\n SCALE FACTOR (WAREHOUSES): 200\n DURATION OF TEST (in sec): 7200\nResult : \n Response Time (s)\n Transaction % Average : 90th % Total Rollbacks %\n ------------ ----- --------------------- ----------- --------------- -----\n Delivery 3.98 9.095 : 16.103 15234 0 0.00\n New Order 45.33 7.896 : 14.794 173539 1661 0.97\n Order Status 3.96 8.165 : 13.989 15156 0 0.00\n Payment 42.76 7.295 : 12.470 163726 0 0.00\n Stock Level 3.97 7.198 : 12.520 15198 0 0.00\n ------------ ----- --------------------- ----------- --------------- -----\n \n 10432.09 new-order transactions per minute (NOTPM)\n 16.3 minute duration\n 0 total unknown errors\n 11227 second(s) ramping up\n \nThese results show that dbt2 test actually did not run for 2 hours but it start varying with the increase of \"TERMINALS PER WAREHOUSE\" value i.e. 1st Run ( 120.1 minute duration ), 2nd Run (69.8 minute duration) and 3rd Run (16.3 minute duration).\n\nTo fix and sync with the rampup time, I have made a minor change in the dbt2-run-workload script i.e.\n\n --- dbt2-run-workload 2010-07-02 08:18:06.000000000 -0400\n +++ dbt2-run-workload 2010-07-02 08:20:11.000000000 -0400\n @@ -625,7 +625,11 @@\n done\n \n echo -n \"estimated rampup time: \"\n -do_sleep $SLEEP_RAMPUP\n +#do_sleep $SLEEP_RAMPUP\n +while ! grep START ${DRIVER_OUTPUT_DIR}/*/mix.log ; do\n + sleep 1\n +done\n +date\n echo \"estimated rampup time has elapsed\"\n \n # Clear the readprofile data after the driver ramps up.\n \nWhat is rempup time ? And what do you think about the patch?. Can you please guide me?. 
Thanks.\n\nBest Regards,\nAsif Naeem\n", "msg_date": "Fri, 2 Jul 2010 20:38:51 +0600", "msg_from": "MUHAMMAD ASIF <[email protected]>", "msg_from_op": true, "msg_subject": "using dbt2 postgresql 8.4 - rampup time issue" }, { "msg_contents": "On Fri, Jul 2, 2010 at 7:38 AM, MUHAMMAD ASIF <[email protected]> wrote:\n> Hi,\n>\n> We are using dbt2 to check performance of postgresql 8.4 on Linux64 machine.\n> When we increase \"TERMINALS PER WAREHOUSE\" TPM value increase rapidly but\n> rampup time increase too , dbt2 estimated rampup time calculation do not\n> work properly that’s why it run the test for wrong duration i.e.\n\nA clarification of terms may help to start. The \"terminals per\nwarehouse\" in the scripts correlates to the number terminals emulated.\n An emulated terminal is tied to a warehouse's district.  In other\nwords, the number of terminals translates to the number of districts\nin a warehouse across the entire database.
To increase the terminals\nper warehouse implies you have scaled the database differently, which\nI'm assuming is not the case here.\n\n> 1.\n> Settings :\n>     DATABASE CONNECTIONS: 50\n>     TERMINALS PER WAREHOUSE: 10\n>     SCALE FACTOR (WAREHOUSES): 200\n>     DURATION OF TEST (in sec): 7200\n> Result :\n>                              Response Time (s)\n>      Transaction      %    Average :    90th %        Total\n> Rollbacks      %\n>     ------------  -----  ---------------------  -----------\n> ---------------  -----\n>         Delivery   3.96      0.285 :     0.023        26883\n> 0   0.00\n>        New Order  45.26      0.360 :     0.010       307335\n> 3082   1.01\n>     Order Status   3.98      0.238 :     0.003        27059\n> 0   0.00\n>          Payment  42.82      0.233 :     0.003       290802\n> 0   0.00\n>      Stock Level   3.97      0.245 :     0.002        26970\n> 0   0.00\n>     ------------  -----  ---------------------  -----------\n> ---------------  -----\n>\n>     2508.36 new-order transactions per minute (NOTPM)\n>     120.1 minute duration\n>     0 total unknown errors\n>     2000 second(s) ramping up\n>\n> 2.\n> Settings :\n>     DATABASE CONNECTIONS: 50\n>     TERMINALS PER WAREHOUSE: 40\n>     SCALE FACTOR (WAREHOUSES): 200\n>     DURATION OF TEST (in sec): 7200\n> Result :\n>                              Response Time (s)\n>      Transaction      %    Average :    90th %        Total\n> Rollbacks      %\n>     ------------  -----  ---------------------  -----------\n> ---------------  -----\n>         Delivery   3.95      8.123 :     4.605        43672\n> 0   0.00\n>        New Order  45.19     12.205 :     2.563       499356\n> 4933   1.00\n>     Order Status   4.00      7.385 :     3.314        44175\n> 0   0.00\n>          Payment  42.89      7.221 :     1.920       473912\n> 0   0.00\n>      Stock Level   3.97      7.093 :     1.887        43868\n> 0   0.00\n>     ------------  -----  ---------------------  -----------\n> ---------------  -----\n>\n>     7009.40 new-order transactions per minute (NOTPM)\n>     69.8 minute duration\n>     0 total unknown errors\n>     8016 second(s) ramping up\n>\n> 3.\n> Settings :\n>     DATABASE CONNECTIONS: 50\n>     TERMINALS PER WAREHOUSE: 40\n>     SCALE FACTOR (WAREHOUSES): 200\n>     DURATION OF TEST (in sec): 7200\n> Result :\n>                              Response Time (s)\n>      Transaction      %    Average :    90th %        Total\n> Rollbacks      %\n>     ------------  -----  ---------------------  -----------\n> ---------------  -----\n>         Delivery   3.98      9.095 :    16.103        15234\n> 0   0.00\n>        New Order  45.33      7.896 :    14.794       173539\n> 1661   0.97\n>     Order Status   3.96      8.165 :    13.989        15156\n> 0   0.00\n>          Payment  42.76      7.295 :    12.470       163726\n> 0   0.00\n>      Stock Level   3.97      7.198 :    12.520        15198\n> 0   0.00\n>     ------------  -----  ---------------------  -----------\n> ---------------  -----\n>\n>     10432.09 new-order transactions per minute (NOTPM)\n>     16.3 minute duration\n>     0 total unknown errors\n>     11227 second(s) ramping up\n>\n> These results show that dbt2 test actually did not run for 2 hours but it\n> start varying with the increase of  \"TERMINALS PER WAREHOUSE\" value i.e. 1st\n> Run ( 120.1 minute duration ), 2nd Run (69.8 minute duration) and 3rd Run\n> (16.3 minute duration).\n\nThe ramp up times are actually as expected (explained below). 
What\nyou are witnessing is more likely that the driver is crashing because\nthe values are out of range from the scale of the database. You have\neffectively told the driver that there are more than 10 districts per\nwarehouse, and have likely not built the database that way. I'm\nactually surprised the driver actually ramped up completely.\n\n> To fix and sync with the rampup time, I have made a minor change in the\n> dbt2-run-workload script i.e.\n>\n>     --- dbt2-run-workload      2010-07-02 08:18:06.000000000 -0400\n>     +++ dbt2-run-workload   2010-07-02 08:20:11.000000000 -0400\n>     @@ -625,7 +625,11 @@\n>      done\n>\n>      echo -n \"estimated rampup time: \"\n>     -do_sleep $SLEEP_RAMPUP\n>     +#do_sleep $SLEEP_RAMPUP\n>     +while ! grep START ${DRIVER_OUTPUT_DIR}/*/mix.log ; do\n>     +       sleep 1\n>     +done\n>     +date\n>      echo \"estimated rampup time has elapsed\"\n>\n>      # Clear the readprofile data after the driver ramps up.\n>\n> What is rempup time ? And what do you think about the patch?. Can you please\n> guide me?. Thanks.\n\nThe ramp up time is supposed to be the multiplication of the terminals\nper warehouse, the number of warehouses with the sleep time between\nthe creation of each terminal. The only problem with your patch is\nthat the latest scripts (in the source code repo) breaks out the\nclient load into multiple instances of the driver program. Thus there\nis a log file per instance of the driver so your patch work work as\nis. Well, and there is that the ramp up calculation doesn't appear to\nbe broken. ;)\n\nRegards,\nMark\n", "msg_date": "Fri, 2 Jul 2010 16:56:26 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using dbt2 postgresql 8.4 - rampup time issue" }, { "msg_contents": "> A clarification of terms may help to start. The \"terminals per\n> warehouse\" in the scripts correlates to the number terminals emulated.\n> An emulated terminal is tied to a warehouse's district. In other\n> words, the number of terminals translates to the number of districts\n> in a warehouse across the entire database. To increase the terminals\n> per warehouse implies you have scaled the database differently, which\n> I'm assuming is not the case here.\n> \n\nScale the database … Can you please elaborate ? . To increase \"terminals per warehouse\" I added only one option ( i.e. \"-t\" for dbt2-run-workload ) with normal dbt2 test i.e. 
\n\n ./dbt2-pgsql-create-db\n ./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES\n ./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d $REGRESS_DURATION_SEC -w $WAREHOUSES -o $OUTPUT_DIR -t $TERMINAL_PER_WAREHOUSE\n ./dbt2-pgsql-stop-db\n\nIs this change enough or I am missing some thing ?\n\n> > 1.\n> > Settings :\n> > DATABASE CONNECTIONS: 50\n> > TERMINALS PER WAREHOUSE: 10\n> > SCALE FACTOR (WAREHOUSES): 200\n> > DURATION OF TEST (in sec): 7200\n> > Result :\n> > Response Time (s)\n> > Transaction % Average : 90th % Total\n> > Rollbacks %\n> > ------------ ----- --------------------- -----------\n> > --------------- -----\n> > Delivery 3.96 0.285 : 0.023 26883\n> > 0 0.00\n> > New Order 45.26 0.360 : 0.010 307335\n> > 3082 1.01\n> > Order Status 3.98 0.238 : 0.003 27059\n> > 0 0.00\n> > Payment 42.82 0.233 : 0.003 290802\n> > 0 0.00\n> > Stock Level 3.97 0.245 : 0.002 26970\n> > 0 0.00\n> > ------------ ----- --------------------- -----------\n> > --------------- -----\n> >\n> > 2508.36 new-order transactions per minute (NOTPM)\n> > 120.1 minute duration\n> > 0 total unknown errors\n> > 2000 second(s) ramping up\n> >\n> > 2.\n> > Settings :\n> > DATABASE CONNECTIONS: 50\n> > TERMINALS PER WAREHOUSE: 40\n> > SCALE FACTOR (WAREHOUSES): 200\n> > DURATION OF TEST (in sec): 7200\n> > Result :\n> > Response Time (s)\n> > Transaction % Average : 90th % Total\n> > Rollbacks %\n> > ------------ ----- --------------------- -----------\n> > --------------- -----\n> > Delivery 3.95 8.123 : 4.605 43672\n> > 0 0.00\n> > New Order 45.19 12.205 : 2.563 499356\n> > 4933 1.00\n> > Order Status 4.00 7.385 : 3.314 44175\n> > 0 0.00\n> > Payment 42.89 7.221 : 1.920 473912\n> > 0 0.00\n> > Stock Level 3.97 7.093 : 1.887 43868\n> > 0 0.00\n> > ------------ ----- --------------------- -----------\n> > --------------- -----\n> >\n> > 7009.40 new-order transactions per minute (NOTPM)\n> > 69.8 minute duration\n> > 0 total unknown errors\n> > 8016 second(s) ramping up\n> >\n\n8016 (actual rampup time) + ( 69.8 * 60 ) = 12204 \n\n5010 (estimated rampup time) + 7200 (estimated steady state time) = \n12210 \n\n> > 3.\n> > Settings :\n> > DATABASE CONNECTIONS: 50\n> > TERMINALS PER WAREHOUSE: 40\n> > SCALE FACTOR (WAREHOUSES): 200\n> > DURATION OF TEST (in sec): 7200\n> > Result :\n> > Response Time (s)\n> > Transaction % Average : 90th % Total\n> > Rollbacks %\n> > ------------ ----- --------------------- -----------\n> > --------------- -----\n> > Delivery 3.98 9.095 : 16.103 15234\n> > 0 0.00\n> > New Order 45.33 7.896 : 14.794 173539\n> > 1661 0.97\n> > Order Status 3.96 8.165 : 13.989 15156\n> > 0 0.00\n> > Payment 42.76 7.295 : 12.470 163726\n> > 0 0.00\n> > Stock Level 3.97 7.198 : 12.520 15198\n> > 0 0.00\n> > ------------ ----- --------------------- -----------\n> > --------------- -----\n> >\n> > 10432.09 new-order transactions per minute (NOTPM)\n> > 16.3 minute duration\n> > 0 total unknown errors\n> > 11227 second(s) ramping up\n\n11227 (actual rampup time) + ( 16.3 * 60 ) = 12205 \n5010 (estimated rampup time) + 7200 (estimated steady state time) = 12210 \n\n> >\n> > These results show that dbt2 test actually did not run for 2 hours but it\n> > start varying with the increase of \"TERMINALS PER WAREHOUSE\" value i.e. 1st\n> > Run ( 120.1 minute duration ), 2nd Run (69.8 minute duration) and 3rd Run\n> > (16.3 minute duration).\n> \n> The ramp up times are actually as expected (explained below). 
What\n> you are witnessing is more likely that the driver is crashing because\n> the values are out of range from the scale of the database. You have\n> effectively told the driver that there are more than 10 districts per\n> warehouse, and have likely not built the database that way. I'm\n> actually surprised the driver actually ramped up completely.\n> \n\nI run the dbt2 test with the following configuration i.e.\n\n WAREHOUSES=100\n DB_CONNECTIONS=20\n REGRESS_DURATION=7200 #HOURS\n TERMINAL_PER_WAREHOUSE=32\n \n Or\n \n WAREHOUSES=100\n DB_CONNECTIONS=20\n REGRESS_DURATION=7200 #HOURS\n TERMINAL_PER_WAREHOUSE=40\n \n Or\n \n WAREHOUSES=100\n DB_CONNECTIONS=20\n REGRESS_DURATION=7200 #HOURS\n TERMINAL_PER_WAREHOUSE=56\n\nI always end up estimate the same rampup timei.e.\n\n estimated rampup time: Sleeping 5010 seconds\n estimated steady state time: Sleeping 7200 seconds\n\nIt means it expects thats rampup time will be able to complete in 5010 seconds and wait for 501 (Stage 1. Starting up client) + 5010 (estimated rampup time) + 7200 (estimated steady state time) seconds to complete the test and then kill dbt2-driver and dbt2-client and generate report etc.\n\nRampup time is increasing with the increase in TERMINAL_PER_WAREHOUSE but on the other end dbt2 estimated time (501+5010+7200 seconds) is not increasing and rampup time end up consuming stread state time.. ( There is no process crash observed in any dbt2 or postgresql related process )\nTo sync up the dbt2-run-workload script with rampup time, it now checks mix.log.\n\n> > To fix and sync with the rampup time, I have made a minor change in the\n> > dbt2-run-workload script i.e.\n> >\n> > --- dbt2-run-workload 2010-07-02 08:18:06.000000000 -0400\n> > +++ dbt2-run-workload 2010-07-02 08:20:11.000000000 -0400\n> > @@ -625,7 +625,11 @@\n> > done\n> >\n> > echo -n \"estimated rampup time: \"\n> > -do_sleep $SLEEP_RAMPUP\n> > +#do_sleep $SLEEP_RAMPUP\n> > +while ! grep START ${DRIVER_OUTPUT_DIR}/*/mix.log ; do\n> > + sleep 1\n> > +done\n> > +date\n> > echo \"estimated rampup time has elapsed\"\n> >\n> > # Clear the readprofile data after the driver ramps up.\n> >\n> > What is rempup time ? And what do you think about the patch?. Can you please\n> > guide me?. Thanks.\n> \n> The ramp up time is supposed to be the multiplication of the terminals\n> per warehouse, the number of warehouses with the sleep time between\n> the creation of each terminal. The only problem with your patch is\n> that the latest scripts (in the source code repo) breaks out the\n> client load into multiple instances of the driver program. Thus there\n> is a log file per instance of the driver so your patch work work as\n> is. Well, and there is that the ramp up calculation doesn't appear to\n> be broken. ;)\n\nIt seems that a driver handles upto 500 warehouses and there will be more drivers if warehouse # is greater than this i.e. \n W_CHUNK=500 #(default)\n\nI have some other question too.\n >How I can get maximum TPM value for postgresql ?, what dbt2 parameters I should play with ?\n\nThank you very much for your detailed reply. Thanks.\n\nBest Regards,\nAsif Naeem\n\n \t\t \t \t\t \n_________________________________________________________________\nHotmail: Trusted email with Microsoft’s powerful SPAM protection.\nhttps://signup.live.com/signup.aspx?id=60969\n\n\n\n\n\n> A clarification of terms may help to start. The \"terminals per> warehouse\" in the scripts correlates to the number terminals emulated.> An emulated terminal is tied to a warehouse's district. 
Well, and there is that the ramp up calculation doesn't appear to> be broken. ;)It seems that a driver handles upto 500 warehouses and there will be more drivers if warehouse # is greater than this i.e.     W_CHUNK=500  #(default)I have some other question too. >How I can get maximum TPM value for postgresql ?, what dbt2 parameters I should play with ?Thank you very much for your detailed reply.  Thanks.Best Regards,Asif Naeem Hotmail: Trusted email with Microsoft’s powerful SPAM protection. Sign up now.", "msg_date": "Mon, 5 Jul 2010 23:24:07 +0600", "msg_from": "MUHAMMAD ASIF <[email protected]>", "msg_from_op": true, "msg_subject": "Re: using dbt2 postgresql 8.4 - rampup time issue" }, { "msg_contents": "On Mon, Jul 5, 2010 at 10:24 AM, MUHAMMAD ASIF <[email protected]> wrote:\n>> A clarification of terms may help to start. The \"terminals per\n>> warehouse\" in the scripts correlates to the number terminals emulated.\n>> An emulated terminal is tied to a warehouse's district. In other\n>> words, the number of terminals translates to the number of districts\n>> in a warehouse across the entire database. To increase the terminals\n>> per warehouse implies you have scaled the database differently, which\n>> I'm assuming is not the case here.\n>>\n>\n> Scale the database … Can you please elaborate ? . To increase  \"terminals\n> per warehouse\"  I added only one option ( i.e. \"-t\" for dbt2-run-workload )\n> with normal dbt2 test i.e.\n>\n>         ./dbt2-pgsql-create-db\n>         ./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES\n>         ./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d\n> $REGRESS_DURATION_SEC -w $WAREHOUSES -o $OUTPUT_DIR -t\n> $TERMINAL_PER_WAREHOUSE\n>         ./dbt2-pgsql-stop-db\n>\n> Is this change enough or I am missing some thing ?\n\nThis isn't a trivial question even though at face value I do\nunderstand that you want to see what the performance of postgres is on\n64-bit linux. This kit is complex enough where the answer it \"it\ndepends\". If you want to increase the workload following\nspecification guidelines, then I think you need to understand the\nspecification referenced above better. To best use this kit does\ninvolve a fair amount of understanding of the TPC-C specification. If\nyou just want to increase the load on the database system there are\nseveral ways to do it. You can use the '-n' flag for the\ndbt2-run-workload so that all database transactions are run\nimmediately after each other. If you build the database to a larger\nscale factor (using TPC terminology) by increasing the warehouses,\nthen the scripts will appropriately scale the workload. Tweaking the\n-t flag would be a more advanced method that requires a better\nunderstand of the specification.\n\nPerhaps some more familiarity with the TPC-C specification would help here:\n\nhttp://www.tpc.org/tpcc/spec/tpcc_current.pdf\n\nClause 4.1 discusses the scaling rules for sizing the database.\nUnfortunately that clause may not directly clarify things for you.\nThe other thing to understand is that the dbt2 scripts allow you to\nbreak the specification guidelines in some ways, and not in others. I\ndon't know how to better explain it. 
The database was built one way,\nand you told the scripts to run the programs in a way that asked for\ndata that doesn't exist.\n\n>> > 1.\n>> > Settings :\n>> >     DATABASE CONNECTIONS: 50\n>> >     TERMINALS PER WAREHOUSE: 10\n>> >     SCALE FACTOR (WAREHOUSES): 200\n>> >     DURATION OF TEST (in sec): 7200\n>> > Result :\n>> >                              Response Time (s)\n>> >      Transaction      %    Average :    90th %        Total\n>> > Rollbacks      %\n>> >     ------------  -----  ---------------------  -----------\n>> > ---------------  -----\n>> >         Delivery   3.96      0.285 :     0.023        26883\n>> > 0   0.00\n>> >        New Order  45.26      0.360 :     0.010       307335\n>> > 3082   1.01\n>> >     Order Status   3.98      0.238 :     0.003        27059\n>> > 0   0.00\n>> >          Payment  42.82      0.233 :     0.003       290802\n>> > 0   0.00\n>> >      Stock Level   3.97      0.245 :     0.002        26970\n>> > 0   0.00\n>> >     ------------  -----  ---------------------  -----------\n>> > ---------------  -----\n>> >\n>> >     2508.36 new-order transactions per minute (NOTPM)\n>> >     120.1 minute duration\n>> >     0 total unknown errors\n>> >     2000 second(s) ramping up\n>> >\n>> > 2.\n>> > Settings :\n>> >     DATABASE CONNECTIONS: 50\n>> >     TERMINALS PER WAREHOUSE: 40\n>> >     SCALE FACTOR (WAREHOUSES): 200\n>> >     DURATION OF TEST (in sec): 7200\n>> > Result :\n>> >                              Response Time (s)\n>> >      Transaction      %    Average :    90th %        Total\n>> > Rollbacks      %\n>> >     ------------  -----  ---------------------  -----------\n>> > ---------------  -----\n>> >         Delivery   3.95      8.123 :     4.605        43672\n>> > 0   0.00\n>> >        New Order  45.19     12.205 :     2.563       499356\n>> > 4933   1.00\n>> >     Order Status   4.00      7.385 :     3.314        44175\n>> > 0   0.00\n>> >          Payment  42.89      7.221 :     1.920       473912\n>> > 0   0.00\n>> >      Stock Level   3.97      7.093 :     1.887        43868\n>> > 0   0.00\n>> >     ------------  -----  ---------------------  -----------\n>> > ---------------  -----\n>> >\n>> >     7009.40 new-order transactions per minute (NOTPM)\n>> >     69.8 minute duration\n>> >     0 total unknown errors\n>> >     8016 second(s) ramping up\n>> >\n>\n> 8016 (actual rampup time) + ( 69.8 * 60 ) = 12204\n> 5010 (estimated rampup time) + 7200 (estimated steady state time) = 12210\n\nI can see where you're pulling numbers from, but I'm having trouble\nunderstanding what correlation you are trying to make.\n\n>> > 3.\n>> > Settings :\n>> >     DATABASE CONNECTIONS: 50\n>> >     TERMINALS PER WAREHOUSE: 40\n>> >     SCALE FACTOR (WAREHOUSES): 200\n>> >     DURATION OF TEST (in sec): 7200\n>> > Result :\n>> >                              Response Time (s)\n>> >      Transaction      %    Average :    90th %        Total\n>> > Rollbacks      %\n>> >     ------------  -----  ---------------------  -----------\n>> > ---------------  -----\n>> >         Delivery   3.98      9.095 :    16.103        15234\n>> > 0   0.00\n>> >        New Order  45.33      7.896 :    14.794       173539\n>> > 1661   0.97\n>> >     Order Status   3.96      8.165 :    13.989        15156\n>> > 0   0.00\n>> >          Payment  42.76      7.295 :    12.470       163726\n>> > 0   0.00\n>> >      Stock Level   3.97      7.198 :    12.520        15198\n>> > 0   0.00\n>> >     ------------  -----  ---------------------  -----------\n>> > ---------------  -----\n>> 
>\n>> >     10432.09 new-order transactions per minute (NOTPM)\n>> >     16.3 minute duration\n>> >     0 total unknown errors\n>> >     11227 second(s) ramping up\n>\n> 11227 (actual rampup time) + ( 16.3 * 60 ) = 12205\n> 5010 (estimated rampup time) + 7200 (estimated steady state time) = 12210\n\nDitto.\n\n>> >\n>> > These results show that dbt2 test actually did not run for 2 hours but\n>> > it\n>> > start varying with the increase of  \"TERMINALS PER WAREHOUSE\" value i.e.\n>> > 1st\n>> > Run ( 120.1 minute duration ), 2nd Run (69.8 minute duration) and 3rd\n>> > Run\n>> > (16.3 minute duration).\n>>\n>> The ramp up times are actually as expected (explained below). What\n>> you are witnessing is more likely that the driver is crashing because\n>> the values are out of range from the scale of the database. You have\n>> effectively told the driver that there are more than 10 districts per\n>> warehouse, and have likely not built the database that way. I'm\n>> actually surprised the driver actually ramped up completely.\n>>\n>\n> I run the dbt2 test with the following configuration i.e.\n>\n>     WAREHOUSES=100\n>     DB_CONNECTIONS=20\n>     REGRESS_DURATION=7200 #HOURS\n>     TERMINAL_PER_WAREHOUSE=32\n>\n>     Or\n>\n>     WAREHOUSES=100\n>     DB_CONNECTIONS=20\n>     REGRESS_DURATION=7200 #HOURS\n>     TERMINAL_PER_WAREHOUSE=40\n>\n>     Or\n>\n>     WAREHOUSES=100\n>     DB_CONNECTIONS=20\n>     REGRESS_DURATION=7200 #HOURS\n>     TERMINAL_PER_WAREHOUSE=56\n>\n> I always end up estimate the same rampup timei.e.\n>\n>     estimated rampup time: Sleeping 5010 seconds\n>     estimated steady state time: Sleeping 7200 seconds\n>\n> It means it expects thats rampup time will be able to complete in 5010\n> seconds and wait for 501 (Stage 1. Starting up client) +  5010 (estimated\n> rampup time) + 7200 (estimated steady state time) seconds to complete the\n> test and then kill dbt2-driver and dbt2-client and generate report etc.\n\nSorry, I used \"estimate\" to mean \"in a perfect world, it will be\nexactly this time\". The reality is that it will be no sooner than the\ncalculated values.\n\n> Rampup time is increasing with the increase in TERMINAL_PER_WAREHOUSE but on\n> the other end dbt2 estimated time (501+5010+7200 seconds) is not increasing\n> and rampup time end up consuming stread state time.. ( There is no process\n> crash observed in any dbt2 or postgresql related process )\n> To sync up the dbt2-run-workload script with rampup time, it now checks\n> mix.log.\n\nAgain, I don't think this will be clear without understanding the\nscaling rules in the TPC-C specification. I can reiterate that\nTERMINAL_PER_WAREHOUSE tells the scripts how to run the test, now how\nto build the database. Perhaps that is part of the confusion?\n\n>> > To fix and sync with the rampup time, I have made a minor change in the\n>> > dbt2-run-workload script i.e.\n>> >\n>> >     --- dbt2-run-workload      2010-07-02 08:18:06.000000000 -0400\n>> >     +++ dbt2-run-workload   2010-07-02 08:20:11.000000000 -0400\n>> >     @@ -625,7 +625,11 @@\n>> >      done\n>> >\n>> >      echo -n \"estimated rampup time: \"\n>> >     -do_sleep $SLEEP_RAMPUP\n>> >     +#do_sleep $SLEEP_RAMPUP\n>> >     +while ! grep START ${DRIVER_OUTPUT_DIR}/*/mix.log ; do\n>> >     +       sleep 1\n>> >     +done\n>> >     +date\n>> >      echo \"estimated rampup time has elapsed\"\n>> >\n>> >      # Clear the readprofile data after the driver ramps up.\n>> >\n>> > What is rempup time ? And what do you think about the patch?. 
Can you\n>> > please\n>> > guide me?. Thanks.\n>>\n>> The ramp up time is supposed to be the multiplication of the terminals\n>> per warehouse, the number of warehouses with the sleep time between\n>> the creation of each terminal. The only problem with your patch is\n>> that the latest scripts (in the source code repo) breaks out the\n>> client load into multiple instances of the driver program. Thus there\n>> is a log file per instance of the driver so your patch work work as\n>> is. Well, and there is that the ramp up calculation doesn't appear to\n>> be broken. ;)\n>\n> It seems that a driver handles upto 500 warehouses and there will be more\n> drivers if warehouse # is greater than this i.e.\n>     W_CHUNK=500  #(default)\n\nIt's kludgey, I can't offer any better excuse for the lack of clarity\nhere, but I think you have the general idea. I don't think what I\nintended to do here was done very well.\n\n> I have some other question too.\n>  >How I can get maximum TPM value for postgresql ?, what dbt2 parameters I\n> should play with ?\n\nUnfortunately I must give you a non-answer here. The kit is designed\nto be used as a tool to stress the system to characterize the system\nfor development, so I can't answer how to get the maximum TPM value.\nGetting the maximum TPM value isn't a indicator for how well the\nsystem is performing because there are many way to inflate that value\nwithout stressing the system in a meaningful way. The TPM values are\nonly helpful as a gage for measuring changes to the system.\n\nRegards,\nMark\n", "msg_date": "Tue, 6 Jul 2010 17:35:43 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using dbt2 postgresql 8.4 - rampup time issue" } ]
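Taken together, Mark's suggestions amount to scaling the database itself rather than tweaking -t. A hedged sketch of what that might look like with the same scripts quoted above (the larger warehouse count is purely illustrative; only the -n flag, which drops the keying/thinking time so transactions run back-to-back, is new relative to the commands already shown in the thread):

    WAREHOUSES=500    # illustrative value; a bigger -w is what actually scales the TPC-C workload
    ./dbt2-pgsql-create-db
    ./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES
    # -n: run transactions immediately after each other (no keying/thinking time)
    ./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d $REGRESS_DURATION_SEC \
        -w $WAREHOUSES -o $OUTPUT_DIR -n
    ./dbt2-pgsql-stop-db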
[ { "msg_contents": "Hi,\n\nMy question is regarding ORDER BY / LIMIT query behavior when using partitioning.\n\nI have a large table (about 100 columns, several million rows) partitioned by a column called day (which is the date stored as yyyymmdd - say 20100502 for May 2nd 2010 etc.). Say the main table is called FACT_TABLE and each child table is called FACT_TABLE_yyyymmdd (e.g. FACT_TABLE_20100502, FACT_TABLE_20100503 etc.) and has an appropriate CHECK constraint created on it to CHECK (day = yyyymmdd).\n\nPostgres Version: PostgreSQL 8.4.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\n\nThe query pattern I am looking at is (I have tried to simplify the column names for readability):\n\nSELECT F1 from FACT_TABLE \nwhere day >= 20100502 and day <= 20100507 # selecting for a week\nORDER BY F2 desc\nLIMIT 100\n\n\nThis is what is happening:\n\nWhen I query from the specific day's (child) table, I get what I expect - a descending Index scan and good performance.\n\n# explain select F1 from FACT_TABLE_20100502 where day = 20100502 order by F2 desc limit 100;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------\n--\n Limit (cost=0.00..4.81 rows=100 width=41)\n -> Index Scan Backward using F2_20100502 on FACT_TABLE_20100502 (cost=0.00..90355.89 rows=1876985 width=41\n)\n Filter: (day = 20100502)\n\n\n\nBUT:\n\nWhen I do the same query against the parent table it is much slower - two things seem to happen - one is that the descending scan of the index is not done and secondly there seems to be a separate sort/limit at the end - i.e. all data from all partitions is retrieved and then sorted and limited - This seems to be much less efficient than doing a descending scan on each partition and limiting the results and then combining and reapplying the limit at the end.\n\nexplain select F1 from FACT_TABLE where day = 20100502 order by F2 desc limit 100;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------\n---\n Limit (cost=20000084948.01..20000084948.01 rows=100 width=41)\n -> Sort (cost=20000084948.01..20000084994.93 rows=1876986 width=41)\n Sort Key: public.FACT_TABLE.F2\n -> Result (cost=10000000000.00..20000084230.64 rows=1876986 width=41)\n -> Append (cost=10000000000.00..20000084230.64 rows=1876986 width=41)\n -> Seq Scan on FACT_TABLE (cost=10000000000.00..10000000010.02 rows=1 width=186)\n Filter: (day = 20100502)\n -> Seq Scan on FACT_TABLE_20100502 FACT_TABLE (cost=10000000000.00..10000084220.62 rows=1876985 width=4\n1)\n Filter: (day = 20100502)\n(9 rows)\n\n\nCould anyone please explain why this is happening and what I can do to get the query to perform well even when querying from the parent table?\n\nThanks,\n\nRanga\n\n\n\n\n \t\t \t \t\t \n_________________________________________________________________\nHotmail is redefining busy with tools for the New Busy. Get more from your inbox.\nhttp://www.windowslive.com/campaign/thenewbusy?ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_2\n\n\n\n\n\nHi,My question is regarding ORDER BY / LIMIT query behavior when using partitioning.I have a large table (about 100 columns, several million rows) partitioned by a column called day (which is the date stored as yyyymmdd - say 20100502 for May 2nd 2010 etc.). 
See how.", "msg_date": "Fri, 2 Jul 2010 15:28:45 +0000", "msg_from": "Ranga Gopalan <[email protected]>", "msg_from_op": true, "msg_subject": "Question about partitioned query behavior" }, { "msg_contents": "In postgresql.conf, what are your settings for constraint_exclusion?\n\nThere are 3 settings - on, off, or partition.\n\nMine are set as follows:\n\n \n\nconstraint_exclusion = on # on, off, or partition\n\n \n\nUnder 8.4.4 I had it set to partition, but the behavior was not what I\nexpected so I set it back to \"on\" and only the applicable partitions get\nprocessed.\n\n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Ranga\nGopalan\nSent: Friday, July 02, 2010 9:29 AM\nTo: [email protected]\nSubject: [PERFORM] Question about partitioned query behavior\n\n \n\nHi,\n\nMy question is regarding ORDER BY / LIMIT query behavior when using\npartitioning.\n\nI have a large table (about 100 columns, several million rows)\npartitioned by a column called day (which is the date stored as yyyymmdd\n- say 20100502 for May 2nd 2010 etc.). Say the main table is called\nFACT_TABLE and each child table is called FACT_TABLE_yyyymmdd (e.g.\nFACT_TABLE_20100502, FACT_TABLE_20100503 etc.) and has an appropriate\nCHECK constraint created on it to CHECK (day = yyyymmdd).\n\nPostgres Version: PostgreSQL 8.4.2 on x86_64-unknown-linux-gnu,\ncompiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\n\nThe query pattern I am looking at is (I have tried to simplify the\ncolumn names for readability):\n\nSELECT F1 from FACT_TABLE \nwhere day >= 20100502 and day <= 20100507 # selecting for a week\nORDER BY F2 desc\nLIMIT 100\n\n\nThis is what is happening:\n\nWhen I query from the specific day's (child) table, I get what I expect\n- a descending Index scan and good performance.\n\n# explain select F1 from FACT_TABLE_20100502 where day = 20100502 order\nby F2 desc limit 100;\n \nQUERY PLAN\n\n \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--\n Limit (cost=0.00..4.81 rows=100 width=41)\n -> Index Scan Backward using F2_20100502 on FACT_TABLE_20100502\n(cost=0.00..90355.89 rows=1876985 width=41\n)\n Filter: (day = 20100502)\n\n\n\nBUT:\n\nWhen I do the same query against the parent table it is much slower -\ntwo things seem to happen - one is that the descending scan of the index\nis not done and secondly there seems to be a separate sort/limit at the\nend - i.e. 
all data from all partitions is retrieved and then sorted and\nlimited - This seems to be much less efficient than doing a descending\nscan on each partition and limiting the results and then combining and\nreapplying the limit at the end.\n\nexplain select F1 from FACT_TABLE where day = 20100502 order by F2 desc\nlimit 100;\n \nQUERY PLAN\n\n \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---\n Limit (cost=20000084948.01..20000084948.01 rows=100 width=41)\n -> Sort (cost=20000084948.01..20000084994.93 rows=1876986 width=41)\n Sort Key: public.FACT_TABLE.F2\n -> Result (cost=10000000000.00..20000084230.64 rows=1876986\nwidth=41)\n -> Append (cost=10000000000.00..20000084230.64\nrows=1876986 width=41)\n -> Seq Scan on FACT_TABLE\n(cost=10000000000.00..10000000010.02 rows=1 width=186)\n Filter: (day = 20100502)\n -> Seq Scan on FACT_TABLE_20100502 FACT_TABLE\n(cost=10000000000.00..10000084220.62 rows=1876985 width=4\n1)\n Filter: (day = 20100502)\n(9 rows)\n\n\nCould anyone please explain why this is happening and what I can do to\nget the query to perform well even when querying from the parent table?\n\nThanks,\n\nRanga\n\n\n\n\n\n\n________________________________\n\nHotmail is redefining busy with tools for the New Busy. Get more from\nyour inbox. See how.\n<http://www.windowslive.com/campaign/thenewbusy?ocid=PID28326::T:WLMTAGL\n:ON:WL:en-US:WM_HMP:042010_2> \n\n\n\n\n\n\n\n\n\n\n\n\nIn postgresql.conf, what are your settings for\nconstraint_exclusion?\nThere are 3 settings – on, off, or partition.\nMine are set as follows:\n \nconstraint_exclusion = on            # on, off, or partition\n \nUnder 8.4.4 I had it set to partition, but the behavior was not\nwhat I expected so I set it back to “on” and only the applicable partitions get\nprocessed.\n \n\n\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Ranga\nGopalan\nSent: Friday, July 02, 2010 9:29 AM\nTo: [email protected]\nSubject: [PERFORM] Question about partitioned query behavior\n\n\n \nHi,\n\nMy question is regarding ORDER BY / LIMIT query behavior when using\npartitioning.\n\nI have a large table (about 100 columns, several million rows) partitioned by a\ncolumn called day (which is the date stored as yyyymmdd - say 20100502 for May\n2nd 2010 etc.). Say the main table  is called FACT_TABLE and each child\ntable is called FACT_TABLE_yyyymmdd (e.g. FACT_TABLE_20100502,\nFACT_TABLE_20100503 etc.) 
", "msg_date": "Fri, 2 Jul 2010 10:40:03 -0600", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about partitioned query behavior" } ]
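For the ORDER BY/LIMIT problem above, the 8.4 planner will not push the limit and the backward index scan down into the child tables on its own, so the workaround the original poster hints at has to be spelled out by hand: limit each partition separately, then combine and re-apply the ordering and limit. A rough, untested sketch using the table and column names from the thread (the application would generate one branch per day in the requested range):

    SELECT F1
    FROM (
        (SELECT F1, F2 FROM FACT_TABLE_20100502 ORDER BY F2 DESC LIMIT 100)
        UNION ALL
        (SELECT F1, F2 FROM FACT_TABLE_20100503 ORDER BY F2 DESC LIMIT 100)
        -- ... one branch per child table in the requested date range ...
    ) AS per_partition
    ORDER BY F2 DESC
    LIMIT 100;

Each branch can use the per-partition index on F2 backwards, and taking the top 100 from every child before the final sort cannot drop any row that belongs in the overall top 100.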
[ { "msg_contents": "Hello.\n\nI have a tree-like table with a three-field PK (name, date, id) and one \nparent field.\nIt has 5k to 6k records as of now, but it will hold about 1 million \nrecords.\n\nI am trying the following WITH RECURSIVE query:\n\nWITH RECURSIVE t AS (\n SELECT par.id AS tid, par.name, par.date, par.id, \npar.text, par.h_title, par.h_name, par.parent\n FROM _books.par\n UNION\n SELECT t.tid AS pid, p.name, p.date, p.id, p.text, \np.h_title, p.h_name, p.parent\n FROM t, _books.par p\n WHERE p.name = t.name AND p.date = t.date AND t.id = \np.parent\n )\n SELECT t.tid, t.name, t.date, t.id, t.text, t.h_title, t.h_name, t.parent\n FROM t WHERE name = 'cfx' AND date = '2009-08-19' AND tid = '28340';\n\n... which takes 2547.503 ms\n\nHowever, if I try the same query but adding the same WHERE clause to the\nnon-recursive term, I get much better results.\n\n\nWITH RECURSIVE t AS (\n SELECT par.id AS tid, par.name, par.date, par.id, \npar.text, par.h_title, par.h_name, par.parent\n FROM _books.par WHERE name = 'cfx' AND date = \n'2009-08-19' AND par.id = '28340'\n UNION\n SELECT t.tid AS pid, p.name, p.date, p.id, p.text, \np.h_title, p.h_name, p.parent\n FROM t, _books.par p\n WHERE p.name = t.name AND p.date = t.date AND t.id = \np.parent\n )\n SELECT t.tid, t.name, t.date, t.id, t.text, t.h_title, t.h_name, t.parent\n FROM t WHERE name = 'cfx' AND date = '2009-08-19' AND tid = '28340';\n\n... which takes 0.221 ms\n\nI am being forced to use the slow query because I want to define it as a\nview, leaving the WHERE clause to the application.\n\nI fail to see where the two queries might be different, or, what cases the\nslow one considers that the fast one doesn't, as to get a clue on how to\nworkaround this.\n\nI have taken the EXPLAIN ANALYZE output for both queries. 
It looks like\nthe slow one is processing all records (read: not adding the WHERE clause\nto the non-recursive term).\n\n\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on t (cost=96653.20..96820.57 rows=1 width=144) (actual \ntime=32.931..2541.792 rows=1 loops=1)\n Filter: (((name)::text = 'cfx'::text) AND (date = '2009-08-19'::date) \nAND (tid = 28340))\n CTE t\n -> Recursive Union (cost=0.00..96653.20 rows=6086 width=212) \n(actual time=0.017..2442.655 rows=33191 loops=1)\n -> Seq Scan on par (cost=0.00..237.96 rows=5996 width=208) \n(actual time=0.011..5.591 rows=5996 loops=1)\n -> Merge Join (cost=8909.74..9629.35 rows=9 width=212) \n(actual time=225.979..254.727 rows=3022 loops=9)\n Merge Cond: (((t.name)::text = (p.name)::text) AND \n(t.date = p.date) AND (t.id = p.parent))\n -> Sort (cost=7700.54..7850.44 rows=59960 width=44) \n(actual time=58.163..59.596 rows=3685 loops=9)\n Sort name: t.name, t.date, t.id\n Sort Method: quicksort Memory: 17kB\n -> WorkTable Scan on t (cost=0.00..1199.20 \nrows=59960 width=44) (actual time=0.027..3.486 rows=3688 loops=9)\n -> Materialize (cost=1209.20..1284.15 rows=5996 \nwidth=208) (actual time=163.062..177.415 rows=5810 loops=9)\n -> Sort (cost=1209.20..1224.19 rows=5996 \nwidth=208) (actual time=163.054..172.543 rows=5810 loops=9)\n Sort name: p.name, p.date, p.parent\n Sort Method: external merge Disk: 1304kB\n -> Seq Scan on par p (cost=0.00..237.96 \nrows=5996 width=208) (actual time=0.015..3.330 rows=5996 loops=9)\n Total runtime: 2547.503 ms\n(17 rows)\n\n\n\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on t (cost=927.80..928.10 rows=1 width=144) (actual \ntime=0.036..0.132 rows=1 loops=1)\n Filter: (((name)::text = 'cfx'::text) AND (date = '2009-08-19'::date) \nAND (tid = 28340))\n CTE t\n -> Recursive Union (cost=0.00..927.80 rows=11 width=212) (actual \ntime=0.030..0.124 rows=1 loops=1)\n -> Index Scan using par_id on par (cost=0.00..8.27 rows=1 \nwidth=208) (actual time=0.024..0.026 rows=1 loops=1)\n Index Cond: (id = 28340)\n Filter: (((name)::text = 'cfx'::text) AND (date = \n'2009-08-19'::date))\n -> Nested Loop (cost=0.00..91.93 rows=1 width=212) (actual \ntime=0.091..0.091 rows=0 loops=1)\n Join Filter: (((t.name)::text = (p.name)::text) AND \n(t.date = p.date))\n -> WorkTable Scan on t (cost=0.00..0.20 rows=10 \nwidth=44) (actual time=0.001..0.001 rows=1 loops=1)\n -> Index Scan using par_parent on par p \n(cost=0.00..9.07 rows=6 width=208) (actual time=0.085..0.085 rows=0 \nloops=1)\n Index Cond: (p.parent = t.id)\n Total runtime: 0.221 ms\n(13 rows)\n\n\n\nbooks=# \\d _books.par\n Table \"_books.par\"\n Column | Type | Modifiers\n--------------+-------------------+-----------\n name | character varying | not null\n date | date | not null\n id | integer | not null\n text | character varying |\n h_title | character varying |\n h_name | character varying |\n parent | integer |\nIndexes:\n \"par_pkey\" PRIMARY KEY, btree (name, date, id)\n \"par_name\" btree (name)\n \"par_name_fpub_parent\" btree (name, date, parent)\n \"par_id\" btree (id)\n \"par_parent\" btree (parent)\n\n\n\n$ psql --version\npsql (PostgreSQL) 8.4.4\ncontains support for command-line editing\n\n\n\n\n-- \nOctavio.\n", "msg_date": "Sun, 04 Jul 2010 23:07:20 -0700", "msg_from": "\"Octavio Alvarez\" <[email 
protected]>", "msg_from_op": true, "msg_subject": "Two \"equivalent\" WITH RECURSIVE queries, one of them slow." }, { "msg_contents": "On Mon, Jul 5, 2010 at 2:07 AM, Octavio Alvarez\n<[email protected]> wrote:\n> Hello.\n>\n> I have a tree-like table with a three-field PK (name, date, id) and one\n> parent field.\n> It has 5k to 6k records as of now, but it will hold about 1 million records.\n>\n> I am trying the following WITH RECURSIVE query:\n>\n> WITH RECURSIVE t AS (\n>                 SELECT par.id AS tid, par.name, par.date, par.id, par.text,\n> par.h_title, par.h_name, par.parent\n>                   FROM _books.par\n>        UNION\n>                 SELECT t.tid AS pid, p.name, p.date, p.id, p.text,\n> p.h_title, p.h_name, p.parent\n>                   FROM t, _books.par p\n>                  WHERE p.name = t.name AND p.date = t.date AND t.id =\n> p.parent\n>        )\n>  SELECT t.tid, t.name, t.date, t.id, t.text, t.h_title, t.h_name, t.parent\n>   FROM t WHERE name = 'cfx' AND date = '2009-08-19' AND tid = '28340';\n>\n> ... which takes 2547.503 ms\n>\n> However, if I try the same query but adding the same WHERE clause to the\n> non-recursive term, I get much better results.\n>\n>\n> WITH RECURSIVE t AS (\n>                 SELECT par.id AS tid, par.name, par.date, par.id, par.text,\n> par.h_title, par.h_name, par.parent\n>                   FROM _books.par WHERE name = 'cfx' AND date = '2009-08-19'\n> AND par.id = '28340'\n>        UNION\n>                 SELECT t.tid AS pid, p.name, p.date, p.id, p.text,\n> p.h_title, p.h_name, p.parent\n>                   FROM t, _books.par p\n>                  WHERE p.name = t.name AND p.date = t.date AND t.id =\n> p.parent\n>        )\n>  SELECT t.tid, t.name, t.date, t.id, t.text, t.h_title, t.h_name, t.parent\n>   FROM t WHERE name = 'cfx' AND date = '2009-08-19' AND tid = '28340';\n>\n> ... which takes 0.221 ms\n>\n> I am being forced to use the slow query because I want to define it as a\n> view, leaving the WHERE clause to the application.\n\nI think this is just a limitation of the optimizer. Recursive queries\nare a relatively new feature and the optimizer doesn't know a whole\nlot about how to deal with them. That may get improved at some point,\nbut the optimization you're hoping for here is pretty tricky. In\norder to push the WHERE clauses down into the non-recursive term, the\noptimizer would need to prove that this doesn't change the final\nresults. I think that's possible here because it so happens that your\nrecursive term only generates results that have the same name, date,\nand tid as some existing result, but with a slightly different\nrecursive query that wouldn't be true, so you'd need to make the code\npretty smart to work this one out.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 6 Jul 2010 15:21:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two \"equivalent\" WITH RECURSIVE queries, one of them\n\tslow." 
}, { "msg_contents": "On Mon, Jul 5, 2010 at 2:07 AM, Octavio Alvarez\n<[email protected]> wrote:\n> Hello.\n>\n> I have a tree-like table with a three-field PK (name, date, id) and one\n> parent field.\n> It has 5k to 6k records as of now, but it will hold about 1 million records.\n>\n> I am trying the following WITH RECURSIVE query:\n>\n> WITH RECURSIVE t AS (\n>                 SELECT par.id AS tid, par.name, par.date, par.id, par.text,\n> par.h_title, par.h_name, par.parent\n>                   FROM _books.par\n>        UNION\n>                 SELECT t.tid AS pid, p.name, p.date, p.id, p.text,\n> p.h_title, p.h_name, p.parent\n>                   FROM t, _books.par p\n>                  WHERE p.name = t.name AND p.date = t.date AND t.id =\n> p.parent\n>        )\n>  SELECT t.tid, t.name, t.date, t.id, t.text, t.h_title, t.h_name, t.parent\n>   FROM t WHERE name = 'cfx' AND date = '2009-08-19' AND tid = '28340';\n>\n> ... which takes 2547.503 ms\n>\n> However, if I try the same query but adding the same WHERE clause to the\n> non-recursive term, I get much better results.\n>\n>\n> WITH RECURSIVE t AS (\n>                 SELECT par.id AS tid, par.name, par.date, par.id, par.text,\n> par.h_title, par.h_name, par.parent\n>                   FROM _books.par WHERE name = 'cfx' AND date = '2009-08-19'\n> AND par.id = '28340'\n>        UNION\n>                 SELECT t.tid AS pid, p.name, p.date, p.id, p.text,\n> p.h_title, p.h_name, p.parent\n>                   FROM t, _books.par p\n>                  WHERE p.name = t.name AND p.date = t.date AND t.id =\n> p.parent\n>        )\n>  SELECT t.tid, t.name, t.date, t.id, t.text, t.h_title, t.h_name, t.parent\n>   FROM t WHERE name = 'cfx' AND date = '2009-08-19' AND tid = '28340';\n>\n> ... which takes 0.221 ms\n\nIf you want the fast plan, you might want to consider reworking your\nquery into a set returning function. It's pretty easy to do:\n\n\ncreate or replace function f(arg int) returns setof something as\n$$\n with recursive foo as\n (\n select * from bar where id = $1\n union all\n [...]\n )\n select * from foo\n$$ language sql;\n\nObviously, a pure view approach would be nicer but it just isn't going\nto hapen at present. CTE are currently problematic generally when you\nneed quals in the 'with' term, especially in the case of recursive\nCTE.\n\nmerlin\n", "msg_date": "Wed, 7 Jul 2010 09:14:25 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two \"equivalent\" WITH RECURSIVE queries, one of them\n\tslow." } ]
[ { "msg_contents": "Hello everyone,\n\nWe've recently finished developing a bigger webapplication, and we are\nabout to put it online.\n\nI ran some load tests yesterday, and configured 'slow query' logging\nbeforehand, so I could see if there might be a performance bottleneck\nin the PG. While I discovered no real problems, the log file analysis\nusing pgFouine revealed two queries, which are executed often, and\ntake quite a bit some time.\n\nI'm just curious if there is any way to improve the performance of\nthose queries. I'm seeing SeqScans in the EXPLAIN ANALYZE, but nothing\nI have done yet has removed those.\n\nThe statements and query plans are:\n\n---- Query 1 -----\n\nexplain analyze SELECT\nn.name_short,n.flag,n.nation_id,n.urlidentifier,count(p.person_id) as\nathletes from nations n left join persons p on n.nation_id =\np.nation_id left join efclicences e on p.person_id = e.person_id where\ncontinent = 'eu' and p.deleted = false and p.inactive = false and\ne.fencer = true group by\nn.name_short,n.flag,n.nation_id,n.urlidentifier order by n.name_short;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=9997.21..9997.32 rows=44 width=33) (actual\ntime=872.000..872.000 rows=44 loops=1)\n Sort Key: n.name_short\n Sort Method: quicksort Memory: 28kB\n -> HashAggregate (cost=9995.45..9996.01 rows=44 width=33) (actual\ntime=872.000..872.000 rows=44 loops=1)\n -> Hash Join (cost=5669.49..9611.83 rows=30690 width=33)\n(actual time=332.000..720.000 rows=142240 loops=1)\n Hash Cond: (e.person_id = p.person_id)\n -> Seq Scan on efclicences e (cost=0.00..2917.29\nrows=143629 width=8) (actual time=0.000..80.000 rows=143629 loops=1)\n Filter: fencer\n -> Hash (cost=5285.87..5285.87 rows=30690 width=33)\n(actual time=332.000..332.000 rows=142240 loops=1)\n -> Hash Join (cost=7.10..5285.87 rows=30690\nwidth=33) (actual time=0.000..256.000 rows=142240 loops=1)\n Hash Cond: (p.nation_id = n.nation_id)\n -> Seq Scan on persons p\n(cost=0.00..4438.29 rows=142288 width=16) (actual time=0.000..112.000\nrows=142418 loops=1)\n Filter: ((NOT deleted) AND (NOT inactive))\n -> Hash (cost=6.55..6.55 rows=44\nwidth=25) (actual time=0.000..0.000 rows=44 loops=1)\n -> Seq Scan on nations n\n(cost=0.00..6.55 rows=44 width=25) (actual time=0.000..0.000 rows=44\nloops=1)\n Filter: ((continent)::text = 'eu'::text)\n Total runtime: 880.000 ms\n(17 rows)\n\n--- Query 2 ---\nexplain analyze SELECT persons.person_id AS persons_person_id FROM\npersons LEFT OUTER JOIN indexing_persons ON persons.person_id =\nindexing_persons.person_id WHERE indexing_persons.person_id IS NULL\nOR persons.modified > indexing_persons.indexed ORDER BY\npersons.modified DESC LIMIT 1000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=17755.23..17757.73 rows=1000 width=16) (actual\ntime=372.000..372.000 rows=0 loops=1)\n -> Sort (cost=17755.23..17994.61 rows=95753 width=16) (actual\ntime=372.000..372.000 rows=0 loops=1)\n Sort Key: persons.modified\n Sort Method: quicksort Memory: 25kB\n -> Hash Left Join (cost=4313.44..12505.20 rows=95753\nwidth=16) (actual time=372.000..372.000 rows=0 loops=1)\n Hash Cond: (persons.person_id = indexing_persons.person_id)\n Filter: ((indexing_persons.person_id IS NULL) OR\n(persons.modified > indexing_persons.indexed))\n -> Seq Scan on persons 
(cost=0.00..4438.29\nrows=143629 width=16) (actual time=0.000..56.000 rows=143629 loops=1)\n -> Hash (cost=2534.86..2534.86 rows=142286 width=16)\n(actual time=140.000..140.000 rows=143629 loops=1)\n -> Seq Scan on indexing_persons\n(cost=0.00..2534.86 rows=142286 width=16) (actual time=0.000..72.000\nrows=143629 loops=1)\n Total runtime: 372.000 ms\n(11 rows)\n\n---- Table definitions ---\n\n\\d persons\n Table \"public.persons\"\n Column | Type |\n Modifiers\n---------------------+--------------------------+-------------------------------------------------------------\n person_id | bigint | not null default\nnextval('persons_person_id_seq'::regclass)\n givenname | character varying(100) | not null\n surname | character varying(100) | not null\n name_display_short | character varying(20) | not null\n name_display_long | character varying(50) | not null\n title | character varying(50) |\n postnominals | character varying(10) |\n gender | character varying(1) |\n dateofbirth | date |\n nation_id | bigint |\n club_id | bigint |\n handed | character varying(1) |\n comment | text |\n national_identifier | character varying(50) |\n fie_identifier | character varying(50) |\n honorary_member_efc | boolean | not null\n honorary_member_fie | boolean | not null\n created | timestamp with time zone | not null\n modified | timestamp with time zone | not null\n dead | boolean | not null\n inactive | boolean | not null\n deleted | boolean | not null\n urlidentifier | character varying(50) | not null\n profilepicture | bigint |\n ophardt_identifier | bigint |\n idtoken | character varying(10) |\n consolidated | bigint |\nIndexes:\n \"persons_pkey\" PRIMARY KEY, btree (person_id)\n \"persons_urlidentifier_key\" UNIQUE, btree (urlidentifier)\n \"idx_persons_deleted\" btree (deleted)\n \"idx_persons_inactive\" btree (inactive)\n \"idx_persons_inactive_deleted\" btree (inactive, deleted)\nForeign-key constraints:\n \"persons_club_id_fkey\" FOREIGN KEY (club_id) REFERENCES\nclubs(club_id) ON UPDATE CASCADE ON DELETE SET NULL\n \"persons_consolidated_fkey\" FOREIGN KEY (consolidated) REFERENCES\npersons(person_id) ON UPDATE CASCADE ON DELETE CASCADE\n \"persons_nation_id_fkey\" FOREIGN KEY (nation_id) REFERENCES\nnations(nation_id)\nTriggers:\n persons_modified BEFORE UPDATE ON persons FOR EACH ROW EXECUTE\nPROCEDURE setmodified()\n\n\n \\d nations\n Table \"public.nations\"\n Column | Type |\n Modifiers\n-------------------+--------------------------+-------------------------------------------------------------\n nation_id | bigint | not null default\nnextval('nations_nation_id_seq'::regclass)\n code | character varying(3) | not null\n name_short | character varying(100) | not null\n name_official | character varying(200) | not null\n name_official_en | character varying(200) | not null\n website | character varying(255) |\n flag | character varying(255) |\n comment | text |\n geocode_longitude | double precision |\n geocode_latitude | double precision |\n geocode_zoom | double precision |\n created | timestamp with time zone | not null\n modified | timestamp with time zone | not null\n inactive | boolean | not null default false\n deleted | boolean | not null default false\n efc | boolean | not null\n subname | character varying(255) |\n street | character varying(255) |\n postcode | character varying(255) |\n city | character varying(255) |\n country | character varying(255) |\n fax | character varying(255) |\n mobile | character varying(255) |\n phone | character varying(255) |\n email | character 
varying(255) |\n urlidentifier | character varying(50) | not null\n continent | character varying(2) | not null default\n'eu'::character varying\n logo_p2picture_id | bigint |\n idtoken | character varying(10) |\nIndexes:\n \"nations_pkey\" PRIMARY KEY, btree (nation_id)\nForeign-key constraints:\n \"nations_logo_p2picture_id_fkey\" FOREIGN KEY (logo_p2picture_id)\nREFERENCES p2picture(picture_id) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n \\d efclicences\n Table \"public.efclicences\"\n Column | Type |\n Modifiers\n---------------+--------------------------+---------------------------------------------------------------------\n efclicence_id | bigint | not null default\nnextval('efclicences_efclicence_id_seq'::regclass)\n person_id | bigint | not null\n valid_from | date | not null\n valid_to | date |\n created | timestamp with time zone | not null\n modified | timestamp with time zone | not null\n inactive | boolean | not null\n fencer | boolean | not null\n official | boolean | not null\n referee | boolean | not null\n member | boolean | not null\nIndexes:\n \"efclicences_pkey\" PRIMARY KEY, btree (efclicence_id)\nForeign-key constraints:\n \"efclicences_person_id_fkey\" FOREIGN KEY (person_id) REFERENCES\npersons(person_id) ON UPDATE CASCADE ON DELETE CASCADE\n\n \\d indexing_persons\n Table \"public.indexing_persons\"\n Column | Type | Modifiers\n-----------+--------------------------+----------------------------------------------------------------------\n person_id | bigint | not null default\nnextval('indexing_persons_person_id_seq'::regclass)\n indexed | timestamp with time zone |\nIndexes:\n \"indexing_persons_pkey\" PRIMARY KEY, btree (person_id)\nForeign-key constraints:\n \"indexing_persons_person_id_fkey\" FOREIGN KEY (person_id)\nREFERENCES persons(person_id) ON DELETE CASCADE\n\n\n--- Additional info ---\n\nThese are mostly stock table definitions, and not much has done yet to\nimprove performance there.\n\nAutovacuuming is turned on for the PG, I have increased the available\nmemory a bit (as the db server as 4 GB of RAM), and added logging\noptions to the stock Debian configuration, but nothing more.\n\nThe system in a XEN vServer running on 4 Cores, with those said 4 GB of RAM.\n\nIt is nothing deal breaking at the moment, the performance of those\nqueries, as we don't have a problem at the moment, but I'm curious to\nlearn more about query optimization, to maybe be able to analyze and\ncorrect problems in the future myself, so any help and remarks are\ngreatly appreciated.\n\nThanks in advance!\n\nJens\n", "msg_date": "Mon, 5 Jul 2010 13:36:46 +0200", "msg_from": "Jens Hoffrichter <[email protected]>", "msg_from_op": true, "msg_subject": "SeqScans on boolen values / How to speed this up?" }, { "msg_contents": "Jens,\n\n* Jens Hoffrichter ([email protected]) wrote:\n> I'm just curious if there is any way to improve the performance of\n> those queries. I'm seeing SeqScans in the EXPLAIN ANALYZE, but nothing\n> I have done yet has removed those.\n\nSeqScans aren't necessairly bad. Also, providing your postgresql.conf\nparameters would be useful in doing any kind of analysis work like this.\n\nFor starters, why are you using left joins for these queries? When you\nuse a left-join and then have a filter on the right-hand table that\nrequires it to be non-null, you're causing it to be an inner join\nanyway. 
Fixing that might change/improve the plans you're getting.\n\n> The statements and query plans are:\n> \n> ---- Query 1 -----\n> \n> explain analyze SELECT\n> n.name_short,n.flag,n.nation_id,n.urlidentifier,count(p.person_id) as\n> athletes from nations n left join persons p on n.nation_id =\n> p.nation_id left join efclicences e on p.person_id = e.person_id where\n> continent = 'eu' and p.deleted = false and p.inactive = false and\n> e.fencer = true group by\n> n.name_short,n.flag,n.nation_id,n.urlidentifier order by n.name_short;\n\nAlright, for this one, you're processing 144k rows in persons\nup into the aggregate, how big is the table? If it's anything less than\n1M, seqscanning that is almost certainly the fastest way. You could\n*test* that theory by disabling seqscans and running the query again for\nthe timing. If it's faster, then you probably need to adjust some PG\nparameters (eg: effective_cache_size, maybe random_page_cost) for your\nsystem.\n\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=9997.21..9997.32 rows=44 width=33) (actual\n> time=872.000..872.000 rows=44 loops=1)\n> Sort Key: n.name_short\n> Sort Method: quicksort Memory: 28kB\n> -> HashAggregate (cost=9995.45..9996.01 rows=44 width=33) (actual\n> time=872.000..872.000 rows=44 loops=1)\n> -> Hash Join (cost=5669.49..9611.83 rows=30690 width=33)\n> (actual time=332.000..720.000 rows=142240 loops=1)\n> Hash Cond: (e.person_id = p.person_id)\n> -> Seq Scan on efclicences e (cost=0.00..2917.29\n> rows=143629 width=8) (actual time=0.000..80.000 rows=143629 loops=1)\n> Filter: fencer\n> -> Hash (cost=5285.87..5285.87 rows=30690 width=33)\n> (actual time=332.000..332.000 rows=142240 loops=1)\n> -> Hash Join (cost=7.10..5285.87 rows=30690\n> width=33) (actual time=0.000..256.000 rows=142240 loops=1)\n> Hash Cond: (p.nation_id = n.nation_id)\n> -> Seq Scan on persons p\n> (cost=0.00..4438.29 rows=142288 width=16) (actual time=0.000..112.000\n> rows=142418 loops=1)\n> Filter: ((NOT deleted) AND (NOT inactive))\n> -> Hash (cost=6.55..6.55 rows=44\n> width=25) (actual time=0.000..0.000 rows=44 loops=1)\n> -> Seq Scan on nations n\n> (cost=0.00..6.55 rows=44 width=25) (actual time=0.000..0.000 rows=44\n> loops=1)\n> Filter: ((continent)::text = 'eu'::text)\n> Total runtime: 880.000 ms\n> (17 rows)\n> \n> --- Query 2 ---\n> explain analyze SELECT persons.person_id AS persons_person_id FROM\n> persons LEFT OUTER JOIN indexing_persons ON persons.person_id =\n> indexing_persons.person_id WHERE indexing_persons.person_id IS NULL\n> OR persons.modified > indexing_persons.indexed ORDER BY\n> persons.modified DESC LIMIT 1000;\n\nFor this one, you might try indexing persons.modified and\nindexing_persons.indexed and see if that changes things.\n\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=17755.23..17757.73 rows=1000 width=16) (actual\n> time=372.000..372.000 rows=0 loops=1)\n> -> Sort (cost=17755.23..17994.61 rows=95753 width=16) (actual\n> time=372.000..372.000 rows=0 loops=1)\n> Sort Key: persons.modified\n> Sort Method: quicksort Memory: 25kB\n> -> Hash Left Join (cost=4313.44..12505.20 rows=95753\n> width=16) (actual time=372.000..372.000 rows=0 loops=1)\n> Hash Cond: (persons.person_id = indexing_persons.person_id)\n> Filter: ((indexing_persons.person_id IS NULL) OR\n> (persons.modified 
> indexing_persons.indexed))\n> -> Seq Scan on persons (cost=0.00..4438.29\n> rows=143629 width=16) (actual time=0.000..56.000 rows=143629 loops=1)\n> -> Hash (cost=2534.86..2534.86 rows=142286 width=16)\n> (actual time=140.000..140.000 rows=143629 loops=1)\n> -> Seq Scan on indexing_persons\n> (cost=0.00..2534.86 rows=142286 width=16) (actual time=0.000..72.000\n> rows=143629 loops=1)\n> Total runtime: 372.000 ms\n> (11 rows)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 5 Jul 2010 09:47:07 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SeqScans on boolen values / How to speed this up?" }, { "msg_contents": "On 05/07/10 19:36, Jens Hoffrichter wrote:\n> Hello everyone,\n> \n> We've recently finished developing a bigger webapplication, and we are\n> about to put it online.\n> \n> I ran some load tests yesterday, and configured 'slow query' logging\n> beforehand, so I could see if there might be a performance bottleneck\n> in the PG. While I discovered no real problems, the log file analysis\n> using pgFouine revealed two queries, which are executed often, and\n> take quite a bit some time.\n\nIt might be worth looking at what queries have results that change\ninfrequently or don't have to be up to the minute accurate, so they're\ncandidates for caching. Memcached is an incredibly handy tool for taking\nload off your database.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 06 Jul 2010 10:42:36 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SeqScans on boolen values / How to speed this up?" }, { "msg_contents": "On Mon, Jul 5, 2010 at 5:36 AM, Jens Hoffrichter\n<[email protected]> wrote:\n> Hello everyone,\n>\n> We've recently finished developing a bigger webapplication, and we are\n> about to put it online.\n\nIf you're checking for bools, and 99.99% of the result is just true or\njust false, look at creating partial indexes on the .01% part.\n\ncreate index .... (boolfield) where boolfield is true\n\n(or is false)\n", "msg_date": "Mon, 5 Jul 2010 21:19:44 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SeqScans on boolen values / How to speed this up?" } ]
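To make the index suggestions from this thread concrete, here is a minimal, hedged sketch against the tables described above. The index names are invented for illustration, and whether the planner actually prefers them depends on how selective the conditions are; with roughly 144k rows and almost all of them matching, the seq scans shown above may well remain the cheaper plan:

    -- Partial indexes only pay off when the indexed condition is rare.
    CREATE INDEX idx_efclicences_fencer ON efclicences (person_id) WHERE fencer;
    CREATE INDEX idx_persons_active ON persons (nation_id) WHERE NOT deleted AND NOT inactive;

    -- For the indexing-backlog query, indexes on the timestamps give the
    -- planner an alternative to hashing both tables in full.
    CREATE INDEX idx_persons_modified ON persons (modified);
    CREATE INDEX idx_indexing_persons_indexed ON indexing_persons (indexed);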
[ { "msg_contents": "Hi,\n\nIt seems that this is an issue faced by others as well - Please see this link: http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table\n\nIs this a known bug? Is this something that someone is working on or is there a known work around?\n\nThanks,\n\nRanga\n\n\nFrom: [email protected]\nTo: [email protected]\nSubject: Question about partitioned query behavior\nDate: Fri, 2 Jul 2010 15:28:45 +0000\n\n\n\n\n\n\n\n\nHi,\n\nMy question is regarding ORDER BY / LIMIT query behavior when using partitioning.\n\nI have a large table (about 100 columns, several million rows) partitioned by a column called day (which is the date stored as yyyymmdd - say 20100502 for May 2nd 2010 etc.). Say the main table is called FACT_TABLE and each child table is called FACT_TABLE_yyyymmdd (e.g. FACT_TABLE_20100502, FACT_TABLE_20100503 etc.) and has an appropriate CHECK constraint created on it to CHECK (day = yyyymmdd).\n\nPostgres Version: PostgreSQL 8.4.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\n\nThe query pattern I am looking at is (I have tried to simplify the column names for readability):\n\nSELECT F1 from FACT_TABLE \nwhere day >= 20100502 and day <= 20100507 # selecting for a week\nORDER BY F2 desc\nLIMIT 100\n\n\nThis is what is happening:\n\nWhen I query from the specific day's (child) table, I get what I expect - a descending Index scan and good performance.\n\n# explain select F1 from FACT_TABLE_20100502 where day = 20100502 order by F2 desc limit 100;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------\n--\n Limit (cost=0.00..4.81 rows=100 width=41)\n -> Index Scan Backward using F2_20100502 on FACT_TABLE_20100502 (cost=0.00..90355.89 rows=1876985 width=41\n)\n Filter: (day = 20100502)\n\n\n\nBUT:\n\nWhen I do the same query against the parent table it is much slower - two things seem to happen - one is that the descending scan of the index is not done and secondly there seems to be a separate sort/limit at the end - i.e. all data from all partitions is retrieved and then sorted and limited - This seems to be much less efficient than doing a descending scan on each partition and limiting the results and then combining and reapplying the limit at the end.\n\nexplain select F1 from FACT_TABLE where day = 20100502 order by F2 desc limit 100;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------\n---\n Limit (cost=20000084948.01..20000084948.01 rows=100 width=41)\n -> Sort (cost=20000084948.01..20000084994.93 rows=1876986 width=41)\n Sort Key: public.FACT_TABLE.F2\n -> Result (cost=10000000000.00..20000084230.64 rows=1876986 width=41)\n -> Append (cost=10000000000.00..20000084230.64 rows=1876986 width=41)\n -> Seq Scan on FACT_TABLE (cost=10000000000.00..10000000010.02 rows=1 width=186)\n Filter: (day = 20100502)\n -> Seq Scan on FACT_TABLE_20100502 FACT_TABLE (cost=10000000000.00..10000084220.62 rows=1876985 width=4\n1)\n Filter: (day = 20100502)\n(9 rows)\n\n\nCould anyone please explain why this is happening and what I can do to get the query to perform well even when querying from the parent table?\n\nThanks,\n\nRanga\n\n\n\n\n \t\t \t \t\t \nHotmail is redefining busy with tools for the New Busy. Get more from your inbox. See how. 
", "msg_date": "Tue, 6 Jul 2010 16:30:06 +0000", "msg_from": "Ranga Gopalan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about partitioned query behavior" }, { "msg_contents": "On Tue, Jul 6, 2010 at 12:30 PM, Ranga Gopalan\n<[email protected]> wrote:\n> It seems that this is an issue faced by others as well - Please see this\n> link:\n> http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table\n>\n> Is this a known bug? Is this something that someone is working on or is\n> there a known work around?\n\nI think that we know this problem exists, but I'm not aware anyone is\nworking on fixing it. There is a \"Merge Append\" patch floating around\nout there that I think might help with this, but AFAICS it was last\nupdated on July 5, 2009, and still needed some more work at that time.\n\nSince this is an all-volunteer effort, complicated problems like this\ndon't always get fixed as fast as we'd like; most of us have to spend\nmost of our time on whatever it is that our employer pays us to do.\nOf course if you're in a position to sponsor a developer there are a\nnumber of companies that will be happy to work with you.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 6 Jul 2010 15:30:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about partitioned query behavior" }, { "msg_contents": "Ranga,\n\n* Ranga Gopalan ([email protected]) wrote:\n> It seems that this is an issue faced by others as well - Please see this link: http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table\n> \n> Is this a known bug?
Is this something that someone is working on or is there a known work around?\n\nActually, if you look at that, the problem the original poster had was\nthat they didn't have constraint_exclusion turned on. Then they were\ncomplaining about having the (empty) master table and the needed\npartition included (which, really, shouldn't be that big a deal).\n\nDid you look at what the other reply suggested? Do you have\nconstraint_exclusion = 'on' in your postgresql.conf?\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 6 Jul 2010 16:26:23 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about partitioned query behavior" }, { "msg_contents": "Hi Stephen,\n\nConstraint exclusion was initially set to \"partition\" and I set it to \"on\" as suggested and\ntried that - the query planner in both cases was correctly identifying the specific partitions being queried - the problem seems to be a generic issue related to the way queries on partitioned tables are handled and how the order by / limit is applied in this scenario.\n\nThanks,\n\nRanga\n\n> Date: Tue, 6 Jul 2010 16:26:23 -0400\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] Question about partitioned query behavior\n> \n> Ranga,\n> \n> * Ranga Gopalan ([email protected]) wrote:\n> > It seems that this is an issue faced by others as well - Please see this link: http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table\n> > \n> > Is this a known bug? Is this something that someone is working on or is there a known work around?\n> \n> Actually, if you look at that, the problem the original poster had was\n> that they didn't have constraint_exclusion turned on. Then they were\n> complaining about having the (empty) master table and the needed\n> partition included (which, really, shouldn't be that big a deal).\n> \n> Did you look at what the other reply suggested? Do you have\n> constraint_exclusion = 'on' in your postgresql.conf?\n> \n> \tThanks,\n> \n> \t\tStephen", "msg_date": "Tue, 6 Jul 2010 21:02:22 +0000", "msg_from": "Ranga Gopalan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about partitioned query behavior" } ]
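Until something like the Merge Append patch mentioned above is available, a workaround that is sometimes suggested for this ORDER BY ... LIMIT pattern is to push the sort and limit into each child table by hand and then re-limit the combined result. A rough, untested sketch, assuming the same partition naming as in the original post and one branch per day in the requested range:

    SELECT F1 FROM (
        (SELECT F1, F2 FROM FACT_TABLE_20100502 ORDER BY F2 DESC LIMIT 100)
        UNION ALL
        (SELECT F1, F2 FROM FACT_TABLE_20100503 ORDER BY F2 DESC LIMIT 100)
        -- ... repeat for each remaining partition up to 20100507 ...
        UNION ALL
        (SELECT F1, F2 FROM FACT_TABLE_20100507 ORDER BY F2 DESC LIMIT 100)
    ) AS per_partition
    ORDER BY F2 DESC
    LIMIT 100;

Each branch can then use the descending index scan on its own partition, at the cost of having to maintain the query whenever the date range changes.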
[ { "msg_contents": "Hi everyone,\n\nI'm running 8.4.2 on a CentOS machine, and postgres recently died with signal 6 because the pg_xlog partition filled up (33GB) on 7/4/10 10:34:23 (perfect timing, as I was hiking in the mountains in the remotest parts of our country). I did some digging and found the following:\n\n-- The current state of /db/data/pg_xlog (after postgres died on 7/4/10) indicates there are 2056 files, all created between 7/4/10 10:01 and 7/4/10 10:34, and a quick bash-kung-fu check shows that these files were created anywhere from 3 per minute (10:22) to 106 per minute (10:12)\n-- wal_buffers=8MB; commit_siblings=200; commit_delay=0; checkpoint_segments=16\n-- All other WAL-related parameters are either defaults or commented out\n\n-- syslog shows that on 7/2/10 from 10:16:11 to 10:21:51, messages similar to this occurred 29 times:\n> Jul 2 10:16:11 db4.sac postgres[20526]: [56-1] 2010-07-02 10:16:11.117 PDT [user=,db= PID:20526 XID:0]LOG: checkpoints are occurring too frequently (4 seconds apart)\n> Jul 2 10:16:11 db4.sac postgres[20526]: [56-2] 2010-07-02 10:16:11.117 PDT [user=,db= PID:20526 XID:0]HINT: Consider increasing the configuration parameter \"checkpoint_segments\".\n\n-- On 7/4/10 from 09:09:02 to 09:10:08, the same type of messages occurred 21 times in syslog\n-- This message did not occur on 7/3/10 at all\n\n-- Preceding the \"checkpoints are occurring too frequently\" syslog entries are autovacuum entries:\n> Jul 2 10:16:04 db4.sac postgres[11357]: [7-1] 2010-07-02 10:16:04.576 PDT [user=,db= PID:11357 XID:0]LOG: automatic vacuum of table \"tii._sac_global.sl_event\": index scans: 1\n> Jul 2 10:16:04 db4.sac postgres[11357]: [7-2] pages: 0 removed, 25 remain\n> Jul 2 10:16:04 db4.sac postgres[11357]: [7-3] tuples: 676 removed, 955 remain\n> Jul 2 10:16:04 db4.sac postgres[11357]: [7-4] system usage: CPU 0.01s/0.00u sec elapsed 0.02 sec\n> Jul 2 10:16:04 db4.sac postgres[11357]: [8-1] 2010-07-02 10:16:04.580 PDT [user=,db= PID:11357 XID:197431667]LOG: automatic analyze of table \"tii._sac_global.sl_event\" system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n> Jul 2 10:16:04 db4.sac postgres[11357]: [9-1] 2010-07-02 10:16:04.965 PDT [user=,db= PID:11357 XID:0]LOG: automatic vacuum of table \"tii._sac_global.sl_confirm\": index scans: 1\n> Jul 2 10:16:04 db4.sac postgres[11357]: [9-2] pages: 0 removed, 154 remain\n> Jul 2 10:16:04 db4.sac postgres[11357]: [9-3] tuples: 1834 removed, 8385 remain\n> Jul 2 10:16:04 db4.sac postgres[11357]: [9-4] system usage: CPU 0.32s/0.04u sec elapsed 0.37 sec\n\n> Jul 4 09:08:56 db4.sac postgres[21798]: [13-1] 2010-07-04 09:08:56.107 PDT [user=,db= PID:21798 XID:0]LOG: automatic vacuum of table \"tii.public.city\": index scans: 0\n> Jul 4 09:08:56 db4.sac postgres[21798]: [13-2] pages: 0 removed, 151 remain\n> Jul 4 09:08:56 db4.sac postgres[21798]: [13-3] tuples: 0 removed, 20223 remain\n> Jul 4 09:08:56 db4.sac postgres[21798]: [13-4] system usage: CPU 0.00s/0.00u sec elapsed 0.01 sec\n> Jul 4 09:08:56 db4.sac postgres[21798]: [14-1] 2010-07-04 09:08:56.118 PDT [user=,db= PID:21798 XID:0]LOG: automatic vacuum of table \"tii.public.country\": index scans: 1\n> Jul 4 09:08:56 db4.sac postgres[21798]: [14-2] pages: 0 removed, 2 remain\n> Jul 4 09:08:56 db4.sac postgres[21798]: [14-3] tuples: 77 removed, 185 remain\n> Jul 4 09:08:56 db4.sac postgres[21798]: [14-4] system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n> Jul 4 09:08:56 db4.sac postgres[21798]: [15-1] 2010-07-04 09:08:56.335 PDT [user=,db= PID:21798 XID:0]LOG: 
automatic vacuum of table \"tii.public.gm3_clipboard\": index scans: 1\n> Jul 4 09:08:56 db4.sac postgres[21798]: [15-2] pages: 0 removed, 897 remain\n> Jul 4 09:08:56 db4.sac postgres[21798]: [15-3] tuples: 2 removed, 121594 remain\n> Jul 4 09:08:56 db4.sac postgres[21798]: [15-4] system usage: CPU 0.07s/0.08u sec elapsed 0.20 sec\n...snip...\n> Jul 4 09:10:25 db4.sac postgres[22066]: [23-1] 2010-07-04 09:10:25.921 PDT [user=,db= PID:22066 XID:0]LOG: automatic vacuum of table \"tii.public.pm_assignment\": index scans: 1\n> Jul 4 09:10:25 db4.sac postgres[22066]: [23-2] pages: 0 removed, 995 remain\n> Jul 4 09:10:25 db4.sac postgres[22066]: [23-3] tuples: 323 removed, 83964 remain\n> Jul 4 09:10:25 db4.sac postgres[22066]: [23-4] system usage: CPU 0.01s/0.09u sec elapsed 0.52 sec\n> Jul 4 09:10:25 db4.sac postgres[22073]: [7-1] 2010-07-04 09:10:25.978 PDT [user=,db= PID:22073 XID:0]LOG: automatic vacuum of table \"tii.public.pm_question_type\": index scans: 0\n> Jul 4 09:10:25 db4.sac postgres[22073]: [7-2] pages: 0 removed, 1 remain\n> Jul 4 09:10:25 db4.sac postgres[22073]: [7-3] tuples: 0 removed, 2 remain\n> Jul 4 09:10:25 db4.sac postgres[22073]: [7-4] system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n> Jul 4 09:10:26 db4.sac postgres[22066]: [24-1] 2010-07-04 09:10:26.301 PDT [user=,db= PID:22066 XID:0]LOG: automatic vacuum of table \"tii.public.pm_association_rule\": index scans: 1\n> Jul 4 09:10:26 db4.sac postgres[22066]: [24-2] pages: 0 removed, 286 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [24-3] tuples: 46 removed, 51321 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [24-4] system usage: CPU 0.00s/0.03u sec elapsed 0.34 sec\n> Jul 4 09:10:26 db4.sac postgres[22066]: [25-1] 2010-07-04 09:10:26.328 PDT [user=,db= PID:22066 XID:0]LOG: automatic vacuum of table \"tii.public.role\": index scans: 0\n> Jul 4 09:10:26 db4.sac postgres[22066]: [25-2] pages: 0 removed, 1 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [25-3] tuples: 0 removed, 5 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [25-4] system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n> Jul 4 09:10:26 db4.sac postgres[22066]: [26-1] 2010-07-04 09:10:26.373 PDT [user=,db= PID:22066 XID:0]LOG: automatic vacuum of table \"tii.public.pm_review_type\": index scans: 0\n> Jul 4 09:10:26 db4.sac postgres[22066]: [26-2] pages: 0 removed, 1 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [26-3] tuples: 0 removed, 4 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [26-4] system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n> Jul 4 09:10:26 db4.sac postgres[22066]: [27-1] 2010-07-04 09:10:26.428 PDT [user=,db= PID:22066 XID:0]LOG: automatic vacuum of table \"tii.public.permission\": index scans: 0\n> Jul 4 09:10:26 db4.sac postgres[22066]: [27-2] pages: 0 removed, 1 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [27-3] tuples: 0 removed, 8 remain\n> Jul 4 09:10:26 db4.sac postgres[22066]: [27-4] system usage: CPU 0.00s/0.00u sec elapsed 0.01 sec\n\n-- min_duration_statement=1000, so there is a chance that the volume of really-fast write traffic might be masked, but I'm doubtful as this is our low-usage season (we run a web service for the education sector).\n\n-- Another tidbit of information is that I am using slony 2.0.3 for replication, and my sync_interval is 500. 
However, I'm doubtful that slony traffic is responsible because my other nodes use the exact same config and hardware, and none of them had this issue.\n\n-- My last administrative action on this machine was to SIGHUP this machine to push changes to pg_hba.conf on 6/29/10. Just now, I diff-ed my postgresql.conf file with a copy that I keep in svn--no changes at all.\n\nThis leads me to believe that there was a sudden flurry of write activity that occurred, and the process that would flush WAL files to /db/data/ couldn't keep up, thereby filling up the disk. I'm wondering if anyone else out there might be able to give me some insight or comments to my assessment--is it accurate? Any input would be helpful, and I'll try to make necessary architectural changes to keep this from happening again.\n\nYour help is much appreciated!\n--Richard", "msg_date": "Tue, 6 Jul 2010 18:10:10 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "WAL partition overloaded--by autovacuum?" }, { "msg_contents": "On Tue, Jul 6, 2010 at 7:10 PM, Richard Yen <[email protected]> wrote:\n> This leads me to believe that there was a sudden flurry of write activity that occurred, and the process that would flush WAL files to /db/data/ couldn't keep up, thereby filling up the disk.  I'm wondering if anyone else out there might be able to give me some insight or comments to my assessment--is it accurate?  Any input would be helpful, and I'll try to make necessary architectural changes to keep this from happening again.\n\nI tend to agree. What kind of disk setup is under your data store?\nIf it's RAID-5 or RAID-6 is it possibly degraded?\n\nI'd run iostat -xd 10 while the write load it high to see how your\nxlog partition %util compares to the main data store partition.\n\nDo you have your main data store partition under a battery backed\ncaching RAID controller? Tell us what you can about your hardware\nsetup.\n", "msg_date": "Tue, 6 Jul 2010 21:25:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL partition overloaded--by autovacuum?" }, { "msg_contents": "On Tue, Jul 6, 2010 at 7:10 PM, Richard Yen <[email protected]> wrote:\n\nOne more thing, do you have long running transactions during these periods?\n", "msg_date": "Tue, 6 Jul 2010 21:27:10 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL partition overloaded--by autovacuum?" }, { "msg_contents": "And, read this:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n", "msg_date": "Tue, 6 Jul 2010 22:13:53 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL partition overloaded--by autovacuum?" }, { "msg_contents": "On 07/07/10 13:10, Richard Yen wrote:\n>\n> This leads me to believe that there was a sudden flurry of write activity that occurred, and the process that would flush WAL files to /db/data/ couldn't keep up, thereby filling up the disk. I'm wondering if anyone else out there might be able to give me some insight or comments to my assessment--is it accurate? Any input would be helpful, and I'll try to make necessary architectural changes to keep this from happening again.\n>\n\nDo you have wal archiving enabled? 
(if so lets see your archive_command).\n\nCheers\n\nMark\n", "msg_date": "Wed, 07 Jul 2010 20:58:35 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum?" }, { "msg_contents": "Sorry, I forgot to mention that archive_mode is \"off\" and commented out, and archive command is '' and commented out. \n\nThanks for following up!\n-- Richard\n\nOn Jul 7, 2010, at 1:58, Mark Kirkwood <[email protected]> wrote:\n\n> On 07/07/10 13:10, Richard Yen wrote:\n>> \n>> This leads me to believe that there was a sudden flurry of write activity that occurred, and the process that would flush WAL files to /db/data/ couldn't keep up, thereby filling up the disk. I'm wondering if anyone else out there might be able to give me some insight or comments to my assessment--is it accurate? Any input would be helpful, and I'll try to make necessary architectural changes to keep this from happening again.\n>> \n> \n> Do you have wal archiving enabled? (if so lets see your archive_command).\n> \n> Cheers\n> \n> Mark\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 7 Jul 2010 09:39:11 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum?" }, { "msg_contents": "On Jul 6, 2010, at 8:25 PM, Scott Marlowe wrote:\n\n> Tell us what you can about your hardware setup.\n\nSorry, I made the bad assumption that the hardware setup would be irrelevant--dunno why I thought that.\n\nMy hardware setup is 2 FusionIO 160GB drives in a RAID-1 configuration, running on an HP DL360 G5\n\nI think I figured out the problem:\n\n-- I figured that pg_xlog and data/base could both be on the FusionIO drive, since there would be no latency when there are no spindles.\n-- However, I didn't take into account the fact that pg_xlog might grow in size when autovacuum does its work when vacuuming to prevent XID wraparound. I *just* discovered this when one of my other replication nodes decided to die on me and fill up its disk.\n-- Unfortunately, my db is 114GB (including indexes) or 60GB (without indexes), leaving ~37GB for pg_xlog (since they are sharing a partition). So I'm guessing what happened was that when autovacuum ran to prevent XID wraparound, it takes each table and changes the XID, and it gets recorded in WAL, causing WAL to bloat. This this the correct understanding?\n\nQuestion for now is, documentation says:\n> There will always be at least one WAL segment file, and will normally not be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1 files. Each segment file is normally 16 MB (though this size can be altered when building the server). You can use this to estimate space requirements for WAL. Ordinarily, when old log segment files are no longer needed, they are recycled (renamed to become the next segments in the numbered sequence). If, due to a short-term peak of log output rate, there are more than 3 * checkpoint_segments + 1 segment files, the unneeded segment files will be deleted instead of recycled until the system gets back under this limit.\n\nThis means my pg_xlog partition should be (2 + checkpoint_completion_target) * checkpoint_segments + 1 = 41 files, or 656MB. 
Then, if there are more than 49 files, unneeded segment files will be deleted, but in this case all segment files are needed, so they never got deleted. Perhaps we should add in the docs that pg_xlog should be the size of the DB or larger?\n\n--Richard\n\n\n\n", "msg_date": "Wed, 7 Jul 2010 12:32:17 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum?" }, { "msg_contents": "On Wed, Jul 7, 2010 at 3:32 PM, Richard Yen <[email protected]> wrote:\n> On Jul 6, 2010, at 8:25 PM, Scott Marlowe wrote:\n>\n>> Tell us what you can about your hardware setup.\n>\n> Sorry, I made the bad assumption that the hardware setup would be irrelevant--dunno why I thought that.\n>\n> My hardware setup is 2 FusionIO 160GB drives in a RAID-1 configuration, running on an HP DL360 G5\n>\n> I think I figured out the problem:\n>\n> -- I figured that pg_xlog and data/base could both be on the FusionIO drive, since there would be no latency when there are no spindles.\n> -- However, I didn't take into account the fact that pg_xlog might grow in size when autovacuum does its work when vacuuming to prevent XID wraparound.  I *just* discovered this when one of my other replication nodes decided to die on me and fill up its disk.\n> -- Unfortunately, my db is 114GB (including indexes) or 60GB (without indexes), leaving ~37GB for pg_xlog (since they are sharing a partition).  So I'm guessing what happened was that when autovacuum ran to prevent XID wraparound, it takes each table and changes the XID, and it gets recorded in WAL, causing WAL to bloat.  This this the correct understanding?\n\nThat seems logical (and un-fun), but I don't understand how you\nmanaged to fill up 37GB of disk with WAL files. Every time you fill\nup checkpoint_segments * 16MB of WAL files, you ought to get a\ncheckpoint. When it's complete, WAL segments completely written\nbefore the start of the checkpoint should be recyclable. Unless I'm\nconfused, which apparently I am.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 8 Jul 2010 14:51:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by\n\tautovacuum?" }, { "msg_contents": "Robert Haas <[email protected]> wrote:\n \n> I don't understand how you managed to fill up 37GB of disk with\n> WAL files. Every time you fill up checkpoint_segments * 16MB of\n> WAL files, you ought to get a checkpoint. When it's complete, WAL\n> segments completely written before the start of the checkpoint\n> should be recyclable. Unless I'm confused, which apparently I am.\n \nYou're not alone. At first I was assuming that it was because of\narchiving, but the OP says that's turned off. Unless it had been on\nand there wasn't a *restart* after changing the configuration, I\ncan't see how this could happen, and was hoping someone else could\ncast some light on the issue.\n \nThe one setting that gave me pause was:\n \ncommit_siblings=200\n \nbut it doesn't seem like that should matter with:\n \ncommit_delay=0;\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 14:04:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by\n\t autovacuum?" 
}, { "msg_contents": "\nOn Jul 8, 2010, at 12:04 PM, Kevin Grittner wrote:\n\n> Robert Haas <[email protected]> wrote:\n> \n>> I don't understand how you managed to fill up 37GB of disk with\n>> WAL files. Every time you fill up checkpoint_segments * 16MB of\n>> WAL files, you ought to get a checkpoint. When it's complete, WAL\n>> segments completely written before the start of the checkpoint\n>> should be recyclable. Unless I'm confused, which apparently I am.\n> \n> You're not alone. At first I was assuming that it was because of\n> archiving, but the OP says that's turned off. Unless it had been on\n> and there wasn't a *restart* after changing the configuration, I\n> can't see how this could happen, and was hoping someone else could\n> cast some light on the issue.\n\nI'm fairly confused myself. I'm beginning to think that because data/base and data/pg_xlog were on the same partition (/db), when the /db partition filled, up the WAL files couldn't get flushed to data/base, thereby preventing data/pg_xlog from being emptied out, as per the documentation.\n\nMy concern is that--as in the original post--there were moments where 129 WAL files were generated in one minute. Is it plausible that this autovacuum could be responsible for this?\n\n--Richard", "msg_date": "Thu, 8 Jul 2010 12:25:01 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum?" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> You're not alone. At first I was assuming that it was because of\n> archiving, but the OP says that's turned off. Unless it had been on\n> and there wasn't a *restart* after changing the configuration,\n\nYeah, I was less than convinced by his eyeball report of that, too.\n\"show archive_mode\" would be a much more convincing check of the\nserver's state. Or would have been, if the server hadn't been restarted\nsince the problem occurred.\n\narchive_mode on with a bad archive_command would lead directly to the\nreported problem ... although it should also lead to pretty obvious\ncomplaints in the postmaster log.\n\n(Hmm ... but those complaints are logged at level WARNING, which as\ndiscussed elsewhere is really lower than LOG. Should we change them?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jul 2010 15:27:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum? " }, { "msg_contents": "\nOn Jul 8, 2010, at 12:27 PM, Tom Lane wrote:\n> \n> (Hmm ... but those complaints are logged at level WARNING, which as\n> discussed elsewhere is really lower than LOG. Should we change them?)\n\nHmm, I did a grep on \"WARNING\" on my log, and the only thing that turns up are the \"WARNING: terminating connection because of crash of another server process\" entries when postgres died and shut down when the disks filled up.\n\nWould this be conclusive evidence that archive_mode=off?\n\n--Richard", "msg_date": "Thu, 8 Jul 2010 12:34:19 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum? " }, { "msg_contents": "Richard Yen <[email protected]> wrote:\n \n> there were moments where 129 WAL files were generated in one\n> minute. 
Is it plausible that this autovacuum could be responsible\n> for this?\n \nI don't remember seeing your autovacuum and vacuum config settings,\nor an answer to my question about whether there was a bulk load of a\nsignificant portion of current data. With agressive autovacuum\nsettings, hitting the freeze threshold for bulk-loaded data could do\nthat.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 14:50:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by\n\t autovacuum?" }, { "msg_contents": "Richard Yen <[email protected]> writes:\n> My concern is that--as in the original post--there were moments where 129 WAL files were generated in one minute. Is it plausible that this autovacuum could be responsible for this?\n\nThat's not a particularly surprising WAL creation rate for a busy\ndatabase. I wouldn't expect autovacuum to cause it by itself, but\nthat's true only because autovacuum processing is typically throttled\nby autovacuum_vacuum_cost_delay. Perhaps you had that set to zero?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jul 2010 15:50:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum? " }, { "msg_contents": "\nOn Jul 8, 2010, at 12:50 PM, Tom Lane wrote:\n\n> Richard Yen <[email protected]> writes:\n>> My concern is that--as in the original post--there were moments where 129 WAL files were generated in one minute. Is it plausible that this autovacuum could be responsible for this?\n> \n> That's not a particularly surprising WAL creation rate for a busy\n> database. I wouldn't expect autovacuum to cause it by itself, but\n> that's true only because autovacuum processing is typically throttled\n> by autovacuum_vacuum_cost_delay. Perhaps you had that set to zero?\n> \n\nAh, yes, autovacuum_vacuum_cost_delay = 0 in my config. That explains it--guess I'm playing with knives if I set things that way...\n\n--Richard\n\n", "msg_date": "Thu, 8 Jul 2010 13:06:27 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum? " }, { "msg_contents": "\nOn Jul 8, 2010, at 12:50 PM, Kevin Grittner wrote:\n\n> Richard Yen <[email protected]> wrote:\n> \n>> there were moments where 129 WAL files were generated in one\n>> minute. Is it plausible that this autovacuum could be responsible\n>> for this?\n> \n> I don't remember seeing your autovacuum and vacuum config settings,\n> or an answer to my question about whether there was a bulk load of a\n> significant portion of current data. With agressive autovacuum\n> settings, hitting the freeze threshold for bulk-loaded data could do\n> that.\n> \n\nYeah, autovacuum is pretty aggressive, as I recall:\nautovacuum = on\nlog_autovacuum_min_duration = 0\nautovacuum_max_workers = 8\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 400\nautovacuum_analyze_threshold = 200\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_freeze_max_age = 200000000\nautovacuum_vacuum_cost_delay = 0\nautovacuum_vacuum_cost_limit = -1\n\nvacuum_cost_delay = 0\nvacuum_cost_limit = 200\n\nWhen you say \"bulk-loaded,\" I suppose that also includes loading the data via slony ADD NODE as well--correct? 
I created this node maybe 6 months ago via slony ADD NODE\n\nThanks much for your time,\n--Richard", "msg_date": "Thu, 8 Jul 2010 13:06:34 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum?" }, { "msg_contents": "Richard Yen <[email protected]> wrote:\n \n> When you say \"bulk-loaded,\" I suppose that also includes loading\n> the data via slony ADD NODE as well--correct?\n \nI would think so. Anything which loads a lot of data in relatively\nfew database transactions would qualify; I would think slony would\ndo this, as it's generally required to get decent performance.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 15:14:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by\n\t autovacuum?" }, { "msg_contents": "Richard Yen wrote:\n> I figured that pg_xlog and data/base could both be on the FusionIO drive, since there would be no latency when there are no spindles.\n> \n\n\n(Rolls eyes) Please be careful about how much SSD Kool-Aid you drink, \nand be skeptical of vendor claims. They don't just make latency go away, \nparticularly on heavy write workloads where the technology is at its \nweakest.\n\nAlso, random note, I'm seeing way too many FusionIO drive setups where \npeople don't have any redundancy to cope with a drive failure, because \nthe individual drives are so expensive they don't have more than one. \nMake sure that if you lose one of the drives, you won't have a massive \ndata loss. Replication might help with that, if you can stand a little \nbit of data loss when the SSD dies. Not if--when. Even if you have a \ngood one they don't last forever.\n\n> This means my pg_xlog partition should be (2 + checkpoint_completion_target) * checkpoint_segments + 1 = 41 files, or 656MB. Then, if there are more than 49 files, unneeded segment files will be deleted, but in this case all segment files are needed, so they never got deleted. Perhaps we should add in the docs that pg_xlog should be the size of the DB or larger?\n> \n\nExcessive write volume beyond the capacity of the hardware can end up \ndelaying the normal checkpoint that would have cleaned up all the xlog \nfiles. There's a nasty spiral that can get into I've seen a couple of \ntimes in similar form to what you reported. The pg_xlog should never \nexceed the size computed by that formula for very long, but it can burst \nabove its normal size limits for a little bit. This is already mentioned \nas possibility in the manual: \"If, due to a short-term peak of log \noutput rate, there are more than 3 * checkpoint_segments + 1 segment \nfiles, the unneeded segment files will be deleted instead of recycled \nuntil the system gets back under this limit.\" Autovacuum is an easy way \nto get the sort of activity needed to cause this problem, but I don't \nknow if it's a necessary component to see the problem. You have to be in \nan unusual situation before the sum of the xlog files is anywhere close \nto the size of the database though.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 10 Jul 2010 00:23:53 +0100", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Slony1-general] WAL partition overloaded--by autovacuum?" } ]
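For reference, the formula quoted in this thread puts the normal pg_xlog ceiling for these settings at about (2 + 0.5) * 16 + 1 = 41 segments, roughly 656 MB of 16 MB files, so reaching 33 GB means WAL was being generated far faster than checkpoints could recycle it. A hedged way to watch for that on the 8.4 servers discussed here (the sampling interval is arbitrary):

    -- Run twice, a minute or so apart; every 16 MB of difference between the
    -- two reported WAL positions is one segment written during that interval.
    SELECT now(), pg_current_xlog_location();

Restoring autovacuum_vacuum_cost_delay to a non-zero value (the default is 20ms) and raising checkpoint_segments would also make this kind of burst less likely, at the cost of slower vacuums and longer crash recovery; the right values depend on the hardware and write load.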
[ { "msg_contents": "\n\n\n\nHi,\n\nI've trouble with some SQL request which have different execution plans\nwhen ran on two different servers. One server is the development\nenvironment, the othe rone is th pre-production env.\nBoth servers run postgreSQL 8.3.0 on Linux and :\n - both databases contains the same data (pg_dump/pg_restore between\nservers)\n - instances have the same configuration parameters\n - vaccum and analyze is run every day.\nThe only difference I can see is the hardware. The pre-preoduction env.\nhas more RAM, more CPU and a RAID5 disk array through LVM where data\nare stored. \nPerformances should be better on the pre-production but unfortunatelly\nthis is not the case.\nBelow are the execution plan on development env and pre-production. As\nyou can see pre-production performance are poor, far slower than dev.\nenv !\nFor information, enable_seqscan is turned off (some DBA advice). \nAlso I can get the same execution plan on both environment by turning\noff enable_mergejoin and enable_hashjoin on the pre-production. Then\nexecution matches and performances are much better.\nCould anyone help to guess why both servers don't have the same\nexecution plans ? Can this be due to hardware difference ?\n\nLet me know if you need more detailed informations on these\nconfigurations.\n\nThanks.\n\nDev. environment :\nEXPLAIN analyze SELECT DISTINCT\nConstantesTableNBienService.id,ConstantesTableNBienService.code,ConstantesTableNBienService.lib_code\nFROM T_DEMANDE ConstantesTableDemande\nLEFT OUTER JOIN  T_OPERATION ConstantesTableOperation\n    ON ConstantesTableDemande.id_tech =\nConstantesTableOperation.id_demande\nLEFT OUTER JOIN T_BIEN_SERVICE ConstantesTableBienService\n    ON  ConstantesTableBienService.id_operation =\nConstantesTableOperation.id_tech\nLEFT OUTER JOIN N_BIEN_SERVICE ConstantesTableNBienService\n    ON ConstantesTableBienService.bs_code =\nConstantesTableNBienService.id\nWHERE\n    ConstantesTableDemande.id_tech = 'y+3eRapRQjW8mtL4wHd4/A=='\n    AND ConstantesTableOperation.type_operation = 'acq'\n    AND ConstantesTableNBienService.parent is null\nORDER BY ConstantesTableNBienService.code ASC;\n                                                                                            \nQUERY PLAN                                                      \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=3586307.73..3586341.94 rows=205 width=123) (actual\ntime=440.626..440.875 rows=1 loops=1)\n   ->  Sort  (cost=3586307.73..3586316.28 rows=3421 width=123)\n(actual time=440.624..440.723 rows=187 loops=1)\n         Sort Key: constantestablenbienservice.code,\nconstantestablenbienservice.id, constantestablenbienservice.lib_code\n         Sort Method:  quicksort  Memory: 24kB\n         ->  Nested Loop Left Join  (cost=40.38..3586106.91\nrows=3421 width=123) (actual time=71.696..440.240 rows=187 loops=1)\n               Filter: (constantestablenbienservice.parent IS NULL)\n               ->  Nested Loop Left Join  (cost=40.38..3554085.80\nrows=6842 width=4) (actual time=66.576..433.797 rows=187 loops=1)\n                     ->  Nested Loop  (cost=0.00..5041.46 rows=1246\nwidth=25) (actual time=22.923..23.054 rows=30 loops=1)\n                           ->  Index Scan using t_demande_pkey on\nt_demande constantestabledemande  (cost=0.00..8.32 rows=1 width=25)\n(actual time=5.534..5.537 rows=1 
loops=1)\n                                 Index Cond: ((id_tech)::text =\n'y+3eRapRQjW8mtL4wHd4/A=='::text)\n                           ->  Index Scan using\nidx_operation_demande on t_operation constantestableoperation \n(cost=0.00..5020.68 rows=1246 width=50) (actual time=17.382..17.460\nrows=30 loops=1)\n                                 Index Cond:\n((constantestableoperation.id_demande)::text =\n'y+3eRapRQjW8mtL4wHd4/A=='::text)\n                                 Filter:\n((constantestableoperation.type_operation)::text = 'acq'::text)\n                     ->  Bitmap Heap Scan on t_bien_service\nconstantestablebienservice  (cost=40.38..2836.96 rows=911 width=29)\n(actual time=13.511..13.677 rows=6 loops=30)\n                           Recheck Cond:\n((constantestablebienservice.id_operation)::text =\n(constantestableoperation.id_tech)::text)\n                           ->  Bitmap Index Scan on\nidx_bien_service_operation  (cost=0.00..40.15 rows=911 width=0) (actual\ntime=13.144..13.144 rows=6 loops=30)\n                                 Index Cond:\n((constantestablebienservice.id_operation)::text =\n(constantestableoperation.id_tech)::text)\n               ->  Index Scan using n_bien_service_pkey on\nn_bien_service constantestablenbienservice  (cost=0.00..4.67 rows=1\nwidth=127) (actual time=0.030..0.031 rows=1 loops=187)\n                     Index Cond: (constantestablebienservice.bs_code =\nconstantestablenbienservice.id)\n Total runtime: 2.558 ms\n(20 lignes)\n\n\nPre-production:\nEXPLAIN analyze\nSELECT DISTINCT\nConstantesTableNBienService.id,ConstantesTableNBienService.code,ConstantesTableNBienService.lib_code\nFROM T_DEMANDE ConstantesTableDemande\nLEFT OUTER JOIN  T_OPERATION ConstantesTableOperation\n    ON ConstantesTableDemande.id_tech =\nConstantesTableOperation.id_demande\nLEFT OUTER JOIN T_BIEN_SERVICE ConstantesTableBienService\n    ON  ConstantesTableBienService.id_operation =\nConstantesTableOperation.id_tech\nLEFT OUTER JOIN N_BIEN_SERVICE ConstantesTableNBienService\n    ON ConstantesTableBienService.bs_code =\nConstantesTableNBienService.id\nWHERE\n    ConstantesTableDemande.id_tech = 'y+3eRapRQjW8mtL4wHd4/A=='\n    AND ConstantesTableOperation.type_operation = 'acq'\n    AND ConstantesTableNBienService.parent is null\nORDER BY ConstantesTableNBienService.code ASC;\n                                                                                                       \nQUERY PLAN                                           \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=2679729.52..2679763.24 rows=205 width=123) (actual\ntime=740448.007..740448.269 rows=1 loops=1)\n   ->  Sort  (cost=2679729.52..2679737.95 rows=3372 width=123)\n(actual time=740448.004..740448.111 rows=187 loops=1)\n         Sort Key: constantestablenbienservice.code,\nconstantestablenbienservice.id, constantestablenbienservice.lib_code\n         Sort Method:  quicksort  Memory: 24kB\n         ->  Hash Left Join  (cost=2315662.87..2679531.93 rows=3372\nwidth=123) (actual time=723479.640..740447.597 rows=187 loops=1)\n               Hash Cond: (constantestablebienservice.bs_code =\nconstantestablenbienservice.id)\n               Filter: (constantestablenbienservice.parent IS NULL)\n               ->  Hash Left Join  (cost=2315640.98..2679417.33\nrows=6743 width=4) (actual time=723464.693..740432.218 rows=187 
loops=1)\n                     Hash Cond:\n((constantestableoperation.id_tech)::text =\n(constantestablebienservice.id_operation)::text)\n                     ->  Nested Loop  (cost=39.49..4659.51 rows=1228\nwidth=25) (actual time=0.131..0.309 rows=30 loops=1)\n                           ->  Index Scan using t_demande_pkey on\nt_demande constantestabledemande  (cost=0.00..8.32 rows=1 width=25)\n(actual time=0.047..0.050 rows=1 loops=1)\n                                 Index Cond: ((id_tech)::text =\n'y+3eRapRQjW8mtL4wHd4/A=='::text)\n                           ->  Bitmap Heap Scan on t_operation\nconstantestableoperation  (cost=39.49..4638.90 rows=1228 width=50)\n(actual time=0.079..0.192 rows=30 loops=1)\n                                 Recheck Cond:\n((constantestableoperation.id_demande)::text =\n'y+3eRapRQjW8mtL4wHd4/A=='::text)\n                                 Filter:\n((constantestableoperation.type_operation)::text = 'acq'::text)\n                                 ->  Bitmap Index Scan on\nidx_operation_demande  (cost=0.00..39.18 rows=1228 width=0) (actual\ntime=0.061..0.061 rows=30 loops=1)\n                                       Index Cond:\n((constantestableoperation.id_demande)::text =\n'y+3eRapRQjW8mtL4wHd4/A=='::text)\n                     ->  Hash  (cost=1486192.10..1486192.10\nrows=42894672 width=29) (actual time=723119.538..723119.538\nrows=42894671 loops=1)\n                           ->  Index Scan using\nidx_bien_service_code on t_bien_service constantestablebienservice \n(cost=0.00..1486192.10 rows=42894672 width=29) (actual\ntime=21.546..671603.500 rows=42894671 loops=1)\n               ->  Hash  (cost=19.33..19.33 rows=205 width=127)\n(actual time=14.706..14.706 rows=205 loops=1)\n                     ->  Index Scan using n_bien_service_pkey on\nn_bien_service constantestablenbienservice  (cost=0.00..19.33 rows=205\nwidth=127) (actual time=10.262..14.401 rows=205 loops=1)\n Total runtime: 740465.922 ms\n(22 lignes)\n\n\n\n\n", "msg_date": "Wed, 07 Jul 2010 09:55:28 +0200", "msg_from": "\"JOUANIN Nicolas (44)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Two different execution plan for the same request " }, { "msg_contents": "JOUANIN Nicolas (44) wrote:\n> Hi,\n>\n> I've trouble with some SQL request which have different execution \n> plans when ran on two different servers. One server is the development \n> environment, the othe rone is th pre-production env.\n> Both servers run postgreSQL 8.3.0 on Linux and :\n> - both databases contains the same data (pg_dump/pg_restore between \n> servers)\n> - instances have the same configuration parameters\n> - vaccum and analyze is run every day.\n> The only difference I can see is the hardware. The pre-preoduction \n> env. 
has more RAM, more CPU and a RAID5 disk array through LVM where \n> data are stored.\nHello Jouanin,\n\nCould you give some more information following the guidelines from \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions ?\n\nEssential are the contents from both conf files (comments may be removed).\n\nregards,\nYeb Havinga\n\n", "msg_date": "Wed, 07 Jul 2010 10:27:05 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two different execution plan for the same request" }, { "msg_contents": "", "msg_date": "Wed, 07 Jul 2010 10:47:45 +0200", "msg_from": "\"JOUANIN Nicolas (44)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two different execution plan for the same request" }, { "msg_contents": "JOUANIN Nicolas (44) wrote:\n>\n> The strange thing is that this morning explain analyze now gives a \n> much better duration :\n>\n> There were no modification made on the database except a restart \n> yesterday evening and a vacuumdb --analyse ran at night.\nIf the earlier bad query was run on a fresh imported database that was \nnot ANALYZEd, it would explain the different and likely bad plan. If you \nwant to know for sure this is the cause, instead of e.g. faulty \nhardware, you could verify redoing the import + query without analyze.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Wed, 07 Jul 2010 10:59:23 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two different execution plan for the same request" }, { "msg_contents": "Hi Nicolas,\n\nOn Wed, Jul 7, 2010 at 10:47 AM, JOUANIN Nicolas (44)\n<[email protected]> wrote:\n> There were no modification made on the database except a restart yesterday evening and a vacuumdb --analyse ran at night.\n\nIt's not really surprising considering you probably kept the\ndefault_statistics_target to 10 (it's the default in 8.3).\n\nConsider raising it to 100 in your postgresql.conf (100 is the default\nfor newer versions), then reload, and run a new ANALYZE.\n\nYou might need to set it higher on specific columns if you have a lot\nof data and your data distribution is weird.\n\nAnd, btw, please upgrade to the latest 8.3.x.\n\nHTH\n\n-- \nGuillaume\n", "msg_date": "Wed, 7 Jul 2010 10:59:56 +0200", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two different execution plan for the same request" }, { "msg_contents": "\n\n\n\n\nIt seems to work fine (same\nexecution plan and less duration) after :\n - setting default_statistics_target to 100\n - full vacuum with analyze\n - reindexdb\n\nThanks.\n\n\n-------- Message original --------\nSujet : Re: [PERFORM] Two different execution plan for the same request\nDe : Guillaume Smet <[email protected]>\nPour : JOUANIN Nicolas (44)\n<[email protected]>\nCopie à : Yeb Havinga <[email protected]>,\[email protected]\nDate : 07/07/2010 10:59\n\nHi Nicolas,\n\nOn Wed, Jul 7, 2010 at 10:47 AM, JOUANIN Nicolas (44)\n<[email protected]> wrote:\n \n\nThere were no modification made on the database except a restart yesterday evening and a vacuumdb --analyse ran at night.\n \n\n\nIt's not really surprising considering you probably kept the\ndefault_statistics_target to 10 (it's the default in 8.3).\n\nConsider raising it to 100 in your postgresql.conf (100 is the default\nfor newer versions), then reload, and run a new ANALYZE.\n\nYou might need to set it higher on specific columns if you have a lot\nof data and your data distribution is weird.\n\nAnd, btw, please upgrade to the 
latest 8.3.x.\n\nHTH\n\n \n\n\n\n\n", "msg_date": "Wed, 07 Jul 2010 13:27:07 +0200", "msg_from": "\"JOUANIN Nicolas (44)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two different execution plan for the same request" }, { "msg_contents": "On Wed, 7 Jul 2010, JOUANIN Nicolas (44) wrote:\n> It seems to work fine (same execution plan and less duration) after :\n>  - setting default_statistics_target to 100\n>  - full vacuum with analyze\n\nDon't do VACUUM FULL.\n\nMatthew\n\n-- \n I suppose some of you have done a Continuous Maths course. Yes? Continuous\n Maths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer", "msg_date": "Wed, 7 Jul 2010 13:34:18 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two different execution plan for the same request" } ]
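A minimal sketch of the change this thread converges on, assuming an 8.3-era instance like the pre-production server described above; the per-column override is illustrative only (the thread raised just the global setting), and as noted a plain ANALYZE suffices — VACUUM FULL and REINDEX are not required:

-- In postgresql.conf, then reload: default_statistics_target = 100   (the 8.3 default is 10)
ANALYZE;                                   -- refresh planner statistics database-wide
-- Illustrative only: a higher target on one skewed join column, then re-analyze that table.
ALTER TABLE t_bien_service ALTER COLUMN id_operation SET STATISTICS 200;
ANALYZE t_bien_service;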
[ { "msg_contents": "Richard Yen wrote:\n \n> the pg_xlog partition filled up (33GB)\n \n> checkpoint_segments=16\n \n> a sudden flurry of write activity\n \nWas this database bulk-loaded in some way (like by restoring the\noutput of pg_dump, for example)? If so, all rows inserted into all\ntables would have the same (or very nearly the same) xmin value. At\nsome later time, virtually all tuples would need to be rewritten to\nfreeze them. This would be a logged operation, so WAL files would be\ngetting created rapidly. If you have WAL archiving turned on, and\nyou can't copy the files out as fast as they're being created, this\nmight happen.\n \nTo avoid such a crushing mass freeze at an unpredictable time, we\nalways run VACUUM FREEZE ANALYZE on a bulk-loaded database before\nturning on WAL archiving.\n \nOf course, if this database wasn't bulk-loaded, this can't be your\nproblem....\n \n-Kevin\n\n", "msg_date": "Wed, 07 Jul 2010 06:58:25 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL partition overloaded--by autovacuum?" } ]
[ { "msg_contents": "On 8.1, I have a very interesting database where the distributions of \nsome values in a multi-million rows table is logarithmic (i.e. the most \nfrequent value is an order of magnitude more frequent than the next \nones). If I analyze the table, the statistics become extremely skewed \ntowards the most frequent values and this prevents the planner from \ngiving any good results on queries that do not target these entries.\n\nIn a recent case, the planner estimated that the number of returned rows \nwould be ~13% of the table size and from this bad assumption generated a \nvery slow conservative plan that executed in days. If I set the \nstatistics at zero for that table, the planner uses a hardcoded ratio \n(seems like 0.5%) for the number of returned rows and this helps \ngenerating a plan that executes in 3 minutes (still sub-optimal, but not \nas bad).\n\nGenerating partial index for the less frequent cases helps, but this \nsolution is not flexible enough for our needs as the number of complex \nqueries grow. We are mostly left with pre-generating a lot of temporary \ntables whenever the planner over-evaluates the number of rows generated \nby a subquery (query execution was trimmed from 3 minutes to 30 seconds \nusing this technique) or using the OFFSET 0 tweak, but it would be nice \nif the planner could handle this on its own.\n\nAm I missing something obvious? Setting the statistics for this table to \nzero seems awkward even if it gives good results.\nJerry.\n\n\n", "msg_date": "Wed, 07 Jul 2010 16:54:48 -0400", "msg_from": "Jerry Gamache <[email protected]>", "msg_from_op": true, "msg_subject": "Logarithmic data frequency distributions and the query planner" }, { "msg_contents": "Jerry Gamache <[email protected]> writes:\n> On 8.1, I have a very interesting database where the distributions of \n> some values in a multi-million rows table is logarithmic (i.e. the most \n> frequent value is an order of magnitude more frequent than the next \n> ones). If I analyze the table, the statistics become extremely skewed \n> towards the most frequent values and this prevents the planner from \n> giving any good results on queries that do not target these entries.\n\nHighly skewed distributions are hardly unusual, and I'm not aware that\nthe planner is totally incapable of dealing with them. You do need a\nlarge enough stats target to get down into the tail of the\ndistribution (the default target for 8.1 is probably too small for you).\nIt might be that there have been some other relevant improvements since\n8.1, too ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 2010 17:22:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logarithmic data frequency distributions and the query planner " } ]
[ { "msg_contents": "Postgresql was previously running on a single cpu linux machine with 2 gigs\nof memory and a single sata drive (v8.3). Basically a desktop with linux on\nit. I experienced slow performance.\n\nSo, I finally moved it to a real server. A dually zeon centos machine with\n6 gigs of memory and raid 10, postgres 8.4. But, I am now experiencing even\nworse performance issues.\n\nMy system is consistently highly transactional. However, there is also\nregular complex queries and occasional bulk loads.\n\nOn the new system the bulk loads are extremely slower than on the previous\nmachine and so are the more complex queries. The smaller transactional\nqueries seem comparable but i had expected an improvement. Performing a db\nimport via psql -d databas -f dbfile illustrates this problem. It takes 5\nhours to run this import. By contrast, if I perform this same exact import\non my crappy windows box with only 2 gigs of memory and default postgres\nsettings it takes 1 hour. Same deal with the old linux machine. How is\nthis possible?\n\nHere are some of my key config settings:\nmax_connections = 100\nshared_buffers = 768MB\neffective_cache_size = 2560MB\nwork_mem = 16MB\nmaintenance_work_mem = 128MB\ncheckpoint_segments = 7\ncheckpoint_timeout = 7min\ncheckpoint_completion_target = 0.5\n\nI have tried varying the shared_buffers size from 128 all the way to 1500mbs\nand got basically the same result. Is there a setting change I should be\nconsidering?\n\nDoes 8.4 have performance problems or is this unique to me?\n\nthanks\n\nPostgresql was previously running on a single cpu linux machine with 2 gigs of memory and a single sata drive (v8.3).  Basically a desktop with linux on it.  I experienced slow performance.So, I finally moved it to a real server.  A dually zeon centos machine with 6 gigs of memory and raid 10, postgres 8.4.  But, I am now experiencing even worse performance issues.\nMy system is consistently highly transactional.  However, there is also regular complex queries and occasional bulk loads.On the new system the bulk loads are extremely slower than on the previous machine and so are the more complex queries.  The smaller transactional queries seem comparable but i had expected an improvement.  Performing a db import via psql -d databas -f dbfile illustrates this problem.  It takes 5 hours to run this import.  By contrast, if I perform this same exact import on my crappy windows box with only 2 gigs of memory and default postgres settings it takes 1 hour.  Same deal with the old linux machine.  How is this possible?\nHere are some of my key config settings:max_connections = 100 shared_buffers = 768MB          effective_cache_size = 2560MBwork_mem = 16MB                 maintenance_work_mem = 128MB    checkpoint_segments = 7         \n\ncheckpoint_timeout = 7min       checkpoint_completion_target = 0.5I have tried varying the shared_buffers size from 128 all the way to 1500mbs and got basically the same result.   Is there a setting change I should be considering?\nDoes 8.4 have performance problems or is this unique to me?  thanks", "msg_date": "Wed, 7 Jul 2010 16:06:12 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "performance on new linux box" }, { "msg_contents": "Ryan Wexler <[email protected]> writes:\n> Postgresql was previously running on a single cpu linux machine with 2 gigs\n> of memory and a single sata drive (v8.3). Basically a desktop with linux on\n> it. 
I experienced slow performance.\n\n> So, I finally moved it to a real server. A dually zeon centos machine with\n> 6 gigs of memory and raid 10, postgres 8.4. But, I am now experiencing even\n> worse performance issues.\n\nI'm wondering if you moved to a kernel+filesystem version that actually\nenforces fsync, from one that didn't. If so, the apparently faster\nperformance on the old box was being obtained at the cost of (lack of)\ncrash safety. That probably goes double for your windows-box comparison\npoint.\n\nYou could try test_fsync from the Postgres sources to confirm that\ntheory, or do some pgbench benchmarking to have more quantifiable\nnumbers.\n\nSee past discussions about write barriers in this list's archives for\nmore detail.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 2010 20:39:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box " }, { "msg_contents": "On Wed, Jul 7, 2010 at 4:06 PM, Ryan Wexler <[email protected]> wrote:\n> Postgresql was previously running on a single cpu linux machine with 2 gigs\n> of memory and a single sata drive (v8.3).  Basically a desktop with linux on\n> it.  I experienced slow performance.\n>\n> So, I finally moved it to a real server.  A dually zeon centos machine with\n> 6 gigs of memory and raid 10, postgres 8.4.  But, I am now experiencing even\n> worse performance issues.\n>\n> My system is consistently highly transactional.  However, there is also\n> regular complex queries and occasional bulk loads.\n>\n> On the new system the bulk loads are extremely slower than on the previous\n> machine and so are the more complex queries.  The smaller transactional\n> queries seem comparable but i had expected an improvement.  Performing a db\n> import via psql -d databas -f dbfile illustrates this problem.  It takes 5\n> hours to run this import.  By contrast, if I perform this same exact import\n> on my crappy windows box with only 2 gigs of memory and default postgres\n> settings it takes 1 hour.  Same deal with the old linux machine.  How is\n> this possible?\n>\n> Here are some of my key config settings:\n> max_connections = 100\n> shared_buffers = 768MB\n> effective_cache_size = 2560MB\n> work_mem = 16MB\n> maintenance_work_mem = 128MB\n> checkpoint_segments = 7\n> checkpoint_timeout = 7min\n> checkpoint_completion_target = 0.5\n>\n> I have tried varying the shared_buffers size from 128 all the way to 1500mbs\n> and got basically the same result.   Is there a setting change I should be\n> considering?\n>\n> Does 8.4 have performance problems or is this unique to me?\n>\n> thanks\n>\n>\n\nI think the most likely explanation is that the crappy box lied about\nfsync'ing data and your server is not. Did you purchase a raid card\nwith a bbu? If so, can you set the write cache policy to write-back?\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Wed, 7 Jul 2010 17:39:21 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 07/07/2010 06:06 PM, Ryan Wexler wrote:\n> Postgresql was previously running on a single cpu linux machine with 2 gigs of memory and a single sata drive (v8.3). Basically a desktop with linux on it. I experienced slow performance.\n>\n> So, I finally moved it to a real server. A dually zeon centos machine with 6 gigs of memory and raid 10, postgres 8.4. 
But, I am now experiencing even worse performance issues.\n>\n> My system is consistently highly transactional. However, there is also regular complex queries and occasional bulk loads.\n>\n> On the new system the bulk loads are extremely slower than on the previous machine and so are the more complex queries. The smaller transactional queries seem comparable but i had expected an improvement. Performing a db import via psql -d databas -f dbfile illustrates this problem. It takes 5 hours to run this import. By contrast, if I perform this same exact import on my crappy windows box with only 2 gigs of memory and default postgres settings it takes 1 hour. Same deal with the old linux machine. How is this possible?\n>\n> Here are some of my key config settings:\n> max_connections = 100\n> shared_buffers = 768MB\n> effective_cache_size = 2560MB\n> work_mem = 16MB\n> maintenance_work_mem = 128MB\n> checkpoint_segments = 7\n> checkpoint_timeout = 7min\n> checkpoint_completion_target = 0.5\n>\n> I have tried varying the shared_buffers size from 128 all the way to 1500mbs and got basically the same result. Is there a setting change I should be considering?\n>\n> Does 8.4 have performance problems or is this unique to me?\n>\n> thanks\n>\n\nYeah, I inherited a \"server\" (the quotes are sarcastic air quotes), with really bad disk IO... er.. really safe disk IO. Try the dd test. On my desktop I get 60-70 meg a second. On this \"server\" (I laugh) I got about 20. I had to go out of my way (way out) to enable the disk caching, and even then only got 50 meg a second.\n\nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n-Andy\n\n", "msg_date": "Wed, 07 Jul 2010 21:07:08 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "\n> On the new system the bulk loads are extremely slower than on the \n> previous\n> machine and so are the more complex queries. The smaller transactional\n> queries seem comparable but i had expected an improvement. Performing a \n> db\n> import via psql -d databas -f dbfile illustrates this problem.\n\nIf you use psql (not pg_restore) and your file contains no BEGIN/COMMIT \nstatements, you're probably doing 1 transaction per SQL command. As the \nothers say, if the old box lied about fsync, and the new one doesn't, \nperformance will suffer greatly. If this is the case, remember to do your \nimports the proper way : either use pg_restore, or group inserts in a \ntransaction, and build indexes in parallel.\n", "msg_date": "Thu, 08 Jul 2010 10:56:02 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Wed, Jul 7, 2010 at 10:07 PM, Andy Colson <[email protected]> wrote:\n\n> On 07/07/2010 06:06 PM, Ryan Wexler wrote:\n>\n>> Postgresql was previously running on a single cpu linux machine with 2\n>> gigs of memory and a single sata drive (v8.3). Basically a desktop with\n>> linux on it. I experienced slow performance.\n>>\n>> So, I finally moved it to a real server. A dually zeon centos machine\n>> with 6 gigs of memory and raid 10, postgres 8.4. But, I am now experiencing\n>> even worse performance issues.\n>>\n>> My system is consistently highly transactional. However, there is also\n>> regular complex queries and occasional bulk loads.\n>>\n>> On the new system the bulk loads are extremely slower than on the previous\n>> machine and so are the more complex queries. 
The smaller transactional\n>> queries seem comparable but i had expected an improvement. Performing a db\n>> import via psql -d databas -f dbfile illustrates this problem. It takes 5\n>> hours to run this import. By contrast, if I perform this same exact import\n>> on my crappy windows box with only 2 gigs of memory and default postgres\n>> settings it takes 1 hour. Same deal with the old linux machine. How is\n>> this possible?\n>>\n>> Here are some of my key config settings:\n>> max_connections = 100\n>> shared_buffers = 768MB\n>> effective_cache_size = 2560MB\n>> work_mem = 16MB\n>> maintenance_work_mem = 128MB\n>> checkpoint_segments = 7\n>> checkpoint_timeout = 7min\n>> checkpoint_completion_target = 0.5\n>>\n>> I have tried varying the shared_buffers size from 128 all the way to\n>> 1500mbs and got basically the same result. Is there a setting change I\n>> should be considering?\n>>\n>> Does 8.4 have performance problems or is this unique to me?\n>>\n>> thanks\n>>\n>>\n> Yeah, I inherited a \"server\" (the quotes are sarcastic air quotes), with\n> really bad disk IO... er.. really safe disk IO. Try the dd test. On my\n> desktop I get 60-70 meg a second. On this \"server\" (I laugh) I got about\n> 20. I had to go out of my way (way out) to enable the disk caching, and\n> even then only got 50 meg a second.\n>\n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm<http://www.westnet.com/%7Egsmith/content/postgresql/pg-disktesting.htm>\n\n\n\nFor about $2k - $3k, you can get a server that will do upwards of 300\nMB/sec, assuming the bulk of that cost goes to a good hardware-based RAID\ncontroller with a battery backed-up cache and some good 15k RPM SAS drives.\nSince it sounds like you are disk I/O bound, it's probably not worth it for\nyou to spend extra on CPU and memory. Sink the money into the disk array\ninstead. If you have an extra $4k more money in your budget, you might even\ntry 4 of these in a RAID 10:\n\nhttp://www.provantage.com/ocz-technology-oczssd2-2vtxex100g~7OCZT0L9.htm\n\n\n\n-- \nEliot Gable\n\nOn Wed, Jul 7, 2010 at 10:07 PM, Andy Colson <[email protected]> wrote:\nOn 07/07/2010 06:06 PM, Ryan Wexler wrote:\n\nPostgresql was previously running on a single cpu linux machine with 2 gigs of memory and a single sata drive (v8.3).  Basically a desktop with linux on it.  I experienced slow performance.\n\nSo, I finally moved it to a real server.  A dually zeon centos machine with 6 gigs of memory and raid 10, postgres 8.4.  But, I am now experiencing even worse performance issues.\n\nMy system is consistently highly transactional.  However, there is also regular complex queries and occasional bulk loads.\n\nOn the new system the bulk loads are extremely slower than on the previous machine and so are the more complex queries.  The smaller transactional queries seem comparable but i had expected an improvement.  Performing a db import via psql -d databas -f dbfile illustrates this problem.  It takes 5 hours to run this import.  By contrast, if I perform this same exact import on my crappy windows box with only 2 gigs of memory and default postgres settings it takes 1 hour.  Same deal with the old linux machine.  
How is this possible?\n\nHere are some of my key config settings:\nmax_connections = 100\nshared_buffers = 768MB\neffective_cache_size = 2560MB\nwork_mem = 16MB\nmaintenance_work_mem = 128MB\ncheckpoint_segments = 7\ncheckpoint_timeout = 7min\ncheckpoint_completion_target = 0.5\n\nI have tried varying the shared_buffers size from 128 all the way to 1500mbs and got basically the same result.   Is there a setting change I should be considering?\n\nDoes 8.4 have performance problems or is this unique to me?\n\nthanks\n\n\n\nYeah, I inherited a \"server\" (the quotes are sarcastic air quotes), with really bad disk IO... er.. really safe disk IO.  Try the dd test.  On my desktop I get 60-70 meg a second.  On this \"server\" (I laugh) I got about 20.  I had to go out of my way (way out) to enable the disk caching, and even then only got 50 meg a second.\n\nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\nFor about $2k - $3k, you can get a server that will do upwards of 300 MB/sec, assuming the bulk of that cost goes to a good hardware-based RAID controller with a battery backed-up cache and some good 15k RPM SAS drives. Since it sounds like you are disk I/O bound, it's probably not worth it for you to spend extra on CPU and memory. Sink the money into the disk array instead. If you have an extra $4k more money in your budget, you might even try 4 of these in a RAID 10:\nhttp://www.provantage.com/ocz-technology-oczssd2-2vtxex100g~7OCZT0L9.htm-- Eliot Gable", "msg_date": "Thu, 8 Jul 2010 09:45:45 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Eliot Gable <[email protected]> wrote:\n \n> For about $2k - $3k, you can get a server that will do upwards of\n> 300 MB/sec, assuming the bulk of that cost goes to a good\n> hardware-based RAID controller with a battery backed-up cache and\n> some good 15k RPM SAS drives.\n \nFWIW, I concur that the description so far suggests that this server\neither doesn't have a good RAID controller card with battery backed-\nup (BBU) cache, or that it isn't configured properly.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 08:53:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 8, 2010 at 9:53 AM, Kevin Grittner\n<[email protected]>wrote:\n\n> Eliot Gable <[email protected]<egable%[email protected]>>\n> wrote:\n>\n> > For about $2k - $3k, you can get a server that will do upwards of\n> > 300 MB/sec, assuming the bulk of that cost goes to a good\n> > hardware-based RAID controller with a battery backed-up cache and\n> > some good 15k RPM SAS drives.\n>\n> FWIW, I concur that the description so far suggests that this server\n> either doesn't have a good RAID controller card with battery backed-\n> up (BBU) cache, or that it isn't configured properly.\n>\n>\nOn another note, it is also entirely possible that just re-writing your\nqueries will completely solve your problem and make your performance\nbottleneck go away. Sometimes throwing hardware at a problem is not the best\n(or cheapest) solution. Personally, I would never throw hardware at a\nproblem until I am certain that I have everything else optimized as much as\npossible. One of the stored procedures I recently wrote in pl/pgsql was\noriginally chewing up my entire development box's processing capabilities at\njust 20 transactions per second. 
It's a pretty wimpy box, so I was not\nreally expecting a lot out of it. However, after spending several weeks\noptimizing my queries, I now have it doing twice as much work at 120\ntransactions per second on the same box. So, if I had thrown hardware at the\nproblem, I would have spent 12 times more on hardware than I need to spend\nnow for the same level of performance.\n\nIf you can post some of your queries, there are a lot of bright people on\nthis discussion list that can probably help you solve your bottleneck\nwithout spending a ton of money on new hardware. Obviously, there is no\nguarantee -- you might already be as optimized as you can get in your\nqueries, but I doubt it. Even after spending months tweaking my queries, I\nam still finding things here and there where I can get a bit more\nperformance out of them.\n\n-- \nEliot Gable\n\nOn Thu, Jul 8, 2010 at 9:53 AM, Kevin Grittner <[email protected]> wrote:\nEliot Gable <[email protected]> wrote:\n\n> For about $2k - $3k, you can get a server that will do upwards of\n> 300 MB/sec, assuming the bulk of that cost goes to a good\n> hardware-based RAID controller with a battery backed-up cache and\n> some good 15k RPM SAS drives.\n\nFWIW, I concur that the description so far suggests that this server\neither doesn't have a good RAID controller card with battery backed-\nup (BBU) cache, or that it isn't configured properly.\nOn another note, it is also entirely possible that just re-writing your queries will completely solve your problem and make your performance bottleneck go away. Sometimes throwing hardware at a problem is not the best (or cheapest) solution. Personally, I would never throw hardware at a problem until I am certain that I have everything else optimized as much as possible. One of the stored procedures I recently wrote in pl/pgsql was originally chewing up my entire development box's processing capabilities at just 20 transactions per second. It's a pretty wimpy box, so I was not really expecting a lot out of it. However, after spending several weeks optimizing my queries, I now have it doing twice as much work at 120 transactions per second on the same box. So, if I had thrown hardware at the problem, I would have spent 12 times more on hardware than I need to spend now for the same level of performance.\nIf you can post some of your queries, there are a lot of bright people on this discussion list that can probably help you solve your bottleneck without spending a ton of money on new hardware. Obviously, there is no guarantee -- you might already be as optimized as you can get in your queries, but I doubt it. 
Even after spending months tweaking my queries, I am still finding things here and there where I can get a bit more performance out of them.\n-- Eliot Gable", "msg_date": "Thu, 8 Jul 2010 10:38:33 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Eliot Gable <[email protected]> wrote:\n \n> If you can post some of your queries, there are a lot of bright\n> people on this discussion list that can probably help you solve\n> your bottleneck\n \nSure, but the original post was because the brand new server class\nmachine was performing much worse than the single-drive desktop\nmachine *on the same queries*, which seems like an issue worthy of\ninvestigation independently of what you suggest.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 10:02:35 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Thanks a lot for all the comments. The fact that both my windows box and\nthe old linux box both show a massive performance improvement over the new\nlinux box seems to point to hardware to me. I am not sure how to test the\nfsync issue, but i don't see how that could be it.\n\nThe raid card the server has in it is:\n3Ware 4 Port 9650SE-4LPML RAID Card\n\nLooking it up, it seems to indicate that it has BBU\n\nThe only other difference between the boxes is the postgresql version. The\nnew one has 8.4-2 from the yum install instructions on the site:\nhttp://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\n\nAny more thoughts?\n\nOn Thu, Jul 8, 2010 at 8:02 AM, Kevin Grittner\n<[email protected]>wrote:\n\n> Eliot Gable <[email protected]<egable%[email protected]>>\n> wrote:\n>\n> > If you can post some of your queries, there are a lot of bright\n> > people on this discussion list that can probably help you solve\n> > your bottleneck\n>\n> Sure, but the original post was because the brand new server class\n> machine was performing much worse than the single-drive desktop\n> machine *on the same queries*, which seems like an issue worthy of\n> investigation independently of what you suggest.\n>\n> -Kevin\n>\n\nThanks a lot for all the comments.  The fact that both my windows box and the old linux box both show a massive performance improvement over the new linux box seems to point to hardware to me.  I am not sure how to test the fsync issue, but i don't see how that could be it.\nThe raid card the server has in it is:3Ware 4 Port 9650SE-4LPML RAID CardLooking it up, it seems to indicate that it has BBUThe only other difference between the boxes is the postgresql version.  
The new one has 8.4-2 from the yum install instructions on the site:\nhttp://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\nAny more thoughts?On Thu, Jul 8, 2010 at 8:02 AM, Kevin Grittner <[email protected]> wrote:\nEliot Gable <[email protected]> wrote:\n\n> If you can post some of your queries, there are a lot of bright\n> people on this discussion list that can probably help you solve\n> your bottleneck\n\nSure, but the original post was because the brand new server class\nmachine was performing much worse than the single-drive desktop\nmachine *on the same queries*, which seems like an issue worthy of\ninvestigation independently of what you suggest.\n\n-Kevin", "msg_date": "Thu, 8 Jul 2010 09:31:32 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, 2010-07-08 at 09:31 -0700, Ryan Wexler wrote:\n> The raid card the server has in it is:\n> 3Ware 4 Port 9650SE-4LPML RAID Card\n> \n> Looking it up, it seems to indicate that it has BBU \n\nNo. It supports a BBU. It doesn't have one necessarily.\n\nYou need to go into your RAID BIOS. It will tell you.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\n\n", "msg_date": "Thu, 08 Jul 2010 10:04:05 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 08, 2010 at 09:31:32AM -0700, Ryan Wexler wrote:\n> Thanks a lot for all the comments. The fact that both my windows box and\n> the old linux box both show a massive performance improvement over the new\n> linux box seems to point to hardware to me. I am not sure how to test the\n> fsync issue, but i don't see how that could be it.\n> \n> The raid card the server has in it is:\n> 3Ware 4 Port 9650SE-4LPML RAID Card\n> \n> Looking it up, it seems to indicate that it has BBU\n\nBy \"looking it up\", I assume you mean running tw_cli and looking at\nthe output to make sure the bbu is enabled and the cache is turned on\nfor the raid array u0 or u1 ...?\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n", "msg_date": "Thu, 8 Jul 2010 17:05:05 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux boxeradmin(11983)i:\n\tSTATEMENT: update license set expires= '2010-06-15' where lic" }, { "msg_contents": "On 7/8/10 9:31 AM, Ryan Wexler wrote:\n> Thanks a lot for all the comments. The fact that both my windows box\n> and the old linux box both show a massive performance improvement over\n> the new linux box seems to point to hardware to me. I am not sure how\n> to test the fsync issue, but i don't see how that could be it.\n>\n> The raid card the server has in it is:\n> 3Ware 4 Port 9650SE-4LPML RAID Card\n>\n> Looking it up, it seems to indicate that it has BBU\n\nMake sure the battery isn't dead. 
Most RAID controllers drop to non-BBU speeds if they detect that the battery is faulty.\n\nCraig\n", "msg_date": "Thu, 08 Jul 2010 10:10:15 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "---------- Forwarded message ----------\nFrom: Ryan Wexler <[email protected]>\nDate: Thu, Jul 8, 2010 at 10:12 AM\nSubject: Re: [PERFORM] performance on new linux box\nTo: Craig James <[email protected]>\n\n\nOn Thu, Jul 8, 2010 at 10:10 AM, Craig James <[email protected]>wrote:\n\n> On 7/8/10 9:31 AM, Ryan Wexler wrote:\n>\n>> Thanks a lot for all the comments. The fact that both my windows box\n>> and the old linux box both show a massive performance improvement over\n>> the new linux box seems to point to hardware to me. I am not sure how\n>> to test the fsync issue, but i don't see how that could be it.\n>>\n>> The raid card the server has in it is:\n>> 3Ware 4 Port 9650SE-4LPML RAID Card\n>>\n>> Looking it up, it seems to indicate that it has BBU\n>>\n>\n> Make sure the battery isn't dead. Most RAID controllers drop to non-BBU\n> speeds if they detect that the battery is faulty.\n>\n> Craig\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nThanks. The server is hosted, so it is a bit of a hassle to figure this\nstuff out, but I am having someone check.\n\n---------- Forwarded message ----------From: Ryan Wexler <[email protected]>\nDate: Thu, Jul 8, 2010 at 10:12 AMSubject: Re: [PERFORM] performance on new linux boxTo: Craig James <[email protected]>\nOn Thu, Jul 8, 2010 at 10:10 AM, Craig James <[email protected]> wrote:\nOn 7/8/10 9:31 AM, Ryan Wexler wrote:\n\nThanks a lot for all the comments.  The fact that both my windows box\nand the old linux box both show a massive performance improvement over\nthe new linux box seems to point to hardware to me.  I am not sure how\nto test the fsync issue, but i don't see how that could be it.\n\nThe raid card the server has in it is:\n3Ware 4 Port 9650SE-4LPML RAID Card\n\nLooking it up, it seems to indicate that it has BBU\n\n\nMake sure the battery isn't dead.  Most RAID controllers drop to non-BBU speeds if they detect that the battery is faulty.\n\nCraig\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nThanks.  The server is hosted, so it is a bit of a hassle to figure this stuff out, but I am having someone check.", "msg_date": "Thu, 8 Jul 2010 10:16:47 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: performance on new linux box" }, { "msg_contents": "Thursday, July 8, 2010, 7:16:47 PM you wrote:\n\n> Thanks. The server is hosted, so it is a bit of a hassle to figure this\n> stuff out, but I am having someone check.\n\nIf you have root access to the machine, you should try 'tw_cli /cx show',\nwhere the x in /cx is the controller number. 
If not present on the machine,\nthe command-line-tools are available from 3ware in their download-section.\n\nYou should get an output showing something like this:\n\nName OnlineState BBUReady Status Volt Temp Hours LastCapTest\n---------------------------------------------------------------------------\nbbu On Yes OK OK OK 202 01-Jan-1970\n\nDon't ask why the 'LastCapTest' does not show a valid value, the bbu here \ncompleted the test successfully.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Thu, 8 Jul 2010 21:13:51 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 8, 2010 at 12:13 PM, Jochen Erwied <\[email protected]> wrote:\n\n> Thursday, July 8, 2010, 7:16:47 PM you wrote:\n>\n> > Thanks. The server is hosted, so it is a bit of a hassle to figure this\n> > stuff out, but I am having someone check.\n>\n> If you have root access to the machine, you should try 'tw_cli /cx show',\n> where the x in /cx is the controller number. If not present on the machine,\n> the command-line-tools are available from 3ware in their download-section.\n>\n> You should get an output showing something like this:\n>\n> Name OnlineState BBUReady Status Volt Temp Hours LastCapTest\n> ---------------------------------------------------------------------------\n> bbu On Yes OK OK OK 202 01-Jan-1970\n>\n> Don't ask why the 'LastCapTest' does not show a valid value, the bbu here\n> completed the test successfully.\n>\n> --\n> Jochen Erwied | home: [email protected] +49-208-38800-18, FAX:\n> -19\n> Sauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX:\n> -50\n> D-45470 Muelheim | mobile: [email protected]\n> +49-173-5404164\n>\n>\nThe twi_cli package doesn't appear to be installed. I will try to hunt it\ndown.\nHowever, I just verified with the hosting company that BBU is off on the\nraid controller. I am trying to find out my options, turn it on, different\ncard, etc...\n\nOn Thu, Jul 8, 2010 at 12:13 PM, Jochen Erwied <[email protected]> wrote:\nThursday, July 8, 2010, 7:16:47 PM you wrote:\n\n> Thanks.  The server is hosted, so it is a bit of a hassle to figure this\n> stuff out, but I am having someone check.\n\nIf you have root access to the machine, you should try 'tw_cli /cx show',\nwhere the x in /cx is the controller number. If not present on the machine,\nthe command-line-tools are available from 3ware in their download-section.\n\nYou should get an output showing something like this:\n\nName  OnlineState  BBUReady  Status    Volt     Temp     Hours  LastCapTest\n---------------------------------------------------------------------------\nbbu   On           Yes       OK        OK       OK       202    01-Jan-1970\n\nDon't ask why the 'LastCapTest' does not show a valid value, the bbu here\ncompleted the test successfully.\n\n--\nJochen Erwied     |   home: [email protected]     +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 |   work: [email protected]  +49-2151-7294-24, FAX: -50\nD-45470 Muelheim  | mobile: [email protected]       +49-173-5404164\n\nThe twi_cli package doesn't appear to be installed.  I will try to hunt it down.  However, I just verified with the hosting company that BBU is off on the raid controller.  
I am trying to find out my options, turn it on, different card, etc...", "msg_date": "Thu, 8 Jul 2010 12:18:20 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Ryan Wexler <[email protected]> wrote:\n \n> I just verified with the hosting company that BBU is off on the\n> raid controller. I am trying to find out my options, turn it on,\n> different card, etc...\n \nIn the \"etc.\" category, make sure that when you get it turned on,\nthe cache is configured for \"write back\" mode, not \"write through\"\nmode. Ideally (if you can't afford to lose the data), it will be\nconfigured to degrade to \"write through\" if the battery fails.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 14:28:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Thursday, July 8, 2010, 9:18:20 PM you wrote:\n\n> However, I just verified with the hosting company that BBU is off on the\n> raid controller. I am trying to find out my options, turn it on, different\n> card, etc...\n\nTurning it on requires the external BBU to be installed, so even if a 9650\nhas BBU support, it requires the hardware on a pluggable card.\n\nAnd even If the BBU is present, it requires to pass the selftest once until\nyou are able to turn on write caching.\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Thu, 8 Jul 2010 21:32:33 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 8, 2010 at 12:32 PM, Jochen Erwied <\[email protected]> wrote:\n\n> Thursday, July 8, 2010, 9:18:20 PM you wrote:\n>\n> > However, I just verified with the hosting company that BBU is off on the\n> > raid controller. I am trying to find out my options, turn it on,\n> different\n> > card, etc...\n>\n> Turning it on requires the external BBU to be installed, so even if a 9650\n> has BBU support, it requires the hardware on a pluggable card.\n>\n> And even If the BBU is present, it requires to pass the selftest once until\n> you are able to turn on write caching.\n>\n>\n> --\n> Jochen Erwied | home: [email protected] +49-208-38800-18, FAX:\n> -19\n> Sauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX:\n> -50\n> D-45470 Muelheim | mobile: [email protected]\n> +49-173-5404164\n>\n>\nOne thing I don't understand is why BBU will result in a huge performance\ngain. I thought BBU was all about power failures?\n\nOn Thu, Jul 8, 2010 at 12:32 PM, Jochen Erwied <[email protected]> wrote:\nThursday, July 8, 2010, 9:18:20 PM you wrote:\n\n> However, I just verified with the hosting company that BBU is off on the\n> raid controller.  I am trying to find out my options, turn it on, different\n> card, etc...\n\nTurning it on requires the external BBU to be installed, so even if a 9650\nhas BBU support, it requires the hardware on a pluggable card.\n\nAnd even If the BBU is present, it requires to pass the selftest once until\nyou are able to turn on write caching.\n\n\n--\nJochen Erwied     |   home: [email protected]     +49-208-38800-18, FAX: -19\nSauerbruchstr. 
17 |   work: [email protected]  +49-2151-7294-24, FAX: -50\nD-45470 Muelheim  | mobile: [email protected]       +49-173-5404164\n\nOne thing I don't understand is why BBU will result in a huge performance gain.  I thought BBU was all about power failures?", "msg_date": "Thu, 8 Jul 2010 12:37:58 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Jul 8, 2010, at 12:37 PM, Ryan Wexler wrote:\n\n> One thing I don't understand is why BBU will result in a huge performance gain. I thought BBU was all about power failures?\n\nWhen you have a working BBU, the raid card can safely do write caching. Without it, many raid cards are good about turning off write caching on the disks and refusing to do it themselves. (Safety over performance.)", "msg_date": "Thu, 8 Jul 2010 12:40:30 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Ryan Wexler <[email protected]> wrote:\n \n> One thing I don't understand is why BBU will result in a huge\n> performance gain. I thought BBU was all about power failures?\n \nWell, it makes it safe for the controller to consider the write\ncomplete as soon as it hits the RAM cache, rather than waiting for\npersistence to the disk itself. It can then schedule the writes in\na manner which is efficient based on the physical medium.\n \nSomething like this was probably happening on your non-server\nmachines, but without BBU it was not actually safe. Server class\nmachines tend to be more conservative about not losing your data,\nbut without a RAID controller with BBU cache, that slows writes down\nto the speed of the rotating disks.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 14:46:34 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Ryan Wexler <[email protected]> wrote:\n>\n> > One thing I don't understand is why BBU will result in a huge\n> > performance gain. I thought BBU was all about power failures?\n>\n> Well, it makes it safe for the controller to consider the write\n> complete as soon as it hits the RAM cache, rather than waiting for\n> persistence to the disk itself. It can then schedule the writes in\n> a manner which is efficient based on the physical medium.\n>\n> Something like this was probably happening on your non-server\n> machines, but without BBU it was not actually safe. Server class\n> machines tend to be more conservative about not losing your data,\n> but without a RAID controller with BBU cache, that slows writes down\n> to the speed of the rotating disks.\n>\n> -Kevin\n>\nThanks for the explanations that makes things clearer. It still amazes me\nthat it would account for a 5x change in IO.\n\nOn Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner <[email protected]> wrote:\nRyan Wexler <[email protected]> wrote:\n\n> One thing I don't understand is why BBU will result in a huge\n> performance gain.  I thought BBU was all about power failures?\n\nWell, it makes it safe for the controller to consider the write\ncomplete as soon as it hits the RAM cache, rather than waiting for\npersistence to the disk itself.  
It can then schedule the writes in\na manner which is efficient based on the physical medium.\n\nSomething like this was probably happening on your non-server\nmachines, but without BBU it was not actually safe.  Server class\nmachines tend to be more conservative about not losing your data,\nbut without a RAID controller with BBU cache, that slows writes down\nto the speed of the rotating disks.\n\n-Kevin\nThanks for the explanations that makes things clearer.  It still amazes me that it would account for a 5x change in IO.", "msg_date": "Thu, 8 Jul 2010 12:47:52 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 7/8/2010 1:47 PM, Ryan Wexler wrote:\n> Thanks for the explanations that makes things clearer. It still \n> amazes me that it would account for a 5x change in IO.\n\nThe buffering allows decoupling of the write rate from the disk rotation \nspeed.\nDisks don't spin that fast, at least not relative to the speed the CPU \nis running at.\n\n\n\n", "msg_date": "Thu, 08 Jul 2010 13:50:52 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Ryan Wexler <[email protected]> wrote:\n \n> It still amazes me that it would account for a 5x change in IO.\n \nIf you were doing one INSERT per database transaction, for instance,\nthat would not be at all surprising. If you were doing one COPY in\nof a million rows, it would be a bit more surprising.\n \nEach COMMIT of a database transaction, without caching, requires\nthat you wait for the disk to rotate around to the right position. \nCompared to the speed of RAM, that can take quite a long time. With\nwrite caching, you might write quite a few adjacent disk sectors to\nthe cache, which can then all be streamed to disk on one rotation. \nIt can also do tricks like writing a bunch of sectors on one part of\nthe disk before pulling the heads all the way over to another\nportion of the disk to write a bunch of sectors.\n \nIt is very good for performance to cache writes.\n \n-Kevin\n", "msg_date": "Thu, 08 Jul 2010 14:59:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 7/8/10 12:47 PM, Ryan Wexler wrote:\n>\n>\n> On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Ryan Wexler <[email protected] <mailto:[email protected]>>\n> wrote:\n>\n> > One thing I don't understand is why BBU will result in a huge\n> > performance gain. I thought BBU was all about power failures?\n>\n> Well, it makes it safe for the controller to consider the write\n> complete as soon as it hits the RAM cache, rather than waiting for\n> persistence to the disk itself. It can then schedule the writes in\n> a manner which is efficient based on the physical medium.\n>\n> Something like this was probably happening on your non-server\n> machines, but without BBU it was not actually safe. Server class\n> machines tend to be more conservative about not losing your data,\n> but without a RAID controller with BBU cache, that slows writes down\n> to the speed of the rotating disks.\n>\n> -Kevin\n>\n> Thanks for the explanations that makes things clearer. It still amazes\n> me that it would account for a 5x change in IO.\n\nIt's not exactly a 5x change in I/O, rather it's a 5x change in *transactions*. 
Without a BBU Postgres has to wait for each transaction to by physically written to the disk, which at 7200 RPM (or 10K or 15K) means a few hundred per second. Most of the time Postgres is just sitting there waiting for the disk to say, \"OK, I did it.\" With BBU, once the RAID card has the data, it's virtually guaranteed it will get to the disk even if the power fails, so the RAID controller says, \"OK, I did it\" even though the data is still in the controller's cache and not actually on the disk.\n\nIt means there's no tight relationship between the disk's rotational speed and your transaction rate.\n\nCraig\n", "msg_date": "Thu, 08 Jul 2010 14:01:34 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 8, 2010 at 12:13 PM, Jochen Erwied <\[email protected]> wrote:\n\n> Thursday, July 8, 2010, 7:16:47 PM you wrote:\n>\n> > Thanks. The server is hosted, so it is a bit of a hassle to figure this\n> > stuff out, but I am having someone check.\n>\n> If you have root access to the machine, you should try 'tw_cli /cx show',\n> where the x in /cx is the controller number. If not present on the machine,\n> the command-line-tools are available from 3ware in their download-section.\n>\n> You should get an output showing something like this:\n>\n> Name OnlineState BBUReady Status Volt Temp Hours LastCapTest\n> ---------------------------------------------------------------------------\n> bbu On Yes OK OK OK 202 01-Jan-1970\n>\n> Don't ask why the 'LastCapTest' does not show a valid value, the bbu here\n> completed the test successfully.\n>\n> --\n> Jochen Erwied | home: [email protected] +49-208-38800-18, FAX:\n> -19\n> Sauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX:\n> -50\n> D-45470 Muelheim | mobile: [email protected]\n> +49-173-5404164\n>\n>\nHere is what I got:\n# ./tw_cli /c0 show\n\nUnit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache\nAVrfy\n------------------------------------------------------------------------------\nu0 RAID-10 OK - - 64K 465.641 OFF ON\n\nPort Status Unit Size Blocks Serial\n---------------------------------------------------------------\np0 OK u0 233.81 GB 490350672 WD-WCAT1F502612\np1 OK u0 233.81 GB 490350672 WD-WCAT1F472718\np2 OK u0 233.81 GB 490350672 WD-WCAT1F216268\np3 OK u0 233.81 GB 490350672 WD-WCAT1F216528\n\nOn Thu, Jul 8, 2010 at 12:13 PM, Jochen Erwied <[email protected]> wrote:\nThursday, July 8, 2010, 7:16:47 PM you wrote:\n\n> Thanks.  The server is hosted, so it is a bit of a hassle to figure this\n> stuff out, but I am having someone check.\n\nIf you have root access to the machine, you should try 'tw_cli /cx show',\nwhere the x in /cx is the controller number. If not present on the machine,\nthe command-line-tools are available from 3ware in their download-section.\n\nYou should get an output showing something like this:\n\nName  OnlineState  BBUReady  Status    Volt     Temp     Hours  LastCapTest\n---------------------------------------------------------------------------\nbbu   On           Yes       OK        OK       OK       202    01-Jan-1970\n\nDon't ask why the 'LastCapTest' does not show a valid value, the bbu here\ncompleted the test successfully.\n\n--\nJochen Erwied     |   home: [email protected]     +49-208-38800-18, FAX: -19\nSauerbruchstr. 
17 |   work: [email protected]  +49-2151-7294-24, FAX: -50\nD-45470 Muelheim  | mobile: [email protected]       +49-173-5404164\n\nHere is what I got:# ./tw_cli /c0 showUnit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy------------------------------------------------------------------------------\nu0    RAID-10   OK             -       -       64K     465.641   OFF    ON\nPort   Status           Unit   Size        Blocks        Serial---------------------------------------------------------------p0     OK               u0     233.81 GB   490350672     WD-WCAT1F502612p1     OK               u0     233.81 GB   490350672     WD-WCAT1F472718\n\np2     OK               u0     233.81 GB   490350672     WD-WCAT1F216268p3     OK               u0     233.81 GB   490350672     WD-WCAT1F216528", "msg_date": "Thu, 8 Jul 2010 14:02:50 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "How does the linux machine know that there is a BBU installed and to\nchange its behavior or change the behavior of Postgres? I am\nexperiencing performance issues, not with searching but more with IO.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Craig James\nSent: Thursday, July 08, 2010 4:02 PM\nTo: [email protected]\nSubject: Re: [PERFORM] performance on new linux box\n\nOn 7/8/10 12:47 PM, Ryan Wexler wrote:\n>\n>\n> On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner\n> <[email protected] <mailto:[email protected]>>\nwrote:\n>\n> Ryan Wexler <[email protected] <mailto:[email protected]>>\n> wrote:\n>\n> > One thing I don't understand is why BBU will result in a huge\n> > performance gain. I thought BBU was all about power failures?\n>\n> Well, it makes it safe for the controller to consider the write\n> complete as soon as it hits the RAM cache, rather than waiting for\n> persistence to the disk itself. It can then schedule the writes\nin\n> a manner which is efficient based on the physical medium.\n>\n> Something like this was probably happening on your non-server\n> machines, but without BBU it was not actually safe. Server class\n> machines tend to be more conservative about not losing your data,\n> but without a RAID controller with BBU cache, that slows writes\ndown\n> to the speed of the rotating disks.\n>\n> -Kevin\n>\n> Thanks for the explanations that makes things clearer. It still\namazes\n> me that it would account for a 5x change in IO.\n\nIt's not exactly a 5x change in I/O, rather it's a 5x change in\n*transactions*. Without a BBU Postgres has to wait for each transaction\nto by physically written to the disk, which at 7200 RPM (or 10K or 15K)\nmeans a few hundred per second. 
Most of the time Postgres is just\nsitting there waiting for the disk to say, \"OK, I did it.\" With BBU,\nonce the RAID card has the data, it's virtually guaranteed it will get\nto the disk even if the power fails, so the RAID controller says, \"OK, I\ndid it\" even though the data is still in the controller's cache and not\nactually on the disk.\n\nIt means there's no tight relationship between the disk's rotational\nspeed and your transaction rate.\n\nCraig\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 8 Jul 2010 17:18:11 -0400", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Thursday, July 8, 2010, 11:02:50 PM you wrote:\n\n> Here is what I got:\n> # ./tw_cli /c0 show\n\nIf that's all you get, than there's no BBU installed, or not correctly\nconnected to the controller.\n\nYou could try 'tw_cli /c0/bbu show all' to be sure, but I doubt your output \nwill change-\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Thu, 8 Jul 2010 23:39:23 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 7/8/10 2:18 PM, [email protected] wrote:\n> How does the linux machine know that there is a BBU installed and to\n> change its behavior or change the behavior of Postgres? I am\n> experiencing performance issues, not with searching but more with IO.\n\nIt doesn't. It trusts the disk controller. Linux says, \"Flush your cache\" and the controller says, \"OK, it's flushed.\" In the case of a BBU controller, the controller can say that almost instantly because it's got the data in a battery-backed memory that will survive even if the power goes out. In the case of a non-BBU controller (RAID or non-RAID), the controller has to actually wait for the head to move to the right spot, then wait for the disk to spin around to the right sector, then write the data. Only then can it say, \"OK, it's flushed.\"\n\nSo to Linux, it just appears to be a disk that's exceptionally fast at flushing its buffers.\n\nCraig\n\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Craig James\n> Sent: Thursday, July 08, 2010 4:02 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] performance on new linux box\n>\n> On 7/8/10 12:47 PM, Ryan Wexler wrote:\n>>\n>>\n>> On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner\n>> <[email protected]<mailto:[email protected]>>\n> wrote:\n>>\n>> Ryan Wexler<[email protected]<mailto:[email protected]>>\n>> wrote:\n>>\n>> > One thing I don't understand is why BBU will result in a huge\n>> > performance gain. I thought BBU was all about power failures?\n>>\n>> Well, it makes it safe for the controller to consider the write\n>> complete as soon as it hits the RAM cache, rather than waiting for\n>> persistence to the disk itself. It can then schedule the writes\n> in\n>> a manner which is efficient based on the physical medium.\n>>\n>> Something like this was probably happening on your non-server\n>> machines, but without BBU it was not actually safe. 
Server class\n>> machines tend to be more conservative about not losing your data,\n>> but without a RAID controller with BBU cache, that slows writes\n> down\n>> to the speed of the rotating disks.\n>>\n>> -Kevin\n>>\n>> Thanks for the explanations that makes things clearer. It still\n> amazes\n>> me that it would account for a 5x change in IO.\n>\n> It's not exactly a 5x change in I/O, rather it's a 5x change in\n> *transactions*. Without a BBU Postgres has to wait for each transaction\n> to by physically written to the disk, which at 7200 RPM (or 10K or 15K)\n> means a few hundred per second. Most of the time Postgres is just\n> sitting there waiting for the disk to say, \"OK, I did it.\" With BBU,\n> once the RAID card has the data, it's virtually guaranteed it will get\n> to the disk even if the power fails, so the RAID controller says, \"OK, I\n> did it\" even though the data is still in the controller's cache and not\n> actually on the disk.\n>\n> It means there's no tight relationship between the disk's rotational\n> speed and your transaction rate.\n>\n> Craig\n>\n\n", "msg_date": "Thu, 08 Jul 2010 14:42:06 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 7/8/2010 3:18 PM, [email protected] wrote:\n> How does the linux machine know that there is a BBU installed and to\n> change its behavior or change the behavior of Postgres? I am\n> experiencing performance issues, not with searching but more with IO.\n> \nIt doesn't change its behavior at all. It's in the business of writing \nstuff to a file and waiting until that stuff has been put on the disk \n(it wants a durable write). What the write buffer/cache does is to \ninform the OS, and hence PG, that the write has been done when in fact \nit hasn't (yet). So the change in behavior is only to the extent that \nthe application doesn't spend as much time waiting.\n\n\n", "msg_date": "Thu, 08 Jul 2010 15:43:13 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 09/07/10 02:31, Ryan Wexler wrote:\n> Thanks a lot for all the comments. The fact that both my windows box\n> and the old linux box both show a massive performance improvement over\n> the new linux box seems to point to hardware to me. I am not sure how\n> to test the fsync issue, but i don't see how that could be it.\n>\n> The raid card the server has in it is:\n> 3Ware 4 Port 9650SE-4LPML RAID Card\n>\n> Looking it up, it seems to indicate that it has BBU\n>\n> The only other difference between the boxes is the postgresql\n> version. The new one has 8.4-2 from the yum install instructions on\n> the site:\n> http://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\n>\n> Any more thoughts?\nReally dumb idea, you don't happen to have the build of the RPM's that\nhad debug enabled do you? That resulted in significant performance problem?\n\nRegards\n\nRussell\n\n\n\n\n\n\nOn 09/07/10 02:31, Ryan Wexler wrote:\nThanks a lot for all the comments.  The fact that both my\nwindows box and the old linux box both show a massive performance\nimprovement over the new linux box seems to point to hardware to me.  
I\nam not sure how to test the fsync issue, but i don't see how that could\nbe it.\n\nThe raid card the server has in it is:\n3Ware 4 Port 9650SE-4LPML RAID Card\n\nLooking it up, it seems to indicate that it has BBU\n\nThe only other difference between the boxes is the postgresql version. \nThe new one has 8.4-2 from the yum install instructions on the site:\n\n\n\n\n\nhttp://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\n\nAny more thoughts?\n\nReally dumb idea, you don't happen to have the build of the RPM's that\nhad debug enabled do you?  That resulted in significant performance\nproblem?\n\nRegards\n\nRussell", "msg_date": "Fri, 09 Jul 2010 19:08:12 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Fri, Jul 9, 2010 at 2:08 AM, Russell Smith <[email protected]> wrote:\n> On 09/07/10 02:31, Ryan Wexler wrote:\n>\n>\n> The only other difference between the boxes is the postgresql version.  The\n> new one has 8.4-2 from the yum install instructions on the site:\n> http://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\n>\n> Any more thoughts?\n>\n> Really dumb idea, you don't happen to have the build of the RPM's that had\n> debug enabled do you?  That resulted in significant performance problem?\n>\n\nThe OP mentions that the new system underperforms on a straight dd\ntest, so it isn't the database config or postgres build.\n", "msg_date": "Fri, 9 Jul 2010 02:38:40 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Fri, Jul 9, 2010 at 2:38 AM, Samuel Gendler <[email protected]>wrote:\n\n> On Fri, Jul 9, 2010 at 2:08 AM, Russell Smith <[email protected]> wrote:\n> > On 09/07/10 02:31, Ryan Wexler wrote:\n> >\n> >\n> > The only other difference between the boxes is the postgresql version.\n> The\n> > new one has 8.4-2 from the yum install instructions on the site:\n> > http://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\n> >\n> > Any more thoughts?\n> >\n> > Really dumb idea, you don't happen to have the build of the RPM's that\n> had\n> > debug enabled do you? That resulted in significant performance problem?\n> >\n>\n> The OP mentions that the new system underperforms on a straight dd\n> test, so it isn't the database config or postgres build.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nWell I got me a new raid card, MegaRAID 8708EM2, fully equipped with BBU and\nread and write caching are enabled. It completely solved my performance\nproblems. Now everything is way faster than the previous server. Thanks\nfor all the help everyone.\n\nOne question I do have is this card has a setting called Read Policy which\napparently helps with sequentially reads. Do you think that is something I\nshould enable?\n\nOn Fri, Jul 9, 2010 at 2:38 AM, Samuel Gendler <[email protected]> wrote:\nOn Fri, Jul 9, 2010 at 2:08 AM, Russell Smith <[email protected]> wrote:\n> On 09/07/10 02:31, Ryan Wexler wrote:\n>\n>\n> The only other difference between the boxes is the postgresql version.  The\n> new one has 8.4-2 from the yum install instructions on the site:\n> http://yum.pgrpms.org/reporpms/repoview/pgdg-centos.html\n>\n> Any more thoughts?\n>\n> Really dumb idea, you don't happen to have the build of the RPM's that had\n> debug enabled do you?  
That resulted in significant performance problem?\n>\n\nThe OP mentions that the new system underperforms on a straight dd\ntest, so it isn't the database config or postgres build.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nWell I got me a new raid card, MegaRAID 8708EM2, fully equipped with BBU and read and write caching are enabled.  It completely solved my performance problems.  Now everything is way faster than the previous server.  Thanks for all the help everyone.\nOne question I do have is this card has a setting called Read Policy which apparently helps with sequentially reads.  Do you think that is something I should enable?", "msg_date": "Sun, 11 Jul 2010 13:02:50 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Ryan Wexler wrote:\n> One question I do have is this card has a setting called Read Policy \n> which apparently helps with sequentially reads. Do you think that is \n> something I should enable?\n\nLinux will do some amount of read-ahead in a similar way on its own. \nYou run \"blockdev --getra\" and \"blockdev --setra\" on each disk device on \nthe system to see the settings and increase them. I've found that \ntweaking there, where you can control exactly the amount of readahead, \nto be more effective than relying on the less tunable Read Policy modes \nin RAID cards that do something similar. That said, it doesn't seem to \nhurt to use both on the LSI card you have; giving more information there \nto the controller for its use in optimizing how it caches things, by \nchanging to the more aggressive Read Policy setting, hasn't ever \ndegraded results significantly when I've tried.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 12 Jul 2010 11:23:16 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 07/11/2010 03:02 PM, Ryan Wexler wrote:\n\n>\n> Well I got me a new raid card, MegaRAID 8708EM2, fully equipped with\n> BBU and read and write caching are enabled. It completely solved my\n> performance problems. Now everything is way faster than the previous\n> server. Thanks for all the help everyone.\n>\n> One question I do have is this card has a setting called Read Policy\n> which apparently helps with sequentially reads. Do you think that is\n> something I should enable?\n>\n>\n>\n\nI would think it depends on your usage. If you use clustered indexes (and understand how/when they help) then enabling it would help (cuz clustered is assuming sequential reads).\n\nor if you seq scan a table, it might help (as long as the table is stored relatively close together).\n\nBut if you have a big db, that doesnt fit into cache, and you bounce all over the place doing seeks, I doubt it'll help.\n\n-Andy\n", "msg_date": "Tue, 13 Jul 2010 18:26:09 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "But none of this explains why a 4-disk raid 10 is slower than a 1 disk system. If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n\nUnless that one-disk setup turned off fsync() or was configured with synchronous_commit off. 
Even low end laptop drives don't lie these days about a cache flush or sync() -- OS's/file systems can, and some SSD's do.\n\nIf loss of a transaction during a power failure is OK, then just turn synchronous_commit off and get the performance back. The discussion about transaction rate being limited by the disks is related to that, and its not necessary _IF_ its ok to lose a transaction if the power fails. For most applications, losing a transaction or two in a power failure is fine. Obviously, its not with financial transactions or other such work.\n \n\nOn Jul 8, 2010, at 2:42 PM, Craig James wrote:\n\n> On 7/8/10 2:18 PM, [email protected] wrote:\n>> How does the linux machine know that there is a BBU installed and to\n>> change its behavior or change the behavior of Postgres? I am\n>> experiencing performance issues, not with searching but more with IO.\n> \n> It doesn't. It trusts the disk controller. Linux says, \"Flush your cache\" and the controller says, \"OK, it's flushed.\" In the case of a BBU controller, the controller can say that almost instantly because it's got the data in a battery-backed memory that will survive even if the power goes out. In the case of a non-BBU controller (RAID or non-RAID), the controller has to actually wait for the head to move to the right spot, then wait for the disk to spin around to the right sector, then write the data. Only then can it say, \"OK, it's flushed.\"\n> \n> So to Linux, it just appears to be a disk that's exceptionally fast at flushing its buffers.\n> \n> Craig\n> \n>> \n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of Craig James\n>> Sent: Thursday, July 08, 2010 4:02 PM\n>> To: [email protected]\n>> Subject: Re: [PERFORM] performance on new linux box\n>> \n>> On 7/8/10 12:47 PM, Ryan Wexler wrote:\n>>> \n>>> \n>>> On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner\n>>> <[email protected]<mailto:[email protected]>>\n>> wrote:\n>>> \n>>> Ryan Wexler<[email protected]<mailto:[email protected]>>\n>>> wrote:\n>>> \n>>>> One thing I don't understand is why BBU will result in a huge\n>>>> performance gain. I thought BBU was all about power failures?\n>>> \n>>> Well, it makes it safe for the controller to consider the write\n>>> complete as soon as it hits the RAM cache, rather than waiting for\n>>> persistence to the disk itself. It can then schedule the writes\n>> in\n>>> a manner which is efficient based on the physical medium.\n>>> \n>>> Something like this was probably happening on your non-server\n>>> machines, but without BBU it was not actually safe. Server class\n>>> machines tend to be more conservative about not losing your data,\n>>> but without a RAID controller with BBU cache, that slows writes\n>> down\n>>> to the speed of the rotating disks.\n>>> \n>>> -Kevin\n>>> \n>>> Thanks for the explanations that makes things clearer. It still\n>> amazes\n>>> me that it would account for a 5x change in IO.\n>> \n>> It's not exactly a 5x change in I/O, rather it's a 5x change in\n>> *transactions*. Without a BBU Postgres has to wait for each transaction\n>> to by physically written to the disk, which at 7200 RPM (or 10K or 15K)\n>> means a few hundred per second. 
Most of the time Postgres is just\n>> sitting there waiting for the disk to say, \"OK, I did it.\" With BBU,\n>> once the RAID card has the data, it's virtually guaranteed it will get\n>> to the disk even if the power fails, so the RAID controller says, \"OK, I\n>> did it\" even though the data is still in the controller's cache and not\n>> actually on the disk.\n>> \n>> It means there's no tight relationship between the disk's rotational\n>> speed and your transaction rate.\n>> \n>> Craig\n>> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 14 Jul 2010 18:57:52 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:\n\n> But none of this explains why a 4-disk raid 10 is slower than a 1 disk system. If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n\nMany raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n\nTake away the controller, and most OS's by default enable the write cache on the drive. You can turn it off if you want, but if you know how to do that, then you're probably also the same kind of person that would have purchased a raid card with a BBU.", "msg_date": "Wed, 14 Jul 2010 19:50:28 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Wed, Jul 14, 2010 at 6:57 PM, Scott Carey <[email protected]>wrote:\n\n> But none of this explains why a 4-disk raid 10 is slower than a 1 disk\n> system. If there is no write-back caching on the RAID, it should still be\n> similar to the one disk setup.\n>\n> Unless that one-disk setup turned off fsync() or was configured with\n> synchronous_commit off. Even low end laptop drives don't lie these days\n> about a cache flush or sync() -- OS's/file systems can, and some SSD's do.\n>\n> If loss of a transaction during a power failure is OK, then just turn\n> synchronous_commit off and get the performance back. The discussion about\n> transaction rate being limited by the disks is related to that, and its not\n> necessary _IF_ its ok to lose a transaction if the power fails. For most\n> applications, losing a transaction or two in a power failure is fine.\n> Obviously, its not with financial transactions or other such work.\n>\n>\n> On Jul 8, 2010, at 2:42 PM, Craig James wrote:\n>\n> > On 7/8/10 2:18 PM, [email protected] wrote:\n> >> How does the linux machine know that there is a BBU installed and to\n> >> change its behavior or change the behavior of Postgres? I am\n> >> experiencing performance issues, not with searching but more with IO.\n> >\n> > It doesn't. It trusts the disk controller. Linux says, \"Flush your\n> cache\" and the controller says, \"OK, it's flushed.\" In the case of a BBU\n> controller, the controller can say that almost instantly because it's got\n> the data in a battery-backed memory that will survive even if the power goes\n> out. 
In the case of a non-BBU controller (RAID or non-RAID), the controller\n> has to actually wait for the head to move to the right spot, then wait for\n> the disk to spin around to the right sector, then write the data. Only then\n> can it say, \"OK, it's flushed.\"\n> >\n> > So to Linux, it just appears to be a disk that's exceptionally fast at\n> flushing its buffers.\n> >\n> > Craig\n> >\n> >>\n> >> -----Original Message-----\n> >> From: [email protected]\n> >> [mailto:[email protected]] On Behalf Of Craig\n> James\n> >> Sent: Thursday, July 08, 2010 4:02 PM\n> >> To: [email protected]\n> >> Subject: Re: [PERFORM] performance on new linux box\n> >>\n> >> On 7/8/10 12:47 PM, Ryan Wexler wrote:\n> >>>\n> >>>\n> >>> On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner\n> >>> <[email protected]<mailto:[email protected]>>\n> >> wrote:\n> >>>\n> >>> Ryan Wexler<[email protected]<mailto:[email protected]>>\n> >>> wrote:\n> >>>\n> >>>> One thing I don't understand is why BBU will result in a huge\n> >>>> performance gain. I thought BBU was all about power failures?\n> >>>\n> >>> Well, it makes it safe for the controller to consider the write\n> >>> complete as soon as it hits the RAM cache, rather than waiting for\n> >>> persistence to the disk itself. It can then schedule the writes\n> >> in\n> >>> a manner which is efficient based on the physical medium.\n> >>>\n> >>> Something like this was probably happening on your non-server\n> >>> machines, but without BBU it was not actually safe. Server class\n> >>> machines tend to be more conservative about not losing your data,\n> >>> but without a RAID controller with BBU cache, that slows writes\n> >> down\n> >>> to the speed of the rotating disks.\n> >>>\n> >>> -Kevin\n> >>>\n> >>> Thanks for the explanations that makes things clearer. It still\n> >> amazes\n> >>> me that it would account for a 5x change in IO.\n> >>\n> >> It's not exactly a 5x change in I/O, rather it's a 5x change in\n> >> *transactions*. Without a BBU Postgres has to wait for each transaction\n> >> to by physically written to the disk, which at 7200 RPM (or 10K or 15K)\n> >> means a few hundred per second. Most of the time Postgres is just\n> >> sitting there waiting for the disk to say, \"OK, I did it.\" With BBU,\n> >> once the RAID card has the data, it's virtually guaranteed it will get\n> >> to the disk even if the power fails, so the RAID controller says, \"OK, I\n> >> did it\" even though the data is still in the controller's cache and not\n> >> actually on the disk.\n> >>\n> >> It means there's no tight relationship between the disk's rotational\n> >> speed and your transaction rate.\n> >>\n> >> Craig\n> >>\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSomething was clearly wrong with my former raid card. Frankly, I am not\nsure if it was configuration or simply hardware failure. The server is\nhosted so I only had so much access. But the card was swapped out with a\nnew one and now performance is quite good. I am just trying to tune the new\ncard now.\nthanks for all the input\n\nOn Wed, Jul 14, 2010 at 6:57 PM, Scott Carey <[email protected]> wrote:\nBut none of this explains why a 4-disk raid 10 is slower than a 1 disk system.  
If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n\nUnless that one-disk setup turned off fsync() or was configured with synchronous_commit off.  Even low end laptop drives don't lie these days about a cache flush or sync() -- OS's/file systems can, and some SSD's do.\n\nIf loss of a transaction during a power failure is OK, then just turn synchronous_commit off and get the performance back.  The discussion about transaction rate being limited by the disks is related to that, and its not necessary _IF_ its ok to lose a transaction if the power fails.  For most applications, losing a transaction or two in a power failure is fine.  Obviously, its not with financial transactions or other such work.\n\n\nOn Jul 8, 2010, at 2:42 PM, Craig James wrote:\n\n> On 7/8/10 2:18 PM, [email protected] wrote:\n>> How does the linux machine know that there is a BBU installed and to\n>> change its behavior or change the behavior of Postgres? I am\n>> experiencing performance issues, not with searching but more with IO.\n>\n> It doesn't.  It trusts the disk controller.  Linux says, \"Flush your cache\" and the controller says, \"OK, it's flushed.\"  In the case of a BBU controller, the controller can say that almost instantly because it's got the data in a battery-backed memory that will survive even if the power goes out.  In the case of a non-BBU controller (RAID or non-RAID), the controller has to actually wait for the head to move to the right spot, then wait for the disk to spin around to the right sector, then write the data.  Only then can it say, \"OK, it's flushed.\"\n\n>\n> So to Linux, it just appears to be a disk that's exceptionally fast at flushing its buffers.\n>\n> Craig\n>\n>>\n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of Craig James\n>> Sent: Thursday, July 08, 2010 4:02 PM\n>> To: [email protected]\n>> Subject: Re: [PERFORM] performance on new linux box\n>>\n>> On 7/8/10 12:47 PM, Ryan Wexler wrote:\n>>>\n>>>\n>>> On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner\n>>> <[email protected]<mailto:[email protected]>>\n>> wrote:\n>>>\n>>>     Ryan Wexler<[email protected]<mailto:[email protected]>>\n>>>     wrote:\n>>>\n>>>> One thing I don't understand is why BBU will result in a huge\n>>>> performance gain.  I thought BBU was all about power failures?\n>>>\n>>>     Well, it makes it safe for the controller to consider the write\n>>>     complete as soon as it hits the RAM cache, rather than waiting for\n>>>     persistence to the disk itself.  It can then schedule the writes\n>> in\n>>>     a manner which is efficient based on the physical medium.\n>>>\n>>>     Something like this was probably happening on your non-server\n>>>     machines, but without BBU it was not actually safe.  Server class\n>>>     machines tend to be more conservative about not losing your data,\n>>>     but without a RAID controller with BBU cache, that slows writes\n>> down\n>>>     to the speed of the rotating disks.\n>>>\n>>>     -Kevin\n>>>\n>>> Thanks for the explanations that makes things clearer.  It still\n>> amazes\n>>> me that it would account for a 5x change in IO.\n>>\n>> It's not exactly a 5x change in I/O, rather it's a 5x change in\n>> *transactions*.  Without a BBU Postgres has to wait for each transaction\n>> to by physically written to the disk, which at 7200 RPM (or 10K or 15K)\n>> means a few hundred per second.  
Most of the time Postgres is just\n>> sitting there waiting for the disk to say, \"OK, I did it.\"  With BBU,\n>> once the RAID card has the data, it's virtually guaranteed it will get\n>> to the disk even if the power fails, so the RAID controller says, \"OK, I\n>> did it\" even though the data is still in the controller's cache and not\n>> actually on the disk.\n>>\n>> It means there's no tight relationship between the disk's rotational\n>> speed and your transaction rate.\n>>\n>> Craig\n>>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nSomething was clearly wrong with my former raid card.  Frankly, I am\nnot sure if it was configuration or simply hardware failure.  The server is hosted so I only had so much access.  But the card\nwas swapped out with a new one and now performance is quite good.  I am just trying to tune the new card now.thanks for all the input", "msg_date": "Wed, 14 Jul 2010 23:18:10 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "\nOn Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:\n\n> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:\n> \n>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk system. If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n> \n> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n\nThis does not make sense.\nWrite caching on all hard drives in the last decade are safe because they support a write cache flush command properly. If the card is \"smart\" it would issue the drive's write cache flush command to fulfill an fsync() or barrier request with no BBU.\n \n> \n> Take away the controller, and most OS's by default enable the write cache on the drive. You can turn it off if you want, but if you know how to do that, then you're probably also the same kind of person that would have purchased a raid card with a BBU.\n\nSure, or you can use an OS/File System combination that respects fsync() which will call the drive's write cache flush. There are some issues with certain file systems and barriers for file system metadata, but for the WAL log, we're only dalking about fdatasync() equivalency, which most file systems do just fine even with a drive's write cache on.\n\n", "msg_date": "Thu, 15 Jul 2010 09:30:24 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n\n>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n> \n> This does not make sense.\n> Write caching on all hard drives in the last decade are safe because they support a write cache flush command properly. 
If the card is \"smart\" it would issue the drive's write cache flush command to fulfill an fsync() or barrier request with no BBU.\n\nYou're missing the point. If the power dies suddenly, there's no time to flush any cache anywhere. That's the entire point of the BBU - it keeps the RAM powered up on the raid card. It doesn't keep the disks spinning long enough to flush caches.", "msg_date": "Thu, 15 Jul 2010 12:35:16 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Jul 15, 2010, at 12:40 PM, Ryan Wexler wrote:\n\n> On Wed, Jul 14, 2010 at 7:50 PM, Ben Chobot <[email protected]> wrote:\n> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:\n> \n> > But none of this explains why a 4-disk raid 10 is slower than a 1 disk system. If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n> \n> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n> \n> Take away the controller, and most OS's by default enable the write cache on the drive. You can turn it off if you want, but if you know how to do that, then you're probably also the same kind of person that would have purchased a raid card with a BBU.\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> Ben I don't quite follow your message. Could you spell it out a little clearer for me?\n> thanks\n> -ryan\n\n\nMost (all?) hard drives have cache built into them. Many raid cards have cache built into them. When the power dies, all the data in any cache is lost, which is why it's dangerous to use it for write caching. For that reason, you can attach a BBU to a raid card which keeps the cache alive until the power is restored (hopefully). But no hard drive I am aware of lets you attach a battery, so using a hard drive's cache for write caching will always be dangerous.\n\nThat's why many raid cards will always disable write caching on the hard drives themselves, and only enable write caching using their own memory when a BBU is installed. \n\nDoes that make more sense?\n\n\nOn Jul 15, 2010, at 12:40 PM, Ryan Wexler wrote:On Wed, Jul 14, 2010 at 7:50 PM, Ben Chobot <[email protected]> wrote:\nOn Jul 14, 2010, at 6:57 PM, Scott Carey wrote:\n\n> But none of this explains why a 4-disk raid 10 is slower than a 1 disk system.  If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n\nMany raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n\nTake away the controller, and most OS's by default enable the write cache on the drive. You can turn it off if you want, but if you know how to do that, then you're probably also the same kind of person that would have purchased a raid card with a BBU.\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nBen I don't quite follow your message.   Could you spell it out a little clearer for me?thanks-ryan\nMost (all?) 
hard drives have cache built into them. Many raid cards have cache built into them. When the power dies, all the data in any cache is lost, which is why it's dangerous to use it for write caching. For that reason, you can attach a BBU to a raid card which keeps the cache alive until the power is restored (hopefully). But no hard drive I am aware of lets you attach a battery, so using a hard drive's cache for write caching will always be dangerous.That's why many raid cards will always disable write caching on the hard drives themselves, and only enable write caching using their own memory when a BBU is installed. Does that make more sense?", "msg_date": "Thu, 15 Jul 2010 12:49:38 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot <[email protected]> wrote:\n\n> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n>\n> >> Many raid controllers are smart enough to always turn off write caching\n> on the drives, and also disable the feature on their own buffer without a\n> BBU. Add a BBU, and the cache on the controller starts getting used, but\n> *not* the cache on the drives.\n> >\n> > This does not make sense.\n> > Write caching on all hard drives in the last decade are safe because they\n> support a write cache flush command properly. If the card is \"smart\" it\n> would issue the drive's write cache flush command to fulfill an fsync() or\n> barrier request with no BBU.\n>\n> You're missing the point. If the power dies suddenly, there's no time to\n> flush any cache anywhere. That's the entire point of the BBU - it keeps the\n> RAM powered up on the raid card. It doesn't keep the disks spinning long\n> enough to flush caches.\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSo you are saying write caching is a dangerous proposition on a raid card\nwith or without BBU?\n\nOn Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot <[email protected]> wrote:\nOn Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n\n>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n\n>\n> This does not make sense.\n> Write caching on all hard drives in the last decade are safe because they support a write cache flush command properly.  If the card is \"smart\" it would issue the drive's write cache flush command to fulfill an fsync() or barrier request with no BBU.\n\nYou're missing the point. If the power dies suddenly, there's no time to flush any cache anywhere. That's the entire point of the BBU - it keeps the RAM powered up on the raid card. 
It doesn't keep the disks spinning long enough to flush caches.\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nSo you are saying write caching is a dangerous proposition on a raid card with or without BBU?", "msg_date": "Thu, 15 Jul 2010 14:40:20 -0700", "msg_from": "Ryan Wexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Jul 15, 2010, at 2:40 PM, Ryan Wexler wrote:\n\n> On Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot <[email protected]> wrote:\n> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n> \n> >> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n> >\n> > This does not make sense.\n> > Write caching on all hard drives in the last decade are safe because they support a write cache flush command properly. If the card is \"smart\" it would issue the drive's write cache flush command to fulfill an fsync() or barrier request with no BBU.\n> \n> You're missing the point. If the power dies suddenly, there's no time to flush any cache anywhere. That's the entire point of the BBU - it keeps the RAM powered up on the raid card. It doesn't keep the disks spinning long enough to flush caches.\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> So you are saying write caching is a dangerous proposition on a raid card with or without BBU?\n\n\nEr, no, sorry, I am not being very clear it seems. \n\n\nUsing a cache for write caching is dangerous, unless you protect it with a battery. Caches on a raid card can be protected by a BBU, so, when you use a BBU, write caching on the raid card is safe. (Just don't read the firmware changelog for your raid card or you will always be paranoid.) If you don't have a BBU, many raid cards default to disabling caching. You can still enable it, but the card will often tell you it's a bad idea.\n\nThere are also caches on all your disk drives. Write caching there is always dangerous, which is why almost all raid cards always disable the hard drive write caching, with or without a BBU. I'm not even sure how many raid cards let you enable the write cache on a drive... hopefully, not many.\nOn Jul 15, 2010, at 2:40 PM, Ryan Wexler wrote:On Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot <[email protected]> wrote:\nOn Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n\n>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n\n>\n> This does not make sense.\n> Write caching on all hard drives in the last decade are safe because they support a write cache flush command properly.  If the card is \"smart\" it would issue the drive's write cache flush command to fulfill an fsync() or barrier request with no BBU.\n\nYou're missing the point. If the power dies suddenly, there's no time to flush any cache anywhere. That's the entire point of the BBU - it keeps the RAM powered up on the raid card. 
It doesn't keep the disks spinning long enough to flush caches.\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nSo you are saying write caching is a dangerous proposition on a raid card with or without BBU?\nEr, no, sorry, I am not being very clear it seems. Using a cache for write caching is dangerous, unless you protect it with a battery. Caches on a raid card can be protected by a BBU, so, when you use a BBU, write caching on the raid card is safe. (Just don't read the firmware changelog for your raid card or you will always be paranoid.) If you don't have a BBU, many raid cards default to disabling caching. You can still enable it, but the card will often tell you it's a bad idea.There are also caches on all your disk drives. Write caching there is always dangerous, which is why almost all raid cards always disable the hard drive write caching, with or without a BBU. I'm not even sure how many raid cards let you enable the write cache on a drive... hopefully, not many.", "msg_date": "Thu, 15 Jul 2010 15:18:45 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "\n> Most (all?) hard drives have cache built into them. Many raid cards have \n> cache built into them. When the power dies, all the data in any cache is \n> lost, which is why it's dangerous to use it for write caching. For that \n> reason, you can attach a BBU to a raid card which keeps the cache alive \n> until the power is restored (hopefully). But no hard drive I am aware of \n> lets you attach a battery, so using a hard drive's cache for write \n> caching will always be dangerous.\n>\n> That's why many raid cards will always disable write caching on the hard \n> drives themselves, and only enable write caching using their own memory \n> when a BBU is installed.\n>\n> Does that make more sense?\n>\n\nActually write cache is only dangerous if the OS and postgres think some \nstuff is written to the disk when in fact it is only in the cache and not \nwritten yet. When power is lost, cache contents are SUPPOSED to be lost. \nIn a normal situation, postgres and the OS assume nothing is written to \nthe disk (ie, it may be in cache not on disk) until a proper cache flush \nis issued and responded to by the hardware. That's what xlog and journals \nare for. If the hardware doesn't lie, and the kernel/FS doesn't have any \nbugs, no problem. You can't get decent write performance on rotating media \nwithout a write cache somewhere...\n\n", "msg_date": "Fri, 16 Jul 2010 00:57:28 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Thu, Jul 15, 2010 at 10:30 AM, Scott Carey <[email protected]> wrote:\n>\n> On Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:\n>\n>> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:\n>>\n>>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk system.  If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n>>\n>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. 
Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n>\n> This does not make sense.\n\nBasically, you can have cheap, fast and dangerous (drive with write\ncache enabled, which responds positively to fsync even when it hasn't\nactually fsynced the data). You can have cheap, slow and safe with a\ndrive that has a cache but since it'll be fsyncing it all the time\nthe write cache won't actually get used, or fast, expensive, and safe,\nwhich is what a BBU RAID card gets by saying the data is fsynced when\nit's actually just in cache, but a safe cache that won't get lost on\npower down.\n\nI don't find it that complicated.\n", "msg_date": "Thu, 15 Jul 2010 19:22:58 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "\nOn Jul 15, 2010, at 12:35 PM, Ben Chobot wrote:\n\n> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n> \n>>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. 
Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n>> \n>> This does not make sense.\n> \n> Basically, you can have cheap, fast and dangerous (drive with write\n> cache enabled, which responds positively to fsync even when it hasn't\n> actually fsynced the data. You can have cheap, slow and safe with a\n> drive that has a cache but since it'll be fsyncing it all the the time\n> the write cache won't actually get used, or fast, expensive, and safe,\n> which is what a BBU RAID card gets by saying the data is fsynced when\n> it's actually just in cache, but a safe cache that won't get lost on\n> power down.\n> \n> I don't find it that complicated.\n\nIt doesn't make sense that a raid 10 will be slower than a 1-disk setup unless the former respects fsync() and the latter does not. Individual drive write cache does not explain the situation. That is what does not make sense.\n\nWhen in _write-through_ mode, there is no reason to turn off the drive's write cache unless the drive does not properly respect its cache-flush command, or the RAID card is too dumb to issue cache-flush commands. The RAID card simply has to issue its writes, then issue the flush commands, then return to the OS when those complete. With drive write caches on, this is perfectly safe. The only way it is unsafe is if the drive lies and returns from a cache flush before the data from its cache is actually flushed.\n\nSome SSD's on the market currently lie. A handful of the thousands of all hard drive models in the server, desktop, and laptop space in the last decade did not respect the cache flush command properly, and none of them in the SAS/SCSI or 'enterprise SATA' space lie to my knowledge. Information on this topic has come across this list several times.\n\nThe explanation why one setup respects fsync() and another does not almost always lies in the FS + OS combination. HFS+ on OSX does not respect fsync. ext3 until recently only did fdatasync() when you told it to fsync() (which is fine for postgres' transaction log anyway).\n\nA raid card, especially with any SAS/SCSI drives has no reason to turn off the drive's write cache unless it _wants_ to return to the OS before the data is on the drive. That condition occurs in write-back cache mode when the RAID card's cache is safe via a battery or some other mechanism. In that case, it should turn off the drive's write cache so that it can be sure that data is on disk when a power fails without having to call the cache-flush command on every write. That way, it can remove data from its RAM as soon as the drive returns from the write.\nIn write-through mode it should turn the caches back on and rely on the flush command to pass through direct writes, cache flush demands, and barrier requests. It could optionally turn the caches off, but that won't improve data safety unless the drive cannot faithfully flush its cache.\n\n\n", "msg_date": "Thu, 15 Jul 2010 20:42:20 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On Jul 15, 2010, at 8:16 PM, Scott Carey wrote:\n\n> On Jul 15, 2010, at 12:35 PM, Ben Chobot wrote:\n> \n>> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:\n>> \n>>>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. 
Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n>>> \n>>> This does not make sense.\n>>> Write caching on all hard drives in the last decade are safe because they support a write cache flush command properly. If the card is \"smart\" it would issue the drive's write cache flush command to fulfill an fsync() or barrier request with no BBU.\n>> \n>> You're missing the point. If the power dies suddenly, there's no time to flush any cache anywhere. That's the entire point of the BBU - it keeps the RAM powered up on the raid card. It doesn't keep the disks spinning long enough to flush caches.\n> \n> If the power dies suddenly, then the data that is in the OS RAM will also be lost. What about that? \n> \n> Well it doesn't matter because the DB is only relying on data being persisted to disk that it thinks has been persisted to disk via fsync().\n\nRight, we agree that only what has been fsync()'d has a chance to be safe....\n\n> The data in the disk cache is the same thing as RAM. As long as fsync() works _properly_ which is true for any file system + disk combination with a damn (not HFS+ on OSX, not FAT, not a few other things), then it will tell the drive to flush its cache _before_ fsync() returns. There is NO REASON for a raid card to turn off a drive cache unless it does not trust the drive cache. In write-through mode, it should not return to the OS with a fsync, direct write, or other \"the OS thinks this data is persisted now\" call until it has flushed the disk cache. That does not mean it has to turn off the disk cache.\n\n...and here you are also right in that a write-through write cache is safe, with or without a battery. A write-through cache is a win for things that don't often fsync, but my understanding is that with a database, you end up fsyncing all the time, which makes a write-through cache not worth very much. The only good way to get good *database* performance out of spinning media is with a write-back cache, and the only way to make that safe is to hook up a BBU.\n\n", "msg_date": "Thu, 15 Jul 2010 21:30:31 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 16/07/10 06:18, Ben Chobot wrote:\n\n> There are also caches on all your disk drives. Write caching there is always dangerous, which is why almost all raid cards always disable the hard drive write caching, with or without a BBU. I'm not even sure how many raid cards let you enable the write cache on a drive... hopefully, not many.\n\nAFAIK Disk drive caches can be safe to leave in write-back mode (ie\nwrite cache enabled) *IF* the OS uses write barriers (properly) and the\ndrive understands them.\n\nBig if.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 16 Jul 2010 19:17:53 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "On 16/07/10 09:22, Scott Marlowe wrote:\n> On Thu, Jul 15, 2010 at 10:30 AM, Scott Carey <[email protected]> wrote:\n>>\n>> On Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:\n>>\n>>> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:\n>>>\n>>>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk system. 
If there is no write-back caching on the RAID, it should still be similar to the one disk setup.\n>>>\n>>> Many raid controllers are smart enough to always turn off write caching on the drives, and also disable the feature on their own buffer without a BBU. Add a BBU, and the cache on the controller starts getting used, but *not* the cache on the drives.\n>>\n>> This does not make sense.\n> \n> Basically, you can have cheap, fast and dangerous (drive with write\n> cache enabled, which responds positively to fsync even when it hasn't\n> actually fsynced the data. You can have cheap, slow and safe with a\n> drive that has a cache but since it'll be fsyncing it all the the time\n> the write cache won't actually get used, or fast, expensive, and safe,\n> which is what a BBU RAID card gets by saying the data is fsynced when\n> it's actually just in cache, but a safe cache that won't get lost on\n> power down.\n\nSpeaking of BBUs... do you ever find yourself wishing you could use\nsoftware RAID with battery backup?\n\nI tend to use software RAID quite heavily on non-database servers, as\nit's cheap, fast, portable from machine to machine, and (in the case of\nLinux 'md' raid) reliable. Alas, I can't really use it for DB servers\ndue to the need for write-back caching.\n\nThere's no technical reason I know of why sw raid couldn't write-cache\nto some non-volatile memory on the host. A dedicated a battery-backed\npair of DIMMS on a PCI-E card mapped into memory would be ideal. Failing\nthat, a PCI-E card with onboard RAM+BATT or fast flash that presents an\nAHCI interface so it can be used as a virtual HDD would do pretty well.\nEven one of those SATA \"RAM Drive\" units would do the job, though\nforcing everything though the SATA2 bus would be a performance downside.\n\nThe only issue I see with sw raid write caching is that it probably\ncouldn't be done safely on the root file system. The OS would have to\ncome up, init software raid, and find the caches before it'd be safe to\nread or write volumes with s/w raid write caching enabled. It's not the\nsort of thing that'd be practical to implement in GRUB's raid support.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 16 Jul 2010 19:36:10 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" }, { "msg_contents": "Scott Carey wrote:\n> As long as fsync() works _properly_ which is true for any file system + disk combination with a damn (not HFS+ on OSX, not FAT, not a few other things), then it will tell the drive to flush its cache _before_ fsync() returns. There is NO REASON for a raid card to turn off a drive cache unless it does not trust the drive cache. In write-through mode, it should not return to the OS with a fsync, direct write, or other \"the OS thinks this data is persisted now\" call until it has flushed the disk cache. That does not mean it has to turn off the disk cache.\n> \n\nAssuming that the operating system will pass through fsync calls to \nflush data all the way to drive level in all situations is an extremely \ndangerous assumption. Most RAID controllers don't know how to force \nthings out of the individual drive caches; that's why they turn off \nwrite caching on them. Few filesystems get the details right to handle \nindividual drive cache flushing correctly. 
On Linux, XFS and ext4 are \nthe only two with any expectation that will happen, and of those two \next4 is still pretty new and therefore should still be presumed to be buggy.\n\nPlease don't advise people about what is safe based on theoretical \ngrounds here, in practice there are way too many bugs in the \nimplementation of things like drive barriers to trust them most of the \ntime. There is no substitute for a pull the plug test using something \nthat looks for bad cache flushes, i.e. diskchecker.pl: \nhttp://brad.livejournal.com/2116715.html If you do that you'll discover \nyou must turn off the individual drive caches when using a \nbattery-backed RAID controller, and you can't ever trust barriers on \next3 because of bugs that were only fixed in ext4.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 16 Jul 2010 12:55:29 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance on new linux box" } ]
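A minimal way to put numbers on the behaviour discussed in the thread above, before trusting any particular controller/driver/filesystem stack, is to time fsync() directly on the device in question. The sketch below is illustrative only: the path is a placeholder, and the rotation figure assumes an ordinary 7200 rpm disk, where an honest flush costs at least one platter rotation (roughly 8 ms). It is not a substitute for diskchecker.pl or a real pull-the-plug test.

    import os, time

    # Rough fsync-latency probe (illustrative sketch; point the path at a
    # filesystem that lives on the device you want to test).
    path = "/mnt/device-under-test/fsync_probe.dat"
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    samples = 200
    start = time.time()
    for _ in range(samples):
        os.write(fd, b"x" * 8192)   # append 8 kB
        os.fsync(fd)                # ask for it to be made durable
    elapsed = time.time() - start
    os.close(fd)
    os.remove(path)
    print("average fsync latency: %.3f ms" % (elapsed * 1000.0 / samples))

If the reported average comes out well under a millisecond on plain spinning media with no battery-backed cache in the path, something between the OS and the platters is acknowledging flushes it has not actually performed.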
[ { "msg_contents": "Hi,\n\n\nI want to fine tune my postgresql to increase number of connects it\ncan handle in a minutes time.\nDecrease the response time per request etc.\nThe exact case will be to handle around 100 concurrent requests.\n\nCan any one please help me in this.\nAny hardware suggestions are also welcomed.\n\n\nRegards\nHarpreet\n", "msg_date": "Fri, 9 Jul 2010 00:50:30 +0530", "msg_from": "Harpreet singh Wadhwa <[email protected]>", "msg_from_op": true, "msg_subject": "Need help in performance tuning." }, { "msg_contents": "On 9/07/2010 3:20 AM, Harpreet singh Wadhwa wrote:\n> Hi,\n>\n>\n> I want to fine tune my postgresql to increase number of connects it\n> can handle in a minutes time.\n> Decrease the response time per request etc.\n> The exact case will be to handle around 100 concurrent requests.\n\nIf you're not using a connection pool, start using one.\n\nDo you really need 100 *active* working query threads at one time? \nBecause if you do, you're going to need a scary-big disk subsystem and a \nlot of processors.\n\nMost people actually only need a few queries executing simultaneously, \nthey just need lots of connections to the database open concurrently \nand/or lots of queries queued up for processing. For that purpose, a \nconnection pool is ideal.\n\nYou will get BETTER performance from postgresql with FEWER connections \nto the \"real\" database that're all doing active work. If you need lots \nand lots of connections you should use a connection pool to save the \nmain database the overhead of managing that.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 09 Jul 2010 11:11:02 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n<[email protected]> wrote:\n> If you're not using a connection pool, start using one.\n>\n> Do you really need 100 *active* working query threads at one time? Because\n> if you do, you're going to need a scary-big disk subsystem and a lot of\n> processors.\n\nI see this issue and subsequent advice cross this list awfully\nfrequently. Is there in architectural reason why postgres itself\ncannot pool incoming connections in order to eliminate the requirement\nfor an external pool?\n", "msg_date": "Thu, 8 Jul 2010 21:19:22 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Samuel Gendler <[email protected]> writes:\n> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n> <[email protected]> wrote:\n>> If you're not using a connection pool, start using one.\n\n> I see this issue and subsequent advice cross this list awfully\n> frequently. Is there in architectural reason why postgres itself\n> cannot pool incoming connections in order to eliminate the requirement\n> for an external pool?\n\nPerhaps not, but there's no obvious benefit either. Since there's\nMore Than One Way To Do It, it seems more practical to keep that as a\nseparate problem that can be solved by a choice of add-on packages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jul 2010 00:42:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning. 
" }, { "msg_contents": "On Fri, 2010-07-09 at 00:42 -0400, Tom Lane wrote:\n> Samuel Gendler <[email protected]> writes:\n> > On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n> > <[email protected]> wrote:\n> >> If you're not using a connection pool, start using one.\n> \n> > I see this issue and subsequent advice cross this list awfully\n> > frequently. Is there in architectural reason why postgres itself\n> > cannot pool incoming connections in order to eliminate the requirement\n> > for an external pool?\n> \n> Perhaps not, but there's no obvious benefit either. Since there's\n> More Than One Way To Do It, it seems more practical to keep that as a\n> separate problem that can be solved by a choice of add-on packages.\n\nOne example where you need a separate connection pool is pooling really\nlarge number of connections, which you may want to do on another host\nthan the database itself is running.\n\nFor example pgbouncer had to add option to use incoming unix sockets,\nbecause they run into the IP socket port number limit (a little above\n31k, or more exactly 63k/2.\nAnd unix sockets can be used only on local host .\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Fri, 09 Jul 2010 08:48:49 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On 09/07/10 12:42, Tom Lane wrote:\n> Samuel Gendler <[email protected]> writes:\n>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n>> <[email protected]> wrote:\n>>> If you're not using a connection pool, start using one.\n> \n>> I see this issue and subsequent advice cross this list awfully\n>> frequently. Is there in architectural reason why postgres itself\n>> cannot pool incoming connections in order to eliminate the requirement\n>> for an external pool?\n> \n> Perhaps not, but there's no obvious benefit either. Since there's\n> More Than One Way To Do It, it seems more practical to keep that as a\n> separate problem that can be solved by a choice of add-on packages.\n\nAdmittedly I'm relatively ignorant of the details, but I increasingly\nthink PostgreSQL will need a different big architectural change in the\ncoming years, as the typical performance characteristics of machines\nchange:\n\nIt'll need to separate \"running queries\" from \"running processes\", or\nstart threading backends, so that one way or the other a single query\ncan benefit from the capabilities of multiple CPUs. The same separation,\nor a move to async I/O, might be needed to get one query to concurrently\nread multiple partitions of a table, or otherwise get maximum benefit\nfrom high-capacity I/O subsystems when running just a few big, expensive\nqueries.\n\nOtherwise I'm wondering if PostgreSQL will begin really suffering in\nperformance on workloads where queries are big and expensive but there\nare relatively few of them running at a time.\n\nMy point? *if* I'm not full of hot air and there's some truth to my\nblather above, any change like that might be accompanied by a move to\nseparate query execution state from connection state, so that idle\nconnections have a much lower resource cost.\n\nOK, that's my hand-waving for the day done.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 09 Jul 2010 15:52:23 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." 
}, { "msg_contents": "Thanx you all for the replies.\nI got a gist on where should I head towards\nlike I should rely a bit on postgres for performance and rest on my\ntomcat and application.\nAnd will try connection pooling on postgres part.\n\nAnd if I come back for any query (related to this topic) then this\ntime it will be more precise (with real time data of my testing). ;-)\n\nRegards\nhaps\n\nOn Fri, Jul 9, 2010 at 1:22 PM, Craig Ringer\n<[email protected]> wrote:\n> On 09/07/10 12:42, Tom Lane wrote:\n>> Samuel Gendler <[email protected]> writes:\n>>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n>>> <[email protected]> wrote:\n>>>> If you're not using a connection pool, start using one.\n>>\n>>> I see this issue and subsequent advice cross this list awfully\n>>> frequently.  Is there in architectural reason why postgres itself\n>>> cannot pool incoming connections in order to eliminate the requirement\n>>> for an external pool?\n>>\n>> Perhaps not, but there's no obvious benefit either.  Since there's\n>> More Than One Way To Do It, it seems more practical to keep that as a\n>> separate problem that can be solved by a choice of add-on packages.\n>\n> Admittedly I'm relatively ignorant of the details, but I increasingly\n> think PostgreSQL will need a different big architectural change in the\n> coming years, as the typical performance characteristics of machines\n> change:\n>\n> It'll need to separate \"running queries\" from \"running processes\", or\n> start threading backends, so that one way or the other a single query\n> can benefit from the capabilities of multiple CPUs. The same separation,\n> or a move to async I/O, might be needed to get one query to concurrently\n> read multiple partitions of a table, or otherwise get maximum benefit\n> from high-capacity I/O subsystems when running just a few big, expensive\n> queries.\n>\n> Otherwise I'm wondering if PostgreSQL will begin really suffering in\n> performance on workloads where queries are big and expensive but there\n> are relatively few of them running at a time.\n>\n> My point? *if* I'm not full of hot air and there's some truth to my\n> blather above, any change like that might be accompanied by a move to\n> separate query execution state from connection state, so that idle\n> connections have a much lower resource cost.\n>\n> OK, that's my hand-waving for the day done.\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 9 Jul 2010 14:25:08 +0530", "msg_from": "Harpreet singh Wadhwa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Thanx you all for the replies.\nI got a gist on where should I head towards\nlike I should rely a bit on postgres for performance and rest on my\ntomcat and application.\nAnd will try connection pooling on postgres part.\n\nAnd if I come back for any query (related to this topic) then this\ntime it will be more precise (with real time data of my testing). 
;-)\n\nRegards\nhaps\n\nOn Fri, Jul 9, 2010 at 1:22 PM, Craig Ringer\n<[email protected]> wrote:\n> On 09/07/10 12:42, Tom Lane wrote:\n>> Samuel Gendler <[email protected]> writes:\n>>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n>>> <[email protected]> wrote:\n>>>> If you're not using a connection pool, start using one.\n>>\n>>> I see this issue and subsequent advice cross this list awfully\n>>> frequently.  Is there in architectural reason why postgres itself\n>>> cannot pool incoming connections in order to eliminate the requirement\n>>> for an external pool?\n>>\n>> Perhaps not, but there's no obvious benefit either.  Since there's\n>> More Than One Way To Do It, it seems more practical to keep that as a\n>> separate problem that can be solved by a choice of add-on packages.\n>\n> Admittedly I'm relatively ignorant of the details, but I increasingly\n> think PostgreSQL will need a different big architectural change in the\n> coming years, as the typical performance characteristics of machines\n> change:\n>\n> It'll need to separate \"running queries\" from \"running processes\", or\n> start threading backends, so that one way or the other a single query\n> can benefit from the capabilities of multiple CPUs. The same separation,\n> or a move to async I/O, might be needed to get one query to concurrently\n> read multiple partitions of a table, or otherwise get maximum benefit\n> from high-capacity I/O subsystems when running just a few big, expensive\n> queries.\n>\n> Otherwise I'm wondering if PostgreSQL will begin really suffering in\n> performance on workloads where queries are big and expensive but there\n> are relatively few of them running at a time.\n>\n> My point? *if* I'm not full of hot air and there's some truth to my\n> blather above, any change like that might be accompanied by a move to\n> separate query execution state from connection state, so that idle\n> connections have a much lower resource cost.\n>\n> OK, that's my hand-waving for the day done.\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 9 Jul 2010 15:01:05 +0530", "msg_from": "Harpreet singh Wadhwa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "> Otherwise I'm wondering if PostgreSQL will begin really suffering in\n> performance on workloads where queries are big and expensive but there\n> are relatively few of them running at a time.\n\nOh, I should note at this point that I'm *not* whining that \"someone\" \nshould volunteer to do this, or that \"the postgresql project\" should \njust \"make it happen\".\n\nI'm fully aware that Pg is a volunteer project and that even if these \nspeculations were in a vaguely reasonable direction, that doesn't mean \nanyone has the time/skills/knowledge/interest to undertake such major \narchitectural change. I certainly know I have zero right to ask/expect \nanyone to - I'm very, very grateful to all those who already spend time \nhelping out and enhancing Pg. With particular props to Tom Lane for \npatience on the -general list and heroic bug-fixing persistence.\n\nSorry for the rely-to-self, I just realized my post could've been taken \nas a whine about Pg's architecture and some kind of demand that someone \ndo something about it. 
That couldn't be further from my intent.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 09 Jul 2010 17:53:47 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Fri, 2010-07-09 at 00:42 -0400, Tom Lane wrote:\n> Samuel Gendler <[email protected]> writes:\n> > On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n> > <[email protected]> wrote:\n> >> If you're not using a connection pool, start using one.\n> \n> > I see this issue and subsequent advice cross this list awfully\n> > frequently. Is there in architectural reason why postgres itself\n> > cannot pool incoming connections in order to eliminate the requirement\n> > for an external pool?\n> \n> Perhaps not, but there's no obvious benefit either. Since there's\n> More Than One Way To Do It, it seems more practical to keep that as a\n> separate problem that can be solved by a choice of add-on packages.\n\nThis sounds similar to the approach to taken with Replication for years\nbefore being moved into core.\n\nJust like replication, pooling has different approaches. I do think\nthat in both cases, having a solution that works, easily, out of the\n\"box\" will meet the needs of most users.\n\nThere is also the issue of perception/adoption here as well. One of my\ncolleagues mentioned that at PG East that he repeatedly heard people\ntalking (negatively) about the over reliance on add-on packages to deal\nwith core DB functionality.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Fri, 09 Jul 2010 08:56:25 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Brad Nicholson <[email protected]> wrote:\n \n> Just like replication, pooling has different approaches. I do\n> think that in both cases, having a solution that works, easily,\n> out of the \"box\" will meet the needs of most users.\n \nAny thoughts on the \"minimalist\" solution I suggested a couple weeks\nago?:\n \nhttp://archives.postgresql.org/pgsql-hackers/2010-06/msg01385.php\nhttp://archives.postgresql.org/pgsql-hackers/2010-06/msg01387.php\n \nSo far, there has been no comment by anyone....\n \n-Kevin\n", "msg_date": "Fri, 09 Jul 2010 10:09:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Fri, 9 Jul 2010, Kevin Grittner wrote:\n> Any thoughts on the \"minimalist\" solution I suggested a couple weeks\n> ago?:\n>\n> http://archives.postgresql.org/pgsql-hackers/2010-06/msg01385.php\n> http://archives.postgresql.org/pgsql-hackers/2010-06/msg01387.php\n>\n> So far, there has been no comment by anyone....\n\nInteresting idea. As far as I can see, you are suggesting solving the too \nmany connections problem by allowing lots of connections, but only \nallowing a certain number to do anything at a time?\n\nA proper connection pool provides the following advantages over this:\n\n1. Pool can be on a separate machine or machines, spreading load.\n2. Pool has a lightweight footprint per connection, whereas Postgres\n doesn't.\n3. A large amount of the overhead is sometimes connection setup, which\n this would not solve. A pool has cheap setup.\n4. This could cause Postgres backends to be holding onto large amounts of\n memory while being prevented from doing anything, which is a bad use of\n resources.\n5. 
A fair amount of the overhead is caused by context-switching between\n backends. The more backends, the less useful any CPU caches.\n6. There are some internal workings of Postgres that involve keeping all\n the backends informed about something going on. The more backends, the\n greater this overhead is. (This was pretty bad with the sinval queue\n overflowing a while back, but a bit better now. It still causes some\n overhead).\n7. That lock would have a metric *($!-load of contention.\n\nMatthew\n\n-- \n Unfortunately, university regulations probably prohibit me from eating\n small children in front of the lecture class.\n -- Computer Science Lecturer\n", "msg_date": "Fri, 9 Jul 2010 16:30:26 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "In case there's any doubt, the questions below aren't rhetorical.\n \nMatthew Wakeling <[email protected]> wrote:\n \n> Interesting idea. As far as I can see, you are suggesting solving\n> the too many connections problem by allowing lots of connections,\n> but only allowing a certain number to do anything at a time?\n \nRight.\n \n> A proper connection pool provides the following advantages over\n> this:\n> \n> 1. Pool can be on a separate machine or machines, spreading load.\n \nSure, but how would you do that with a built-in implementation?\n \n> 2. Pool has a lightweight footprint per connection, whereas\n> Postgres doesn't.\n \nI haven't compared footprint of, say, a pgpool connection on the\ndatabase server to that of an idle PostgreSQL connection. Do you\nhave any numbers?\n \n> 3. A large amount of the overhead is sometimes connection setup,\n> which this would not solve. A pool has cheap setup.\n \nThis would probably be most useful where the client held a\nconnection for a long time, not for the \"login for each database\ntransaction\" approach. I'm curious how often you think application\nsoftware uses that approach.\n \n> 4. This could cause Postgres backends to be holding onto large\n> amounts of memory while being prevented from doing anything,\n> which is a bad use of resources.\n \nIsn't this point 2 again? If not, what are you getting at? Again,\ndo you have numbers for the comparison, assuming the connection\npooler is running on the database server?\n \n> 5. A fair amount of the overhead is caused by context-switching\n> between backends. The more backends, the less useful any CPU\n> caches.\n \nWould this be true while a backend was blocked? Would this not be\ntrue for a connection pool client-side connection?\n \n> 6. There are some internal workings of Postgres that involve\n> keeping all the backends informed about something going on. The\n> more backends, the greater this overhead is. (This was pretty\n> bad with the sinval queue overflowing a while back, but a bit\n> better now. It still causes some overhead).\n \nHmmm... I hadn't thought about that. Again, any numbers (e.g.,\nprofile information) on this?\n \n> 7. That lock would have a metric *($!-load of contention.\n \nHere I doubt you. It would be held for such short periods that I\nsuspect that collisions would be relatively infrequent compared to\nsome of the other locks we use. As noted in the email, it may\nactually normally be an \"increment and test\" within an existing\nlocked block. 
Also, assuming that any \"built in\" connection pool\nwould run on the database server, why would you think the contention\nfor this would be worse than for whatever is monitoring connection\ncount in the pooler?\n \n-Kevin\n", "msg_date": "Fri, 09 Jul 2010 10:49:17 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "If your app is running under Tomcat, connection pooling is extremely easy to set up from there: It has connection pooling mechanisms built in. Request your db connections using said mechanisms, instead of doing it manually, make a couple of changes to server.xml, and the problem goes away. Hundreds, if not thousands of concurrent users might end up running with less than 10 connections.\n \n\n\n>>> Harpreet singh Wadhwa <[email protected]> 7/9/2010 3:55 AM >>>\nThanx you all for the replies.\nI got a gist on where should I head towards\nlike I should rely a bit on postgres for performance and rest on my\ntomcat and application.\nAnd will try connection pooling on postgres part.\n\nAnd if I come back for any query (related to this topic) then this\ntime it will be more precise (with real time data of my testing). ;-)\n\nRegards\nhaps\n\nOn Fri, Jul 9, 2010 at 1:22 PM, Craig Ringer\n<[email protected]> wrote:\n> On 09/07/10 12:42, Tom Lane wrote:\n>> Samuel Gendler <[email protected]> writes:\n>>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n>>> <[email protected]> wrote:\n>>>> If you're not using a connection pool, start using one.\n>>\n>>> I see this issue and subsequent advice cross this list awfully\n>>> frequently. Is there in architectural reason why postgres itself\n>>> cannot pool incoming connections in order to eliminate the requirement\n>>> for an external pool?\n>>\n>> Perhaps not, but there's no obvious benefit either. Since there's\n>> More Than One Way To Do It, it seems more practical to keep that as a\n>> separate problem that can be solved by a choice of add-on packages.\n>\n> Admittedly I'm relatively ignorant of the details, but I increasingly\n> think PostgreSQL will need a different big architectural change in the\n> coming years, as the typical performance characteristics of machines\n> change:\n>\n> It'll need to separate \"running queries\" from \"running processes\", or\n> start threading backends, so that one way or the other a single query\n> can benefit from the capabilities of multiple CPUs. The same separation,\n> or a move to async I/O, might be needed to get one query to concurrently\n> read multiple partitions of a table, or otherwise get maximum benefit\n> from high-capacity I/O subsystems when running just a few big, expensive\n> queries.\n>\n> Otherwise I'm wondering if PostgreSQL will begin really suffering in\n> performance on workloads where queries are big and expensive but there\n> are relatively few of them running at a time.\n>\n> My point? 
*if* I'm not full of hot air and there's some truth to my\n> blather above, any change like that might be accompanied by a move to\n> separate query execution state from connection state, so that idle\n> connections have a much lower resource cost.\n>\n> OK, that's my hand-waving for the day done.\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \n>\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\nIf your app is running under Tomcat, connection pooling is extremely easy to set up from there: It has connection pooling mechanisms built in. Request your db connections using said mechanisms, instead of doing it manually, make a couple of changes to server.xml, and the problem goes away. Hundreds, if not thousands of concurrent users might end up running with less than 10 connections.\n \n>>> Harpreet singh Wadhwa <[email protected]> 7/9/2010 3:55 AM >>>Thanx you all for the replies.I got a gist on where should I head towardslike I should rely a bit on postgres for performance and rest on mytomcat and application.And will try connection pooling on postgres part.And if I come back for any query (related to this topic) then thistime it will be more precise (with real time data of my testing). ;-)RegardshapsOn Fri, Jul 9, 2010 at 1:22 PM, Craig Ringer<[email protected]> wrote:> On 09/07/10 12:42, Tom Lane wrote:>> Samuel Gendler <[email protected]> writes:>>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer>>> <[email protected]> wrote:>>>> If you're not using a connection pool, start using one.>>>>> I see this issue and subsequent advice cross this list awfully>>> frequently.  Is there in architectural reason why postgres itself>>> cannot pool incoming connections in order to eliminate the requirement>>> for an external pool?>>>> Perhaps not, but there's no obvious benefit either.  Since there's>> More Than One Way To Do It, it seems more practical to keep that as a>> separate problem that can be solved by a choice of add-on packages.>> Admittedly I'm relatively ignorant of the details, but I increasingly> think PostgreSQL will need a different big architectural change in the> coming years, as the typical performance characteristics of machines> change:>> It'll need to separate \"running queries\" from \"running processes\", or> start threading backends, so that one way or the other a single query> can benefit from the capabilities of multiple CPUs. The same separation,> or a move to async I/O, might be needed to get one query to concurrently> read multiple partitions of a table, or otherwise get maximum benefit> from high-capacity I/O subsystems when running just a few big, expensive> queries.>> Otherwise I'm wondering if PostgreSQL will begin really suffering in> performance on workloads where queries are big and expensive but there> are relatively few of them running at a time.>> My point? 
*if* I'm not full of hot air and there's some truth to my> blather above, any change like that might be accompanied by a move to> separate query execution state from connection state, so that idle> connections have a much lower resource cost.>> OK, that's my hand-waving for the day done.>> --> Craig Ringer>> --> Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance>-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 09 Jul 2010 11:44:12 -0500", "msg_from": "\"Jorge Montero\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Fri, 9 Jul 2010, Kevin Grittner wrote:\n>> Interesting idea. As far as I can see, you are suggesting solving\n>> the too many connections problem by allowing lots of connections,\n>> but only allowing a certain number to do anything at a time?\n>\n> Right.\n\nI think in some situations, this arrangement would be an advantage. \nHowever, I do not think it will suit the majority of situations, and could \nreduce the performance when the user doesn't need the functionality, \neither because they have a pool already, or they don't have many \nconnections.\n\nNo, I don't have any numbers.\n\n>> 1. Pool can be on a separate machine or machines, spreading load.\n>\n> Sure, but how would you do that with a built-in implementation?\n\nThat's my point exactly. If you have an external pool solution, you can \nput it somewhere else - maybe on multiple somewhere elses.\n\n>> 3. A large amount of the overhead is sometimes connection setup,\n>> which this would not solve. A pool has cheap setup.\n>\n> This would probably be most useful where the client held a\n> connection for a long time, not for the \"login for each database\n> transaction\" approach. I'm curious how often you think application\n> software uses that approach.\n\nWhat you say is true. I don't know how often that is, but it seems to be \nthose times that people come crying to the mailing list.\n\n>> 4. This could cause Postgres backends to be holding onto large\n>> amounts of memory while being prevented from doing anything,\n>> which is a bad use of resources.\n>\n> Isn't this point 2 again?\n\nKind of. Yes. Point 2 was simple overhead. This point was that the backend \nmay have done a load of query-related allocation, and then been stopped.\n\n>> 7. That lock would have a metric *($!-load of contention.\n>\n> Here I doubt you. It would be held for such short periods that I\n> suspect that collisions would be relatively infrequent compared to\n> some of the other locks we use. As noted in the email, it may\n> actually normally be an \"increment and test\" within an existing\n> locked block.\n\nFair enough. It may be much less of a problem than I had previously \nthought.\n\nMatthew\n\n-- \n Change is inevitable, except from vending machines.\n", "msg_date": "Fri, 9 Jul 2010 18:03:51 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote:\n> On Fri, 9 Jul 2010, Kevin Grittner wrote:\n>>> Interesting idea. 
As far as I can see, you are suggesting\n>>> solving the too many connections problem by allowing lots of\n>>> connections, but only allowing a certain number to do anything\n>>> at a time?\n>>\n>> Right.\n> \n> I think in some situations, this arrangement would be an\n> advantage. However, I do not think it will suit the majority of\n> situations, and could reduce the performance when the user doesn't\n> need the functionality, either because they have a pool already,\n> or they don't have many connections.\n \nOh, totally agreed, except that I think we can have essentially nil\nimpact if they don't exceed a configured limit. In my experience,\npooling is more effective the closer you put it to the client. I\nsuppose the strongest argument that could be made against building\nin some sort of pooling is that it doesn't encourage people to look\nfor client-side solutions. However, we seem to get a lot of posts\nfrom people who don't do this, are not able to easily manage it, and\nwho would benefit from even a simple solution like this.\n \n-Kevin\n", "msg_date": "Fri, 09 Jul 2010 12:52:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "If anything was built in the database to handle such connections, I'd recommend a big, bold warning, recommending the use of client-side pooling if available. For something like, say, a web-server, pooling connections to the database provides a massive performance advantage regardless of how good the database is at handling way more active queries than the hardware can handle: The assignment of a connection to a thread tends to be at least an order of magnitude cheaper than establishing a new connection for each new thread, and destroying it when it dies. This is especially true if the client architecture relies in relatively short lived threads.\n \nWhile there are a few cases where pooling is counter productive, this only happens in relatively few scenarios. This is why every java application server out there wil strongly recommend using its own facilities to connect to a database: The performance is almost always better, and it provides less headaches to the DBAs.\n \nNow, if remote clients are accessing your database directly, setting up a pool inbetween might not be as straightforward or give you the same gains across the board, and that might be the only case where letting the db do its own pooling makes sense.\n\n>>> \"Kevin Grittner\" <[email protected]> 7/9/2010 12:52 PM >>>\nMatthew Wakeling <[email protected]> wrote:\n> On Fri, 9 Jul 2010, Kevin Grittner wrote:\n>>> Interesting idea. As far as I can see, you are suggesting\n>>> solving the too many connections problem by allowing lots of\n>>> connections, but only allowing a certain number to do anything\n>>> at a time?\n>>\n>> Right.\n> \n> I think in some situations, this arrangement would be an\n> advantage. However, I do not think it will suit the majority of\n> situations, and could reduce the performance when the user doesn't\n> need the functionality, either because they have a pool already,\n> or they don't have many connections.\n\nOh, totally agreed, except that I think we can have essentially nil\nimpact if they don't exceed a configured limit. In my experience,\npooling is more effective the closer you put it to the client. 
I\nsuppose the strongest argument that could be made against building\nin some sort of pooling is that it doesn't encourage people to look\nfor client-side solutions. However, we seem to get a lot of posts\nfrom people who don't do this, are not able to easily manage it, and\nwho would benefit from even a simple solution like this.\n \n-Kevin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 09 Jul 2010 14:27:44 -0500", "msg_from": "\"Jorge Montero\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." 
}, { "msg_contents": "\"Jorge Montero\" <[email protected]> wrote:\n \n> If anything was built in the database to handle such connections,\n> I'd recommend a big, bold warning, recommending the use of client-\n> side pooling if available.\n \n+1\n \n-Kevin\n", "msg_date": "Fri, 09 Jul 2010 14:31:47 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Fri, Jul 9, 2010 at 12:42 AM, Tom Lane <[email protected]> wrote:\n> Samuel Gendler <[email protected]> writes:\n>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer\n>> <[email protected]> wrote:\n>>> If you're not using a connection pool, start using one.\n>\n>> I see this issue and subsequent advice cross this list awfully\n>> frequently.  Is there in architectural reason why postgres itself\n>> cannot pool incoming connections in order to eliminate the requirement\n>> for an external pool?\n>\n> Perhaps not, but there's no obvious benefit either.  Since there's\n> More Than One Way To Do It, it seems more practical to keep that as a\n> separate problem that can be solved by a choice of add-on packages.\n\nI'm not buying it. A separate connection pooler increases overhead\nand management complexity, and, I believe, limits our ability to\nimplement optimizations like parallel query execution. I'm glad there\nare good ones available, but the fact that they're absolutely\nnecessary for good performance in some environments is not a feature.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 9 Jul 2010 16:35:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Matthew Wakeling wrote:\n> If you have an external pool solution, you can put it somewhere else - \n> maybe on multiple somewhere elses.\n\nThis is the key point to observe: if you're at the point where you have \nso many connections that you need a pool, the last place you want to put \nthat is on the overloaded database server itself. Therefore, it must be \nan external piece of software to be effective, rather than being part of \nthe server itself. Database servers are relatively expensive computing \nhardware due to size/quantity/quality of disks required. You can throw \na pooler (or poolers) on any cheap 1U server. This is why a built-in \npooler, while interesting, is not particularly functional for how people \nnormally scale up real-world deployments.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 09 Jul 2010 23:59:13 +0100", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> if you're at the point where you have so many connections that you\n> need a pool, the last place you want to put that is on the\n> overloaded database server itself. 
Therefore, it must be an\n> external piece of software to be effective, rather than being part\n> of the server itself.\n \nIt *is* the last place you want to put it, but putting it there can\nbe much better than not putting it *anywhere*, which is what we've\noften seen.\n \n-Kevin\n", "msg_date": "Fri, 09 Jul 2010 19:31:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On 10/07/10 00:56, Brad Nicholson wrote:\n> On Fri, 2010-07-09 at 00:42 -0400, Tom Lane wrote:\n> \n>>\n>> Perhaps not, but there's no obvious benefit either. Since there's\n>> More Than One Way To Do It, it seems more practical to keep that as a\n>> separate problem that can be solved by a choice of add-on packages.\n>> \n> This sounds similar to the approach to taken with Replication for years\n> before being moved into core.\n>\n> Just like replication, pooling has different approaches. I do think\n> that in both cases, having a solution that works, easily, out of the\n> \"box\" will meet the needs of most users.\n>\n> There is also the issue of perception/adoption here as well. One of my\n> colleagues mentioned that at PG East that he repeatedly heard people\n> talking (negatively) about the over reliance on add-on packages to deal\n> with core DB functionality.\n>\n> \n\nIt would be interesting to know more about what they thought an 'over \nreliance' was and which packages they meant.\n\nWhile clearly in the case of replication something needed to be done to \nmake it better and easier, it is not obvious that the situation with \nconnection pools is analogous. For instance we make extensive use of \nPgBouncer, and it seems to do the job fine and is ridiculously easy to \ninstall and setup. So would having (something like) this in core be an \nimprovement? Clearly if the 'in core' product is better then it is \ndesirable... similarly if the packaged product is better... well let's \nhave that then!\n\nI've certainly observed a 'fear of package installation' on the part of \nsome folk, which is often a hangover from the 'Big IT shop' mentality \nwhere it requires blood signatures and child sacrifice to get anything \nnew installed.\n\nregards\n\nMark\n\nP.s Also note that Database Vendors like pooling integrated in the core \nof *their* product because it is another thing to charge a license for. \nUnfortunately this can also become an entrenched mentality of 'must be \nin core' on the part of consultants etc!\n", "msg_date": "Sat, 10 Jul 2010 13:01:44 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "\n> It *is* the last place you want to put it, but putting it there can\n> be much better than not putting it *anywhere*, which is what we've\n> often seen.\n\nWell, what you proposed is an admission control mechanism, which is\n*different* from a connection pool, although the two overlap. A\nconnection pool solves 4 problems when it's working:\n\na) limiting the number of database server processes\nb) limiting the number of active concurrent queries\nc) reducing response times for allocating a new connection\nd) allowing management of connection routes to the database\n(redirection, failover, etc.)\n\nWhat you were proposing is only (b). While (b) alone is better than\nnothing, it only solves some kinds of problems. 
Database backend\nprocesses are *not* free, and in general when I see users with \"too many\nconnections\" failures they are not because of too many concurrent\nqueries, but rather because of too many idle connections (I've seen up\nto 1800 on a server). Simply adding (b) for crappy applications would\nmake the problem worse, not better, because of the large number of\npending queries which the developer would fail to deal with, or monitor.\n\nSo while adding (b) to core alone would be very useful for some users,\nironically it's generally for the more advanced users which are not the\nones we're trying to help on this thread.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 09 Jul 2010 18:25:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "\n\nSent from my iPhone\n\nOn Jul 9, 2010, at 18:25, Josh Berkus <[email protected]> wrote:\n\n>\n> So while adding (b) to core alone would be very useful for some users,\n> ironically it's generally for the more advanced users which are not \n> the\n> ones we're trying to help on this thread.\n\nIt would seem from evidence presented on this thread that the more \nappropriate conversation would maybe be with package maintainers, to \nperhaps get them to include a connection pool or provide a package \nthat comes with a pool preconfigured and installed, along with \nimproving existing documentation so that it encourages the use of a \npool as a first class installation choice since it seems to be \nsomething of a first class problem for a lot of novice users.\n\nJust to give some background on my perspective - my prior familiarity \nwith a connection pool was entirely on the client side, where I've \nbeen using them for years go keep resource consumption down on the \nclient.. But it never occurred to me to consider one on the other end \nof those connections, despite the fact that I usually have a cluster \nof app hosts all talking to the same db. I assumed low connection \ncount was desirable, but not mandatory, since surely the db server \nlimited its own resource consumption, much the way a well written \nclient app will. I basically assumed that the postgres devs used the \nsame logic I did when I pooled my connections at the client side in \norder to minimize resource consumption there. I've got no truck with \nthe reasons presented against doing so, since they make perfectly good \nsense to me.\n\nHowever, I suspect there are lots of engineers like myself - folks \nworking without the benefit of a dedicated dba or a dba who is new to \nthe postgres platform - who make naive assumptions that aren't \nimmediately or obviously corrected by the docs (I may be sticking my \nfoot in my mouth here. I haven't read the standard docs in a very long \ntime). With this issue in particular, the fix is fairly trivial and \nbrings other benefits as well. But it sucks to discover it only after \nyou've started to get errors on a running app, no matter how easy the \nfix.\n\nSo probably this is really only a bug in communication and can be \nfixed there. That's great. Easier to fix bugs are hard to find. I have \nyet to contribute to postgres development, so I guess, if no one \nobjects, I'll see what I can do about improving the documentation of \nthis issue, both in the official docs and just making sure it gets \nbetter billing in other sources of postgres documentation. 
But you'll \nhave to bear with me, as I do have a more-than-full-time other job, \nand no experience with the pg developer community other than a couple \nof weeks on the mailing lists. But I do like to contribute to projects \nI use. It always winds up making me a more proficient user.\n\n(for the record, if I wasn't limited to my phone at the moment I would \nactually check the state of existing documentation before sending \nthis, so if I'm talking out of my ass on the lack of documentation, \nplease go easy on me. I mean no offense)\n\n--sam\n\n\n>\n", "msg_date": "Fri, 9 Jul 2010 20:29:59 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On 10/07/2010 9:25 AM, Josh Berkus wrote:\n>\n>> It *is* the last place you want to put it, but putting it there can\n>> be much better than not putting it *anywhere*, which is what we've\n>> often seen.\n>\n> Well, what you proposed is an admission control mechanism, which is\n> *different* from a connection pool, although the two overlap. A\n> connection pool solves 4 problems when it's working:\n>\n> a) limiting the number of database server processes\n> b) limiting the number of active concurrent queries\n> c) reducing response times for allocating a new connection\n> d) allowing management of connection routes to the database\n> (redirection, failover, etc.)\n\nI agree with you: for most Pg users (a) is really, really important. As \nyou know, in PostgreSQL each connection maintains not only general \nconnection state (GUC settings, etc) and if in a transaction, \ntransaction state, but also a query executor (full backend). That gets \nnasty not only in memory use, but in impact on active query performance, \nas all those query executors have to participate in global signalling \nfor lock management etc.\n\nSo an in-server pool that solved (b) but not (a) would IMO not be \nparticularly useful for the majority of users.\n\nThat said, I don't think it follows that (a) cannot be solved in-core. \nHow much architectural change would be required to do it efficiently \nenough, though...\n\n--\nCraig Ringer\n", "msg_date": "Sat, 10 Jul 2010 11:33:49 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Fri, Jul 9, 2010 at 11:33 PM, Craig Ringer\n<[email protected]> wrote:\n> On 10/07/2010 9:25 AM, Josh Berkus wrote:\n>>\n>>> It *is* the last place you want to put it, but putting it there can\n>>> be much better than not putting it *anywhere*, which is what we've\n>>> often seen.\n>>\n>> Well, what you proposed is an admission control mechanism, which is\n>> *different* from a connection pool, although the two overlap.  A\n>> connection pool solves 4 problems when it's working:\n>>\n>> a) limiting the number of database server processes\n>> b) limiting the number of active concurrent queries\n>> c) reducing response times for allocating a new connection\n>> d) allowing management of connection routes to the database\n>> (redirection, failover, etc.)\n>\n> I agree with you: for most Pg users (a) is really, really important. As you\n> know, in PostgreSQL each connection maintains not only general connection\n> state (GUC settings, etc) and if in a transaction, transaction state, but\n> also a query executor (full backend). 
That gets nasty not only in memory\n> use, but in impact on active query performance, as all those query executors\n> have to participate in global signalling for lock management etc.\n>\n> So an in-server pool that solved (b) but not (a) would IMO not be\n> particularly useful for the majority of users.\n>\n> That said, I don't think it follows that (a) cannot be solved in-core. How\n> much architectural change would be required to do it efficiently enough,\n> though...\n\nRight, let's not confuse Kevin's argument that we should have\nconnection pooling in core with advocacy for any particular patch or\nfeature suggestion that he may have offered on some other thread. A\nvery simple in-core connection pooler might look something like this:\nwhen a session terminates, the backend doesn't exit. Instead, it\nwaits for the postmaster to reassign it to a new connection, which the\npostmaster does in preference to starting new backends when possible.\nBut if a backend doesn't get assigned a new connection within a\ncertain period of time, then it goes ahead and exits anyway.\n\nYou might argue that this is not really a connection pooler at all\nbecause there's no admission control, but the point is you're avoiding\nthe overhead of creating and destroying backends unnecessarily. Of\ncourse, I'm also relying on the unsubstantiated assumption that it's\npossible to pass a socket connection between processes.\n\nAnother approach to the performance problem is to try to find ways of\nreducing the overhead associated with having a large number of\nbackends in the system. That's not a connection pooler either, but it\nmight reduce the need for one.\n\nStill another approach is admission control based on transactions,\nbackends, queries, memory usage, I/O, or what have you.\n\nNone of these things are mutually exclusive.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 9 Jul 2010 23:47:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On 2010-07-10 00:59, Greg Smith wrote:\n> Matthew Wakeling wrote:\n> > If you have an external pool solution, you can put it somewhere\n> > else - maybe on multiple somewhere elses.\n>\n> This is the key point to observe: if you're at the point where you\n> have so many connections that you need a pool, the last place you\n> want to put that is on the overloaded database server itself.\n> Therefore, it must be an external piece of software to be effective,\n> rather than being part of the server itself. Database servers are\n> relatively expensive computing hardware due to size/quantity/quality\n> of disks required. You can throw a pooler (or poolers) on any cheap\n> 1U server. This is why a built-in pooler, while interesting, is not\n> particularly functional for how people normally scale up real-world\n> deployments.\n\nThat may be totally correct for the 10% of the userbase\nthat are in a squeezed situation, but for the 90% that isn't (or isn't aware\nof being there), the build-in would be a direct benefit. 
For the 20%\nliving near the \"edge\" it may be the difference between \"just working\" and\nextra hassle.\n\nI think it is a fair assumption that the majority of PG's users solves\nthe problems without an connection pooler, and the question\nis if it is beneficial to let them scale better without doing anything?\n\nI have also provided a case where Kevin proposal \"might\" be a\nbenefit but a connection pooler cannot solve it:\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-06/msg01438.php\n(at least as I see it, but I'm fully aware that there is stuff I dont \nknow of)\n\nI dont think a build-in connection-poller (or similiar) would in any\nway limit the actions and abillities of an external one?\n\n* Both numbers wildly guessed..\n-- \nJesper\n\n\n\n\n\n\n\n\n\nOn 2010-07-10 00:59, Greg Smith wrote:\n> Matthew Wakeling wrote:\n>> If you have an external pool solution, you can put it somewhere\n>> else - maybe on multiple somewhere elses.\n> \n> This is the key point to observe:  if you're at the point where you\n> have so many connections that you need a pool, the last place you\n> want to put that is on the overloaded database server itself.\n> Therefore, it must be an external piece of software to be\neffective,\n> rather than being part of the server itself.  Database servers are\n> relatively expensive computing hardware due to\nsize/quantity/quality\n> of disks required.  You can throw a pooler (or poolers) on any\ncheap\n> 1U server.  This is why a built-in pooler, while interesting, is\nnot\n> particularly functional for how people normally scale up real-world\n> deployments.\n\nThat may be totally correct for the 10% of the userbase \nthat are in a squeezed situation, but for the 90% that isn't (or isn't\naware\nof being there), the build-in would be a direct benefit. For the 20% \nliving near the \"edge\" it may be the difference between \"just working\"\nand\nextra hassle. \n\nI think it is a fair assumption that the majority of PG's users solves \nthe problems without an connection pooler, and the question\nis if it is beneficial to let them scale better without doing anything?\n\n\nI have also provided a case where Kevin proposal \"might\" be a \nbenefit but a connection pooler cannot solve it:\n \nhttp://archives.postgresql.org/pgsql-hackers/2010-06/msg01438.php\n(at least as I see it, but I'm fully aware that there is stuff I dont\nknow of) \n\nI dont think a build-in connection-poller (or similiar) would in any \nway limit the actions and abillities of an external one? \n\n* Both numbers wildly guessed.. \n-- \nJesper", "msg_date": "Sat, 10 Jul 2010 08:33:32 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Jesper Krogh wrote:\n> I dont think a build-in connection-poller (or similiar) would in any\n> way limit the actions and abillities of an external one?\n\nTwo problems to recognize. First is that building something in has the \npotential to significantly limit use and therefore advancement of work \non external pools, because of the \"let's use the built in one instead of \ninstalling something extra\" mentality. I'd rather have a great external \nproject (which is what we have with pgBouncer) than a mediocre built-in \none that becomes the preferred way just by nature of being in the core. 
\nIf work on a core pooler was started right now, it would be years before \nthat reached feature/performance parity, and during that time its \nexistence would be a net loss compared to the current status quo for \nmany who used it.\n\nThe second problem is the limited amount of resources to work on \nimprovements to PostgreSQL. If you want to improve the reach of \nPostgreSQL, I consider projects like materialized views and easy \nbuilt-in partitioning to be orders of magnitude more useful things to \nwork on than the marginal benefit of merging the features of the \nexternal pool software inside the database. I consider the whole topic \na bit of a distraction compared to instead working on *any* of the \nhighly rated ideas at http://postgresql.uservoice.com/forums/21853-general\n\nAs a random technical note, I would recommend that anyone who is \nthinking about a pooler in core take a look at how pgBouncer uses \nlibevent to respond to requests, a design model inspired by that of \nmemcached. I haven't looked at it deeply yet, but my gut guess is that \nthis proven successful model would be hard to graft on top of the \nexisting PostgreSQL process design, and doing anything but that is \nunlikely to perform as well.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 10 Jul 2010 13:30:21 +0100", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "\n> Right, let's not confuse Kevin's argument that we should have\n> connection pooling in core with advocacy for any particular patch or\n> feature suggestion that he may have offered on some other thread. A\n> very simple in-core connection pooler might look something like this:\n> when a session terminates, the backend doesn't exit. Instead, it\n> waits for the postmaster to reassign it to a new connection, which the\n> postmaster does in preference to starting new backends when possible.\n> But if a backend doesn't get assigned a new connection within a\n> certain period of time, then it goes ahead and exits anyway.\n\nThis would, in my opinion, be an excellent option for PostgreSQL and \nwould save a LOT of newbie pain. Going back to my list, it would help \nwith both problems (a) and (c). It wouldn't be as good as pgbouncer, \nbut it would be \"good enough\" for a lot of users.\n\nHOWEVER, there is the issue that such a mechanism isn't \"free\". There \nare issue with sharing backends around GUCs, user configuration, \nsecurity, and prepared plans -- all issues which presently cause people \ndifficulty with pgbouncer. I think the costs are worth it, but we'd \nhave to make some attempt to tackle those issues as well. And, of \ncourse, we'd need to let DBAs turn the pooling off.\n\nI'd envision parameters:\n\npool_connections true/false\npool_connection_timeout 60s\n\n> I'm also relying on the unsubstantiated assumption that it's\n> possible to pass a socket connection between processes.\n\nDoesn't pgpool do this?\n\n> Still another approach is admission control based on transactions,\n> backends, queries, memory usage, I/O, or what have you.\n\nThat's a different problem, and on its own doesn't help newbies. 
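(To put the pool_connections / pool_connection_timeout idea a few paragraphs up into postgresql.conf terms, a sketch might look like the lines below. None of the pool_* settings exist in any PostgreSQL release -- the names, types and default are purely the proposal above -- and max_connections is shown only to make the relationship clear.)

max_connections = 300              # hard cap on backends, exactly as today
pool_connections = on              # proposed: idle backends wait to be reassigned
pool_connection_timeout = 60s      # proposed: an unassigned backend exits after this long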
It's \ncomplimetary to pooling, though, so would be nice to have.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Sat, 10 Jul 2010 11:42:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> I'm also relying on the unsubstantiated assumption that it's\n>> possible to pass a socket connection between processes.\n\n> Doesn't pgpool do this?\n\nNo, and in fact that's exactly why the proposed implementation isn't\never going to be in core: it's not possible to do it portably.\n(And no, I'm not interested in hearing about how you can do it on\nplatforms X, Y, and/or Z.)\n\nI agree with the comments to the effect that this is really a packaging\nand documentation problem. There is no need for us to re-invent the\nexisting solutions, but there is a need for making sure that they are\nreadily available and people know when to use them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jul 2010 14:51:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning. " }, { "msg_contents": "\n> Two problems to recognize. First is that building something in has the \n> potential to significantly limit use and therefore advancement of work \n> on external pools, because of the \"let's use the built in one instead of \n> installing something extra\" mentality. I'd rather have a great external \n> project (which is what we have with pgBouncer) than a mediocre built-in \n> one that becomes the preferred way just by nature of being in the core.\n\nI would prefer having supplier A build a great product that seamlessly \ninterfaces with supplier B's great product, rather than having supplier M$ \nbuy A, develop a half-working brain-dead version of B into A and market it \nas the new hot stuff, sinking B in the process. Anyway, orthogonal feature \nsets (like database and pooler) implemented in separate applications fit \nthe open source development model quite well I think. Merge everything in, \nyou get PHP.\n", "msg_date": "Mon, 12 Jul 2010 00:47:05 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Sat, 10 Jul 2010, Tom Lane wrote:\n>> Doesn't pgpool do this?\n>\n> No, and in fact that's exactly why the proposed implementation isn't\n> ever going to be in core: it's not possible to do it portably.\n\nI'm surprised. Doesn't apache httpd do this? Does it have to do a whole \nload of non-portable stuff? It seems to work on a whole load of platforms.\n\nMatthew\n\n-- \n I would like to think that in this day and age people would know better than\n to open executables in an e-mail. I'd also like to be able to flap my arms\n and fly to the moon. -- Tim Mullen\n", "msg_date": "Mon, 12 Jul 2010 10:45:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On 12/07/10 17:45, Matthew Wakeling wrote:\n> \n> I'm surprised. Doesn't apache httpd do this? Does it have to do a whole\n> load of non-portable stuff? It seems to work on a whole load of platforms.\n\nA lot of what Apache HTTPd does is handled via the Apache Portable\nRuntime (APR). 
It contains a lot of per-platform handlers for various\nfunctionality.\n\nhttp://apr.apache.org/docs/apr/1.4/modules.html\n\nI don't know if the socket passing is provided as part of APR or is part\nof Apache HTTPd its self, but I wouldn't be at all surprised if it was\nin APR.\n\nPersonally I'm now swayed by arguments presented here that trying to\npush pooling into core isn't really desirable, and that better\npackaging/bundling of existing solutions would be better.\n\nPerhaps documenting the pluses/minuses of the current pooling options\nand providing clear recommendations on which to use for different use\ncases would help, since half the trouble is users not knowing they need\na pool or being confused as to which to select.\n\nThis discussion reminds me a bit of Hibernate's built-in client-side\nconnection pool. It has one, but it's a unloved stepchild that even the\nHibernate devs suggest should be avoided in favour of a couple of\nexternal 3rd party options.\n\nA built-in pool seems like a great idea, but there are multiple existing\nones because they solve different problems in different ways. Unless a\nbuilt-in one could solve ALL those needs, or be so vastly simpler (due\nto code re-use, easier configuration, etc) that it's worth building one\nthat won't fit everyone's needs, then it's best to stick to the existing\nexternal options.\n\nSo rather than asking \"should core have a connection pool\" perhaps\nwhat's needed is to ask \"what can an in-core pool do that an external\npool cannot do?\"\n\nAdmission control / resource limit features would be great to have in\ncore, and can't really be done fully in external modules ... but could\nbe designed in ways that would allow external poolers to add\nfunctionality on top. Josh Berkus has made some good points on why this\nisn't as easy as it looks, though:\n\n\nhttp://it.toolbox.com/blogs/database-soup/admission-control-and-its-discontents-39895\n\n--\nCraig Ringer\n", "msg_date": "Mon, 12 Jul 2010 18:58:44 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Mon, 2010-07-12 at 18:58 +0800, Craig Ringer wrote:\n> On 12/07/10 17:45, Matthew Wakeling wrote:\n> > \n> > I'm surprised. Doesn't apache httpd do this? Does it have to do a whole\n> > load of non-portable stuff? It seems to work on a whole load of platforms.\n> \n> A lot of what Apache HTTPd does is handled via the Apache Portable\n> Runtime (APR). It contains a lot of per-platform handlers for various\n> functionality.\n> \n> http://apr.apache.org/docs/apr/1.4/modules.html\n> \n> I don't know if the socket passing is provided as part of APR or is part\n> of Apache HTTPd its self, but I wouldn't be at all surprised if it was\n> in APR.\n> \n> Personally I'm now swayed by arguments presented here that trying to\n> push pooling into core isn't really desirable, and that better\n> packaging/bundling of existing solutions would be better.\n\n\"better packaging/bundling of existing solutions\" is good in it's own\nright,weather there will eventually be some support for pooling in core\nor not.\n\n> Perhaps documenting the pluses/minuses of the current pooling options\n> and providing clear recommendations on which to use for different use\n> cases would help, since half the trouble is users not knowing they need\n> a pool or being confused as to which to select.\n>\n> This discussion reminds me a bit of Hibernate's built-in client-side\n> connection pool. 
It has one, but it's a unloved stepchild that even the\n> Hibernate devs suggest should be avoided in favour of a couple of\n> external 3rd party options.\n\nYes, pooling _is_ often better handled as a (set of) separate options,\njust because of the reason that here one size does definitely not fit\nall;\n\nAnd efficient in-core pooler probably will look very much like pgbouncer\nrunning in a separate thread spawned by postmaster anyway.\n\nLet's hope there will be some support in core for having user defined\nhelper processes soon(ish), so tweaking pgbouncer to run as one will be\nreasonably easy :)\n\n> A built-in pool seems like a great idea, but there are multiple existing\n> ones because they solve different problems in different ways. Unless a\n> built-in one could solve ALL those needs, or be so vastly simpler (due\n> to code re-use, easier configuration, etc) that it's worth building one\n> that won't fit everyone's needs, then it's best to stick to the existing\n> external options.\n> \n> So rather than asking \"should core have a connection pool\" perhaps\n> what's needed is to ask \"what can an in-core pool do that an external\n> pool cannot do?\"\n\nProbably nothing. OTOH there are some things that an external pool can\ndo that a built-in one can't, like running on a separate host and\npooling more than 32000 client connections there.\n\nCascaded pooling seems also impossible with built-in pooling\n\n> Admission control / resource limit features would be great to have in\n> core, and can't really be done fully in external modules ... but could\n> be designed in ways that would allow external poolers to add\n> functionality on top. Josh Berkus has made some good points on why this\n> isn't as easy as it looks, though:\n> \n> \n> http://it.toolbox.com/blogs/database-soup/admission-control-and-its-discontents-39895\n> \n> --\n> Craig Ringer\n> \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Mon, 12 Jul 2010 15:22:18 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Craig Ringer <[email protected]> wrote:\n \n> So rather than asking \"should core have a connection pool\" perhaps\n> what's needed is to ask \"what can an in-core pool do that an\n> external pool cannot do?\"\n \n(1) It can prevent the most pessimal performance problems resulting\nfrom lack of an external connection pool (or a badly configured one)\nby setting a single GUC. Configuration tools could suggest a good\nvalue during initial setup.\n \n(2) It can be used without installing and configuring a more\nsophisticated and complex product.\n \n(3) It might reduce latency because it avoids having to receive,\nparse, and resend data in both directions -- eliminating one \"hop\". \nI know the performance benefit would usually accrue to the external\nconnection pooler, but there might be some circumstances where a\nbuilt-in pool could win.\n \n(4) It's one more checkbox which can be ticked off on some RFPs.\n \nThat said, I fully agree that if we can include good documentation\non the external poolers and we can get packagers to include poolers\nin their distribution, that gets us a much bigger benefit. A\nbuilt-in solution would only be worthwhile if it was simple enough\nand lightweight enough not to be a burden on execution time or\nmaintenance. 
Maybe that's too big an if.\n \n-Kevin\n", "msg_date": "Mon, 12 Jul 2010 07:41:36 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance\n\t tuning." }, { "msg_contents": "Craig Ringer wrote:\n> It'll need to separate \"running queries\" from \"running processes\", or\n> start threading backends, so that one way or the other a single query\n> can benefit from the capabilities of multiple CPUs. The same separation,\n> or a move to async I/O, might be needed to get one query to concurrently\n> read multiple partitions of a table, or otherwise get maximum benefit\n> from high-capacity I/O subsystems when running just a few big, expensive\n> queries.\n> \n> Otherwise I'm wondering if PostgreSQL will begin really suffering in\n> performance on workloads where queries are big and expensive but there\n> are relatively few of them running at a time.\n\nAgreed. We certainly are going to have to go in that direction someday.\nWe have TODO items for these.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Mon, 12 Jul 2010 12:04:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> I agree with the comments to the effect that this is really a packaging\n> and documentation problem. There is no need for us to re-invent the\n> existing solutions, but there is a need for making sure that they are\n> readily available and people know when to use them.\n\nOn this topic, I think we're getting back to the idea of having non-core\ndaemon helpers that should get \"supervised\" the way postmaster already\ndoes with backends wrt starting and stoping them at the right time.\n\nSo a supervisor daemon with a supervisor API that would have to support\nautovacuum as a use case, then things like pgagent, PGQ and pgbouncer,\nwould be very welcome.\n\nWhat about starting a new thread about that? Or you already know you\nwon't want to push the extensibility of PostgreSQL there?\n\nRegards,\n-- \ndim\n", "msg_date": "Tue, 13 Jul 2010 16:42:23 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Tue, Jul 13, 2010 at 16:42, Dimitri Fontaine <[email protected]> wrote:\n> Tom Lane <[email protected]> writes:\n>> I agree with the comments to the effect that this is really a packaging\n>> and documentation problem.  There is no need for us to re-invent the\n>> existing solutions, but there is a need for making sure that they are\n>> readily available and people know when to use them.\n>\n> On this topic, I think we're getting back to the idea of having non-core\n> daemon helpers that should get \"supervised\" the way postmaster already\n> does with backends wrt starting and stoping them at the right time.\n>\n> So a supervisor daemon with a supervisor API that would have to support\n> autovacuum as a use case, then things like pgagent, PGQ and pgbouncer,\n> would be very welcome.\n>\n> What about starting a new thread about that? 
Or you already know you\n> won't want to push the extensibility of PostgreSQL there?\n\n+1 on this idea in general, if we can think up a good API - this seems\nvery useful to me, and you have some good examples there of cases\nwhere it'd definitely be a help.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Tue, 13 Jul 2010 16:44:13 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, Jul 8, 2010 at 11:48 PM, Hannu Krosing <[email protected]> wrote:\n> One example where you need a separate connection pool is pooling really\n> large number of connections, which you may want to do on another host\n> than the database itself is running.\n\nDefinitely. Often it's best placed on the individual webservers that\nare making requests, each running its own pool.\n", "msg_date": "Wed, 14 Jul 2010 00:44:53 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Scott Marlowe <[email protected]> wrote:\n> Hannu Krosing <[email protected]> wrote:\n>> One example where you need a separate connection pool is pooling\n>> really large number of connections, which you may want to do on\n>> another host than the database itself is running.\n> \n> Definitely. Often it's best placed on the individual webservers\n> that are making requests, each running its own pool.\n \nEach running its own pool? You've just made a case for an\nadmissions policy based on active database transactions or active\nqueries (or both) on the server having a benefit when used with this\npooling arrangement. This collection of pools can't know when the\nCPUs have enough to keep them busy and adding more will degrade\nperformance.\n \n-Kevin\n", "msg_date": "Wed, 14 Jul 2010 08:58:23 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "On Wed, 2010-07-14 at 08:58 -0500, Kevin Grittner wrote:\n> Scott Marlowe <[email protected]> wrote:\n> > Hannu Krosing <[email protected]> wrote:\n> >> One example where you need a separate connection pool is pooling\n> >> really large number of connections, which you may want to do on\n> >> another host than the database itself is running.\n> > \n> > Definitely. Often it's best placed on the individual webservers\n> > that are making requests, each running its own pool.\n> \n> Each running its own pool? You've just made a case for an\n> admissions policy based on active database transactions or active\n> queries (or both) on the server having a benefit when used with this\n> pooling arrangement. This collection of pools can't know when the\n> CPUs have enough to keep them busy and adding more will degrade\n> performance.\n\nI guess this setup is for OLTP load (read \"lots of short transactions\nwith low timeout limits\"), where you can just open 2-5 connections per\nCPU for mostly-in-memory database, maybe a little more when disk\naccesses are involved. 
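(To make that rule of thumb concrete, assuming pgbouncer running beside each webserver: a config sized along those lines might look like the sketch below. The host name, file path and numbers are illustrative guesses for a small multi-core database server, not recommendations; only the option names are real pgbouncer settings.)

[databases]
appdb = host=db.example.com port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; hand the server connection back at commit/rollback
max_client_conn = 1000       ; client slots are cheap
default_pool_size = 16       ; actual backends per database/user pair, a few per core
reserve_pool_size = 4        ; small headroom for bursts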
If you have more, then they just wait a few\nmilliseconds, if you have less, you don't have anything else to run\nanyway.\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Wed, 14 Jul 2010 17:54:51 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help in performance tuning." }, { "msg_contents": "Hi,\n\nSorry, if posting here was not proper instead of starting new thread\n(I am really not sure if its bad thing to do)\n\nI would like to share my recent experience on implementation of\nclient side pooling using pgbouncer. By client side i mean that\nthe the pgbouncer process in not on same machine as postgresql server.\nIn first trial pgbouncer and postgresql were in same machine & phbouncer\nwas connecting to postgresql using unix domain sockets. But i shifted it\nlaters owing to high CPU usage > 50%. ( using top)\nNow i have shifted pgbouncer into a virtual machine (openvz container)\nin the application server hardware and all my applications on other virtual\nmachines\n(web applications) connect to pgbouncer on this virtual machine.\n\nI tested the setup with pgbench in two scenarios\n\n1. connecting to DB server directly\n2. connecting to DB via pgbouncer\n\nthe no of clients was 10 ( -c 10) carrying out 10000 transactions each (-t\n10000) .\npgbench db was initilised with scaling factor -s 100.\n\nsince client count was less there was no queuing of requests in pgbouncer\ni would prefer to say it was in 'passthrough' mode.\n\nthe result was that\n\n1. direct ~ 2000 tps\n2. via pgbouncer ~ 1200 tps\n\n----------------------------------------------------------------------------------------------------------------------------------------------\nExperience on deploying to production environment with real world load/usage\npattern\n----------------------------------------------------------------------------------------------------------------------------------------------\n\nPgbouncer was put in same machine as postgresql connecting via unix domain\nto server and tcp sockets with clients.\n\n1. There was drastic reduction in CPU loads from 30 to 10 ldavg\n2. There were no clients waiting, pool size was 150 and number of active\n connections was 100-120.\n3. Application performance was worse (inspite of 0 clients waiting )\n\n\nI am still waiting to see what is the effect of shifting out pgbounce from\ndbserver\nto appserver, but with pgbench results i am not very hopeful. I am curious\nwhy\ninspite of 0 clients waiting pgbounce introduces a drop in tps.\n\nWarm Regds\nRajesh Kumar Mallah.\nCTO - tradeindia.com.\n\n\n\nKeywords: pgbouncer performance\n\n\n\n\n\n\n\n\n\n\nOn Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Craig Ringer <[email protected]> wrote:\n>\n> > So rather than asking \"should core have a connection pool\" perhaps\n> > what's needed is to ask \"what can an in-core pool do that an\n> > external pool cannot do?\"\n>\n> (1) It can prevent the most pessimal performance problems resulting\n> from lack of an external connection pool (or a badly configured one)\n> by setting a single GUC. 
Configuration tools could suggest a good\n> value during initial setup.\n>\n> (2) It can be used without installing and configuring a more\n> sophisticated and complex product.\n>\n> (3) It might reduce latency because it avoids having to receive,\n> parse, and resend data in both directions -- eliminating one \"hop\".\n> I know the performance benefit would usually accrue to the external\n> connection pooler, but there might be some circumstances where a\n> built-in pool could win.\n>\n> (4) It's one more checkbox which can be ticked off on some RFPs.\n>\n> That said, I fully agree that if we can include good documentation\n> on the external poolers and we can get packagers to include poolers\n> in their distribution, that gets us a much bigger benefit. A\n> built-in solution would only be worthwhile if it was simple enough\n> and lightweight enough not to be a burden on execution time or\n> maintenance. Maybe that's too big an if.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi,Sorry, if posting here was not proper instead of starting new thread(I am really not sure if its bad thing to do)I would like to share my recent experience on implementation of  client side pooling using  pgbouncer. By client side i mean that \nthe the pgbouncer process in not on same machine as postgresql server.In first trial pgbouncer and postgresql were in same machine & phbouncerwas connecting to postgresql using unix domain sockets. But i shifted it\nlaters owing to high CPU usage > 50%. ( using top)Now i have shifted pgbouncer into a virtual machine (openvz container)in the application server hardware and all my applications on other virtual machines(web applications) connect to pgbouncer on this virtual machine.\nI tested the setup with pgbench in two scenarios 1. connecting to DB server directly2. connecting to DB via pgbouncerthe no of clients was 10 ( -c 10)  carrying out 10000 transactions each (-t 10000) .\npgbench db was initilised with scaling  factor -s 100. since client count was less there was no queuing of requests in pgbounceri would prefer to say  it was in 'passthrough' mode. the result was that \n1. direct ~ 2000 tps2. via pgbouncer ~ 1200 tps----------------------------------------------------------------------------------------------------------------------------------------------Experience on deploying to production environment with real world load/usage pattern\n----------------------------------------------------------------------------------------------------------------------------------------------Pgbouncer was put in same machine as postgresql connecting via unix domain\nto server and tcp sockets with clients.1. There was drastic reduction in CPU loads  from  30 to 10 ldavg2. There were no clients waiting, pool size was 150 and number of active    connections was 100-120.\n3. Application performance was worse (inspite of 0 clients waiting ) I am still waiting to see what is the effect of shifting out pgbounce from dbserverto appserver, but with pgbench results i am not very hopeful. 
I am curious why\ninspite of 0 clients waiting pgbounce introduces a drop in tps.Warm RegdsRajesh Kumar Mallah.CTO - tradeindia.com.\n\nKeywords: pgbouncer performance On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner <[email protected]> wrote:\nCraig Ringer <[email protected]> wrote:\n\n> So rather than asking \"should core have a connection pool\" perhaps\n> what's needed is to ask \"what can an in-core pool do that an\n> external pool cannot do?\"\n\n(1)  It can prevent the most pessimal performance problems resulting\nfrom lack of an external connection pool (or a badly configured one)\nby setting a single GUC.  Configuration tools could suggest a good\nvalue during initial setup.\n\n(2)  It can be used without installing and configuring a more\nsophisticated and complex product.\n\n(3)  It might reduce latency because it avoids having to receive,\nparse, and resend data in both directions -- eliminating one \"hop\".\nI know the performance benefit would usually accrue to the external\nconnection pooler, but there might be some circumstances where a\nbuilt-in pool could win.\n\n(4)  It's one more checkbox which can be ticked off on some RFPs.\n\nThat said, I fully agree that if we can include good documentation\non the external poolers and we can get packagers to include poolers\nin their distribution, that gets us a much bigger benefit.  A\nbuilt-in solution would only be worthwhile if it was simple enough\nand lightweight enough not to be a burden on execution time or\nmaintenance.  Maybe that's too big an if.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 18 Jul 2010 21:48:05 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "note: my postgresql server & pgbouncer were not in virtualised environment\nin the first setup. Only application server has many openvz containers.\n\nnote: my postgresql server & pgbouncer were not in virtualised environmentin the first setup. Only application server has many openvz containers.", "msg_date": "Sun, 18 Jul 2010 21:56:35 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Sun, 2010-07-18 at 21:48 +0530, Rajesh Kumar Mallah wrote:\n> Hi,\n> \n> Sorry, if posting here was not proper instead of starting new thread\n> (I am really not sure if its bad thing to do)\n> \n> I would like to share my recent experience on implementation of \n> client side pooling using pgbouncer. By client side i mean that \n> the the pgbouncer process in not on same machine as postgresql server.\n> In first trial pgbouncer and postgresql were in same machine &\n> phbouncer\n> was connecting to postgresql using unix domain sockets. But i shifted\n> it\n> laters owing to high CPU usage > 50%. ( using top)\n> Now i have shifted pgbouncer into a virtual machine (openvz container)\n> in the application server hardware \n\nWhy in VM (openvz container) ?\n\nDid you also try it in the same OS as your appserver ?\n\nPerhaps even connecting from appserver via unix seckets ?\n\n> and all my applications on other virtual machines\n> (web applications) connect to pgbouncer on this virtual machine.\n> \n> I tested the setup with pgbench in two scenarios \n> \n> 1. 
connecting to DB server directly\n> 2. connecting to DB via pgbouncer\n> \n> the no of clients was 10 ( -c 10) carrying out 10000 transactions\n> each (-t 10000) .\n> pgbench db was initilised with scaling factor -s 100. \n> \n> since client count was less there was no queuing of requests in\n> pgbouncer\n> i would prefer to say it was in 'passthrough' mode. \n> \n> the result was that \n> \n> 1. direct ~ 2000 tps\n> 2. via pgbouncer ~ 1200 tps\n\nAre you sure you are not measuring how much sunning pgbouncer slows down\npgbench directly, by competing for CPU resources and not by adding\nlatency to requests ?\n\n\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Experience on deploying to production environment with real world\n> load/usage pattern\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> \n> Pgbouncer was put in same machine as postgresql connecting via unix\n> domain\n> to server and tcp sockets with clients.\n> \n> 1. There was drastic reduction in CPU loads from 30 to 10 ldavg\n> 2. There were no clients waiting, pool size was 150 and number of\n> active\n> connections was 100-120.\n> 3. Application performance was worse (inspite of 0 clients waiting ) \n> \n> \n> I am still waiting to see what is the effect of shifting out pgbounce\n> from dbserver\n> to appserver, but with pgbench results i am not very hopeful. I am\n> curious why inspite of 0 clients waiting pgbounce introduces a drop in\n> tps.\n\nIf you have less clients than pgbouncer connections, you can't have any\nclients waiting in pgbouncer, as each of them is allocated it's own\nconnection right away.\n\nWhat you were measuring was \n\n1. pgbench and pgbouncer competeing for the same CPU\n2. overhead from 2 hops to db (app-proxy-db) instead of 1 (app-db)\n\n> Warm Regds\n> Rajesh Kumar Mallah.\n> CTO - tradeindia.com.\n> \n> \n> \n> Keywords: pgbouncer performance\n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner\n> <[email protected]> wrote:\n> Craig Ringer <[email protected]> wrote:\n> \n> \n> > So rather than asking \"should core have a connection pool\"\n> perhaps\n> > what's needed is to ask \"what can an in-core pool do that an\n> > external pool cannot do?\"\n> \n> \n> (1) It can prevent the most pessimal performance problems\n> resulting\n> from lack of an external connection pool (or a badly\n> configured one)\n> by setting a single GUC. Configuration tools could suggest a\n> good\n> value during initial setup.\n> \n> (2) It can be used without installing and configuring a more\n> sophisticated and complex product.\n> \n> (3) It might reduce latency because it avoids having to\n> receive,\n> parse, and resend data in both directions -- eliminating one\n> \"hop\".\n> I know the performance benefit would usually accrue to the\n> external\n> connection pooler, but there might be some circumstances where\n> a\n> built-in pool could win.\n> \n> (4) It's one more checkbox which can be ticked off on some\n> RFPs.\n> \n> That said, I fully agree that if we can include good\n> documentation\n> on the external poolers and we can get packagers to include\n> poolers\n> in their distribution, that gets us a much bigger benefit. A\n> built-in solution would only be worthwhile if it was simple\n> enough\n> and lightweight enough not to be a burden on execution time or\n> maintenance. 
Maybe that's too big an if.\n> \n> -Kevin\n> \n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\n", "msg_date": "Sun, 18 Jul 2010 19:54:28 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Nice suggestion to try ,\nI will put pgbouncer on raw hardware and run pgbench from same hardware.\n\nregds\nrajesh kumar mallah.\n\n\n\n> Why in VM (openvz container) ?\n>\n> Did you also try it in the same OS as your appserver ?\n>\n> Perhaps even connecting from appserver via unix seckets ?\n>\n> > and all my applications on other virtual machines\n>\n>\n\nNice suggestion to try ,I will put pgbouncer on raw hardware and run pgbench from same hardware. regdsrajesh kumar mallah.\n\nWhy in VM (openvz container) ?\n\nDid you also try it in the same OS as your appserver ?\n\nPerhaps even connecting from appserver via unix seckets ?\n\n> and all my applications on other virtual machines", "msg_date": "Sun, 18 Jul 2010 22:39:13 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Rajesh Kumar Mallah wrote:\n> the no of clients was 10 ( -c 10) carrying out 10000 transactions \n> each (-t 10000) .\n> pgbench db was initilised with scaling factor -s 100.\n>\n> since client count was less there was no queuing of requests in pgbouncer\n> i would prefer to say it was in 'passthrough' mode.\n\nOf course pgbouncer is going decrease performance in this situation. \nYou've added a whole layer to things that all traffic has to pass \nthrough, without a setup that gains any benefit from the connection \npooling. Try making the client count 1000 instead if you want a useful \ntest.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 18 Jul 2010 13:25:47 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "ok ,\nnow the question is , is it possible to dig out from from postgresql\ndatabase\nserver if connection pooling is needed ? In our case eg i have kept\nmax_connections = 300 if i reduce below 250 i get error \"max connection\nreached.....\"\non connecting to db directly, if i put pgbouncer i get less performance\n(even if no clients waiting)\n\nwithout pooling the dbserver CPU usage increases but performance of apps\nis also become good.\n\nRegds\nRajesh Kumar Mallah.\n\nOn Sun, Jul 18, 2010 at 10:55 PM, Greg Smith <[email protected]> wrote:\n\n> Rajesh Kumar Mallah wrote:\n>\n>> the no of clients was 10 ( -c 10) carrying out 10000 transactions each\n>> (-t 10000) .\n>> pgbench db was initilised with scaling factor -s 100.\n>>\n>> since client count was less there was no queuing of requests in pgbouncer\n>> i would prefer to say it was in 'passthrough' mode.\n>>\n>\n> Of course pgbouncer is going decrease performance in this situation.\n> You've added a whole layer to things that all traffic has to pass through,\n> without a setup that gains any benefit from the connection pooling. 
Try\n> making the client count 1000 instead if you want a useful test.\n>\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n\nok ,now the question is , is it possible to dig out from from postgresql databaseserver if connection pooling is needed ? In our case eg i have kept max_connections = 300  if i reduce below 250 i get error \"max connection reached.....\" \non connecting to db directly,  if i put pgbouncer i get less performance (even if no clients waiting)without pooling the dbserver CPU usage increases but performance of appsis also become good.Regds\nRajesh Kumar Mallah.On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith <[email protected]> wrote:\nRajesh Kumar Mallah wrote:\n\nthe no of clients was 10 ( -c 10)  carrying out 10000 transactions each (-t 10000) .\npgbench db was initilised with scaling  factor -s 100.\n\nsince client count was less there was no queuing of requests in pgbouncer\ni would prefer to say  it was in 'passthrough' mode.\n\n\nOf course pgbouncer is going decrease performance in this situation.  You've added a whole layer to things that all traffic has to pass through, without a setup that gains any benefit from the connection pooling.  Try making the client count 1000 instead if you want a useful test.\n\n\n-- \nGreg Smith  2ndQuadrant US  Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected]   www.2ndQuadrant.us", "msg_date": "Sun, 18 Jul 2010 23:17:58 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith <[email protected]> wrote:\n\n> Rajesh Kumar Mallah wrote:\n>\n>> the no of clients was 10 ( -c 10) carrying out 10000 transactions each\n>> (-t 10000) .\n>> pgbench db was initilised with scaling factor -s 100.\n>>\n>> since client count was less there was no queuing of requests in pgbouncer\n>> i would prefer to say it was in 'passthrough' mode.\n>>\n>\n> Of course pgbouncer is going decrease performance in this situation.\n> You've added a whole layer to things that all traffic has to pass through,\n> without a setup that gains any benefit from the connection pooling. Try\n> making the client count 1000 instead if you want a useful test.\n>\n\nDear Greg,\n\nmy max_client is 300 shall i test with client count 250 ?\nif so what should be the scaling factor while initializing\nthe pgbench db?\n\n\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n\nOn Sun, Jul 18, 2010 at 10:55 PM, Greg Smith <[email protected]> wrote:\nRajesh Kumar Mallah wrote:\n\nthe no of clients was 10 ( -c 10)  carrying out 10000 transactions each (-t 10000) .\npgbench db was initilised with scaling  factor -s 100.\n\nsince client count was less there was no queuing of requests in pgbouncer\ni would prefer to say  it was in 'passthrough' mode.\n\n\nOf course pgbouncer is going decrease performance in this situation.  You've added a whole layer to things that all traffic has to pass through, without a setup that gains any benefit from the connection pooling.  
Try making the client count 1000 instead if you want a useful test.\nDear Greg,my  max_client is 300 shall i test  with client count 250 ?if so what should be the scaling factor while initializingthe pgbench db?\n\n\n-- \nGreg Smith  2ndQuadrant US  Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected]   www.2ndQuadrant.us", "msg_date": "Sun, 18 Jul 2010 23:38:01 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Looks like ,\n\npgbench cannot be used for testing with pgbouncer if number of\npgbench clients exceeds pool_size + reserve_pool_size of pgbouncer.\npgbench keeps waiting doing nothing. I am using pgbench of postgresql 8.1.\nAre there changes to pgbench in this aspect ?\n\nregds\nRajesh Kumar Mallah.\n\nOn Sun, Jul 18, 2010 at 11:38 PM, Rajesh Kumar Mallah <\[email protected]> wrote:\n\n>\n>\n> On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith <[email protected]> wrote:\n>\n>> Rajesh Kumar Mallah wrote:\n>>\n>>> the no of clients was 10 ( -c 10) carrying out 10000 transactions each\n>>> (-t 10000) .\n>>> pgbench db was initilised with scaling factor -s 100.\n>>>\n>>> since client count was less there was no queuing of requests in pgbouncer\n>>> i would prefer to say it was in 'passthrough' mode.\n>>>\n>>\n>> Of course pgbouncer is going decrease performance in this situation.\n>> You've added a whole layer to things that all traffic has to pass through,\n>> without a setup that gains any benefit from the connection pooling. Try\n>> making the client count 1000 instead if you want a useful test.\n>>\n>\n> Dear Greg,\n>\n> my max_client is 300 shall i test with client count 250 ?\n> if so what should be the scaling factor while initializing\n> the pgbench db?\n>\n>\n>> --\n>> Greg Smith 2ndQuadrant US Baltimore, MD\n>> PostgreSQL Training, Services and Support\n>> [email protected] www.2ndQuadrant.us\n>>\n>>\n>\n\nLooks like , pgbench cannot be used for testing with pgbouncer if number ofpgbench clients exceeds pool_size + reserve_pool_size of pgbouncer.pgbench keeps waiting doing nothing. I am using pgbench  of postgresql 8.1. \nAre there changes to pgbench in this aspect ?regdsRajesh Kumar Mallah.On Sun, Jul 18, 2010 at 11:38 PM, Rajesh Kumar Mallah <[email protected]> wrote:\nOn Sun, Jul 18, 2010 at 10:55 PM, Greg Smith <[email protected]> wrote:\n\nRajesh Kumar Mallah wrote:\n\nthe no of clients was 10 ( -c 10)  carrying out 10000 transactions each (-t 10000) .\npgbench db was initilised with scaling  factor -s 100.\n\nsince client count was less there was no queuing of requests in pgbouncer\ni would prefer to say  it was in 'passthrough' mode.\n\n\nOf course pgbouncer is going decrease performance in this situation.  You've added a whole layer to things that all traffic has to pass through, without a setup that gains any benefit from the connection pooling.  Try making the client count 1000 instead if you want a useful test.\nDear Greg,my  max_client is 300 shall i test  with client count 250 ?if so what should be the scaling factor while initializingthe pgbench db?\n\n\n-- \nGreg Smith  2ndQuadrant US  Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected]   www.2ndQuadrant.us", "msg_date": "Mon, 19 Jul 2010 00:06:24 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." 
}, { "msg_contents": "\nOn Jul 9, 2010, at 8:33 PM, Craig Ringer wrote:\n\n> On 10/07/2010 9:25 AM, Josh Berkus wrote:\n>> \n>>> It *is* the last place you want to put it, but putting it there can\n>>> be much better than not putting it *anywhere*, which is what we've\n>>> often seen.\n>> \n>> Well, what you proposed is an admission control mechanism, which is\n>> *different* from a connection pool, although the two overlap. A\n>> connection pool solves 4 problems when it's working:\n>> \n>> a) limiting the number of database server processes\n>> b) limiting the number of active concurrent queries\n>> c) reducing response times for allocating a new connection\n>> d) allowing management of connection routes to the database\n>> (redirection, failover, etc.)\n> \n> I agree with you: for most Pg users (a) is really, really important. As \n> you know, in PostgreSQL each connection maintains not only general \n> connection state (GUC settings, etc) and if in a transaction, \n> transaction state, but also a query executor (full backend). That gets \n> nasty not only in memory use, but in impact on active query performance, \n> as all those query executors have to participate in global signalling \n> for lock management etc.\n> \n> So an in-server pool that solved (b) but not (a) would IMO not be \n> particularly useful for the majority of users.\n> \n> That said, I don't think it follows that (a) cannot be solved in-core. \n> How much architectural change would be required to do it efficiently \n> enough, though...\n> \n\na, b, and c can all be handled in core. But that would be a radical re-architecture to do it right. Postgres assumes that the client connection, authentication, and query processing all happen in one place in one process on one thread. Most server software built and designed today avoids that model in order to decouple its critical resources from the # of client connections. Most server software designed today tries to control its resources and not let the behavior of clients dictate resource usage.\n\nEven Apache HTTPD is undergoing a radical re-design so that it can handle more connections and more easily decouple connections from concurrent processing to keep up with competitors.\n\nI'm not saying that Postgres core should change -- again thats a radical re-architecture. But it should be recognized that it is not like most other server applications -- it can't control its resources very well and needs help to do so. From using a connection pool to manually setting work_mem differently for different clients or workloads, resource management is not what it does well. It does a LOT of things very very well, just not that.\n\n\n> --\n> Craig Ringer\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sun, 18 Jul 2010 12:00:11 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "> pgbench cannot be used for testing with pgbouncer if number of\n> pgbench clients exceeds pool_size + reserve_pool_size of pgbouncer.\n> pgbench keeps waiting doing nothing. I am using pgbench  of postgresql 8.1.\n> Are there changes to pgbench in this aspect ?\n\nPgbench won't start actual transaction until all connections to\nPostgreSQL (in this case pgbounser I guess) successfully\nestablished. 
IMO You sholud try other benchmark tools.\n\nBTW, I think you should use -C option with pgbench for this kind of\ntesting. -C establishes connection for each transaction, which is\npretty much similar to the real world application which do not use\nconnection pooling. You will be supprised how PostgreSQL connection\noverhead is large.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n", "msg_date": "Mon, 19 Jul 2010 11:16:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Thanks for the thought but it (-C) does not work .\n\n>\n>\n> BTW, I think you should use -C option with pgbench for this kind of\n> testing. -C establishes connection for each transaction, which is\n> pretty much similar to the real world application which do not use\n> connection pooling. You will be supprised how PostgreSQL connection\n> overhead is large.\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese: http://www.sraoss.co.jp\n>\n\n Thanks for the thought but it (-C) does not work .\n\nBTW, I think you should use -C option with pgbench for this kind of\ntesting. -C establishes connection for each transaction, which is\npretty much similar to the real world application which do not use\nconnection pooling. You will be supprised how PostgreSQL connection\noverhead is large.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp", "msg_date": "Mon, 19 Jul 2010 08:06:09 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "From: Rajesh Kumar Mallah <[email protected]>\nSubject: Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.\nDate: Mon, 19 Jul 2010 08:06:09 +0530\nMessage-ID: <[email protected]>\n\n>  Thanks for the thought but it (-C) does not work .\n\nStill you need:\n\npgbench's -c <= (pool_size + reserve_pool_size)\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n", "msg_date": "Mon, 19 Jul 2010 11:50:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n<[email protected]> wrote:\n> On 12/07/10 17:45, Matthew Wakeling wrote:\n>>\n>> I'm surprised. Doesn't apache httpd do this? Does it have to do a whole\n>> load of non-portable stuff? It seems to work on a whole load of platforms.\n>\n> A lot of what Apache HTTPd does is handled via the Apache Portable\n> Runtime (APR). It contains a lot of per-platform handlers for various\n> functionality.\n\nApache just has all of the worker processes call accept() on the\nsocket, and whichever one the OS hands it off to gets the job. The\nproblem is harder for us because a backend can't switch identities\nonce it's been assigned to a database. 
I haven't heard an adequate\nexplanation of why that couldn't be changed, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 22 Jul 2010 14:33:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n<[email protected]> wrote:\n> So rather than asking \"should core have a connection pool\" perhaps\n> what's needed is to ask \"what can an in-core pool do that an external\n> pool cannot do?\"\n\nAvoid sending every connection through an extra hop.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 22 Jul 2010 14:36:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote: \n> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> <[email protected]> wrote:\n> > So rather than asking \"should core have a connection pool\" perhaps\n> > what's needed is to ask \"what can an in-core pool do that an external\n> > pool cannot do?\"\n> \n> Avoid sending every connection through an extra hop.\n\nLet's extend this shall we:\n\nAvoid adding yet another network hop\nRemove of a point of failure\nReduction of administrative overhead\nIntegration into our core authentication mechanisms\nGreater flexibility in connection control\n\nAnd, having connection pooling in core does not eliminate the use of an\nexternal pool where it makes since.\n\nJD\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n\n", "msg_date": "Thu, 22 Jul 2010 12:15:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:\n> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> <[email protected]> wrote:\n> > So rather than asking \"should core have a connection pool\" perhaps\n> > what's needed is to ask \"what can an in-core pool do that an external\n> > pool cannot do?\"\n> \n> Avoid sending every connection through an extra hop.\n\nnot really. in-core != magically-in-right-backend-process\n\n\nthere will still be \"an extra hop\",only it will be local, between pooler\nand backend process.\n\nsimilar to what currently happens with pgbouncer when you deploy it on\nsame server and use unix sockets\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n> \n\n\n", "msg_date": "Thu, 22 Jul 2010 20:15:44 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, 2010-07-22 at 12:15 -0700, Joshua D. 
Drake wrote:\n> On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote: \n> > On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> > <[email protected]> wrote:\n> > > So rather than asking \"should core have a connection pool\" perhaps\n> > > what's needed is to ask \"what can an in-core pool do that an external\n> > > pool cannot do?\"\n> > \n> > Avoid sending every connection through an extra hop.\n> \n> Let's extend this shall we:\n> \n> Avoid adding yet another network hop\n\npostgreSQL is multi-process, so you either have a separate \"pooler\nprocess\" or need to put pooler functionality in postmaster, bothw ways\nyou still have a two-hop scenario for connect. you may be able to pass\nthe socket to child process and also keep it, but doing this for both\nclient and db sides seems really convoluted. \n\nOr is there a prortable way to pass sockets back and forth between\nparent and child processes ?\n\nIf so, then pgbouncer could use it as well.\n\n> Remove of a point of failure\n\nrather move the point of failure from external pooler to internal\npooler ;)\n\n> Reduction of administrative overhead\n\nPossibly. But once you start actually using it, you still need to\nconfigure and monitor it and do other administrator-y tasks.\n\n> Integration into our core authentication mechanisms\n\nTrue, although for example having SSL on client side connection will be\nso slow that it hides any performance gains from pooling, at least for\nshort-lived connections.\n\n> Greater flexibility in connection control\n\nYes, poolers can be much more flexible than default postgresql. See for\nexample pgbouncers PAUSE , RECONFIGURE and RESUME commands \n\n> And, having connection pooling in core does not eliminate the use of an\n> external pool where it makes since.\n\nProbably the easiest way to achieve \"pooling in core\" would be adding an\noption to start pgbouncer under postmaster control.\n\nYou probably can't get much leaner than pgbouncer.\n\n> -- \n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n> \n> \n> \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Thu, 22 Jul 2010 20:56:06 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, Jul 22, 2010 at 02:33:43PM -0400, Robert Haas wrote:\n> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> <[email protected]> wrote:\n> > On 12/07/10 17:45, Matthew Wakeling wrote:\n> >>\n> >> I'm surprised. Doesn't apache httpd do this? Does it have to do a whole\n> >> load of non-portable stuff? It seems to work on a whole load of platforms.\n> >\n> > A lot of what Apache HTTPd does is handled via the Apache Portable\n> > Runtime (APR). 
It contains a lot of per-platform handlers for various\n> > functionality.\n>\n> Apache just has all of the worker processes call accept() on the\n> socket, and whichever one the OS hands it off to gets the job.\nAs an inconsequential detail - afaik they keep the os from doing that\nby protecting it with a mutex for various reasons (speed - as some\nimplementations wake up and see theres nothing to do, multiple\nsockets, fairness)\n\n> The problem is harder for us because a backend can't switch identities\n> once it's been assigned to a database. I haven't heard an adequate\n> explanation of why that couldn't be changed, though.\nPossibly it might decrease the performance significantly enough by\nreducing the cache locality (syscache, prepared plans)?\n\nAndres\n", "msg_date": "Thu, 22 Jul 2010 23:29:54 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "\nOn Jul 22, 2010, at 11:36 AM, Robert Haas wrote:\n\n> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> <[email protected]> wrote:\n>> So rather than asking \"should core have a connection pool\" perhaps\n>> what's needed is to ask \"what can an in-core pool do that an external\n>> pool cannot do?\"\n> \n> Avoid sending every connection through an extra hop.\n> \n\nDynamically adjust settings based on resource usage of the DB.\n\n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 22 Jul 2010 14:44:04 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing <[email protected]> wrote:\n> On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:\n>> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n>> <[email protected]> wrote:\n>> > So rather than asking \"should core have a connection pool\" perhaps\n>> > what's needed is to ask \"what can an in-core pool do that an external\n>> > pool cannot do?\"\n>>\n>> Avoid sending every connection through an extra hop.\n>\n> not really. in-core != magically-in-right-backend-process\n\nWell, how about if we arrange it so it IS in the right backend\nprocess? I don't believe magic is required.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 22 Jul 2010 20:57:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund <[email protected]> wrote:\n>> The problem is harder for us because a backend can't switch identities\n>> once it's been assigned to a database.  I haven't heard an adequate\n>> explanation of why that couldn't be changed, though.\n> Possibly it might decrease the performance significantly enough by\n> reducing the cache locality (syscache, prepared plans)?\n\nThose things are backend-local. The worst case scenario is you've got\nto flush them all when you reinitialize, in which case you still save\nthe overhead of creating a new process. 
The best case scenario is\nthat you can keep some of them around, in which case, great.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 22 Jul 2010 21:00:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, Jul 22, 2010 at 02:44:04PM -0700, Scott Carey wrote:\n> On Jul 22, 2010, at 11:36 AM, Robert Haas wrote:\n> > On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> > <[email protected]> wrote:\n> >> So rather than asking \"should core have a connection pool\" perhaps\n> >> what's needed is to ask \"what can an in-core pool do that an external\n> >> pool cannot do?\"\n> > \n> > Avoid sending every connection through an extra hop.\n> > \n> \n> Dynamically adjust settings based on resource usage of the DB.\n> \n\nRelatively minor, but it would be convenient to avoid having to query\n$external_pooler to determine the client_addr of an incoming connection.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Fri, 23 Jul 2010 09:55:35 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, 2010-07-22 at 20:57 -0400, Robert Haas wrote:\n> On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing <[email protected]> wrote:\n> > On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:\n> >> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> >> <[email protected]> wrote:\n> >> > So rather than asking \"should core have a connection pool\" perhaps\n> >> > what's needed is to ask \"what can an in-core pool do that an external\n> >> > pool cannot do?\"\n> >>\n> >> Avoid sending every connection through an extra hop.\n> >\n> > not really. in-core != magically-in-right-backend-process\n> \n> Well, how about if we arrange it so it IS in the right backend\n> process? I don't believe magic is required.\n\nDo you have any design in mind, how you can make it so ?\n\n---------------\nHannu\n\n\n\n", "msg_date": "Fri, 23 Jul 2010 16:58:27 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Thu, 2010-07-22 at 20:56 +0100, Hannu Krosing wrote:\n> \n> > Let's extend this shall we:\n> > \n> > Avoid adding yet another network hop\n> \n> postgreSQL is multi-process, so you either have a separate \"pooler\n> process\" or need to put pooler functionality in postmaster, bothw ways\n> you still have a two-hop scenario for connect. you may be able to pass\n> the socket to child process and also keep it, but doing this for both\n> client and db sides seems really convoluted. \n\nWhich means, right now there is three hops. Reducing one is good.\n\n> Or is there a prortable way to pass sockets back and forth between\n> parent and child processes ?\n> \n> If so, then pgbouncer could use it as well.\n> \n> > Remove of a point of failure\n> \n> rather move the point of failure from external pooler to internal\n> pooler ;)\n\nYes but at that point, it doesn't matter. \n\n> \n> > Reduction of administrative overhead\n> \n> Possibly. 
But once you start actually using it, you still need to\n> configure and monitor it and do other administrator-y tasks.\n\nYes, but it is inclusive.\n\n> \n> > Integration into our core authentication mechanisms\n> \n> True, although for example having SSL on client side connection will be\n> so slow that it hides any performance gains from pooling, at least for\n> short-lived connections.\n\nYes, but right now you can't use *any* pooler with LDAP for example. We\ncould if pooling was in core. Your SSL argument doesn't really work\nbecause its true with or without pooling.\n\n> > Greater flexibility in connection control\n> \n> Yes, poolers can be much more flexible than default postgresql. See for\n> example pgbouncers PAUSE , RECONFIGURE and RESUME commands \n\n:D\n\n> \n> > And, having connection pooling in core does not eliminate the use of an\n> > external pool where it makes since.\n> \n> Probably the easiest way to achieve \"pooling in core\" would be adding an\n> option to start pgbouncer under postmaster control.\n\nYeah but that won't happen. Also I think we may have a libevent\ndependency that we have to work out.\n\n> \n> You probably can't get much leaner than pgbouncer.\n\nOh don't get me wrong. I love pgbouncer. It is my recommended pooler but\neven it has limitations (such as auth).\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Fri, 23 Jul 2010 09:52:37 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Fri, Jul 23, 2010 at 11:58 AM, Hannu Krosing <[email protected]> wrote:\n> On Thu, 2010-07-22 at 20:57 -0400, Robert Haas wrote:\n>> On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing <[email protected]> wrote:\n>> > On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:\n>> >> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n>> >> <[email protected]> wrote:\n>> >> > So rather than asking \"should core have a connection pool\" perhaps\n>> >> > what's needed is to ask \"what can an in-core pool do that an external\n>> >> > pool cannot do?\"\n>> >>\n>> >> Avoid sending every connection through an extra hop.\n>> >\n>> > not really. in-core != magically-in-right-backend-process\n>>\n>> Well, how about if we arrange it so it IS in the right backend\n>> process?  I don't believe magic is required.\n>\n> Do you have any design in mind, how you can make it so ?\n\nWell, if we could change the backends so that they could fully\nreinitialize themselves (disconnect from a database to which they are\nbound, etc.), I don't see why we couldn't use the Apache approach.\nThere's a danger of memory leaks but that's why Apache has\nMaxRequestsPerChild, and it works pretty darn well. Of course,\npassing file descriptors would be even nicer (you could pass the\nconnection off to a child that was already bound to the correct\ndatabase, perhaps) but has pointed out more than once, that's not\nportable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 23 Jul 2010 13:28:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." 
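For illustration, the descriptor passing mentioned just above looks roughly like this on Unix-like systems, using SCM_RIGHTS ancillary data over an AF_UNIX socket. This is only a sketch to make the discussion concrete -- it is not PostgreSQL or pgbouncer source, the helper names are invented, and error handling is cut to the bare minimum:

    /* Pass an open file descriptor to another process over a UNIX-domain
     * socket (e.g. one end of socketpair(AF_UNIX, SOCK_STREAM, 0, sv)).
     * The kernel duplicates the descriptor into the receiving process. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_fd(int chan, int fd)
    {
        char dummy = 'F';                  /* must carry at least one data byte */
        struct iovec iov = { &dummy, 1 };
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } ctrl;
        struct msghdr msg;
        struct cmsghdr *cmsg;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.buf;
        msg.msg_controllen = sizeof(ctrl.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;      /* control payload = file descriptors */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
    }

    static int recv_fd(int chan)
    {
        char dummy;
        struct iovec iov = { &dummy, 1 };
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } ctrl;
        struct msghdr msg;
        struct cmsghdr *cmsg;
        int fd = -1;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.buf;
        msg.msg_controllen = sizeof(ctrl.buf);

        if (recvmsg(chan, &msg, 0) <= 0)
            return -1;
        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg != NULL && cmsg->cmsg_level == SOL_SOCKET &&
            cmsg->cmsg_type == SCM_RIGHTS)
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
        return fd;                         /* new descriptor in this process, or -1 */
    }

Windows has no direct equivalent of this interface, which is why WSADuplicateSocket comes up in the next message; that gap is the portability concern being referred to.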
}, { "msg_contents": "On Fri, Jul 23, 2010 at 01:28:53PM -0400, Robert Haas wrote:\n> On Fri, Jul 23, 2010 at 11:58 AM, Hannu Krosing <[email protected]> wrote:\n> > On Thu, 2010-07-22 at 20:57 -0400, Robert Haas wrote:\n> >> On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing <[email protected]> wrote:\n> >> > On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:\n> >> >> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer\n> >> >> <[email protected]> wrote:\n> >> >> > So rather than asking \"should core have a connection pool\" perhaps\n> >> >> > what's needed is to ask \"what can an in-core pool do that an external\n> >> >> > pool cannot do?\"\n> >> >>\n> >> >> Avoid sending every connection through an extra hop.\n> >> >\n> >> > not really. in-core != magically-in-right-backend-process\n> >>\n> >> Well, how about if we arrange it so it IS in the right backend\n> >> process? �I don't believe magic is required.\n> >\n> > Do you have any design in mind, how you can make it so ?\n>\n> Well, if we could change the backends so that they could fully\n> reinitialize themselves (disconnect from a database to which they are\n> bound, etc.), I don't see why we couldn't use the Apache approach.\n> There's a danger of memory leaks but that's why Apache has\n> MaxRequestsPerChild, and it works pretty darn well. Of course,\n> passing file descriptors would be even nicer (you could pass the\n> connection off to a child that was already bound to the correct\n> database, perhaps) but has pointed out more than once, that's not\n> portable.\nIts not *that bad* though. To my knowledge its 2 or 3 implementations that\none would need to implement to support most if not all platforms.\n\n- sendmsg/cmsg/SCM_RIGHTS based implementation (most if not all *nixes\n including solaris, linux, (free|open|net)bsd, OSX, AIX, HPUX, others)\n- WSADuplicateSocket (windows)\n- if needed: STREAMS based stuff (I_SENDFD) (at least solaris, hpux, aix, tru64,\n irix, unixware allow this)\n\n\nNote that I am still not convinced that its a good idea...\n\nAndres\n", "msg_date": "Sat, 24 Jul 2010 07:13:50 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Joshua Tolley wrote:\n> Relatively minor, but it would be convenient to avoid having to query\n> $external_pooler to determine the client_addr of an incoming connection.\n> \n\nYou suggest this as a minor concern, but I consider it to be one of the \nmost compelling arguments in favor of in-core pooling. A constant pain \nwith external poolers is the need to then combine two sources of data in \norder to track connections fully, which is something that everyone runs \ninto eventually and finds annoying. It's one of the few things that \ndoesn't go away no matter how much fiddling you do with pgBouncer, it's \nalways getting in the way a bit. And it seems to seriously bother \nsystems administrators and developers, not just the DBAs.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 24 Jul 2010 01:23:08 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." 
}, { "msg_contents": "On Sat, Jul 24, 2010 at 01:23:08AM -0400, Greg Smith wrote:\n> Joshua Tolley wrote:\n> >Relatively minor, but it would be convenient to avoid having to query\n> >$external_pooler to determine the client_addr of an incoming connection.\n>\n> You suggest this as a minor concern, but I consider it to be one of\n> the most compelling arguments in favor of in-core pooling. A\n> constant pain with external poolers is the need to then combine two\n> sources of data in order to track connections fully, which is\n> something that everyone runs into eventually and finds annoying.\n> It's one of the few things that doesn't go away no matter how much\n> fiddling you do with pgBouncer, it's always getting in the way a\n> bit. And it seems to seriously bother systems administrators and\n> developers, not just the DBAs.\nBut you have to admit that this problem won't vanish as people will\ncontinue to use poolers on other machines for resource reasons.\nSo providing a capability to do something sensible here seems to be\nuseful independent of in-core pooling.\n\nAndres\n", "msg_date": "Sat, 24 Jul 2010 07:32:22 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On 24/07/10 01:28, Robert Haas wrote:\n\n> Well, if we could change the backends so that they could fully\n> reinitialize themselves (disconnect from a database to which they are\n> bound, etc.), I don't see why we couldn't use the Apache approach.\n\nThis would offer the bonus on the side that it'd be more practical to\nimplement database changes for a connection, akin to MySQL's \"USE\".\nInefficient, sure, but possible.\n\nI don't care about that current limitation very much. I think anyone\nchanging databases all the time probably has the wrong design and should\nbe using schema. I'm sure there are times it'd be good to be able to\nswitch databases on one connection, though.\n\n\nMy question with all this remains: is it worth the effort when external\npoolers already solve the problem. Can PostgreSQL offer tools and\ninterfaces to permit external poolers to solve the problems they have,\nrather than trying to bring them in-core? For example, with auth, can\nthe Pg server offer features to help poolers implement passthrough\nauthentication against the real Pg server?\n\nPerhaps Pg could expose auth features over SQL, permitting appropriately\nprivileged users to verify credentials with SQL-level calls. Poolers\ncould pass supplied user credentials through to the real Pg server for\nverification. For bonus points, an SQL interface could be provided that\nlets the super-priveleged auth managing connection be used to change the\nlogin role of another running backend/connection, so the pooler could\nhand out connections with different login user ids without having to\nmaintain a pool per user id.\n\n( That'd probably also permit implementation of a \"CHANGE USER\" command,\nwhere the client changed login roles on the fly by passing the\ncredentials of the new role. That'd be *awesome* for application server\nconnection pools. )\n\n--\nCraig Ringer\n", "msg_date": "Sat, 24 Jul 2010 14:23:01 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." 
}, { "msg_contents": "On 24/07/10 13:23, Greg Smith wrote:\n> Joshua Tolley wrote:\n>> Relatively minor, but it would be convenient to avoid having to query\n>> $external_pooler to determine the client_addr of an incoming connection.\n>> \n> \n> You suggest this as a minor concern, but I consider it to be one of the\n> most compelling arguments in favor of in-core pooling. A constant pain\n> with external poolers is the need to then combine two sources of data in\n> order to track connections fully, which is something that everyone runs\n> into eventually and finds annoying. It's one of the few things that\n> doesn't go away no matter how much fiddling you do with pgBouncer, it's\n> always getting in the way a bit. And it seems to seriously bother\n> systems administrators and developers, not just the DBAs.\n\n\nPutting a pooler in core won't inherently fix this, and won't remove the\nneed to solve it for cases where the pooler can't be on the same machine.\n\n9.0 has application_name to let apps identify themselves. Perhaps a\n\"pooled_client_ip\", to be set by a pooler rather than the app, could be\nadded to address this problem in a way that can be used by all poolers\nnew and existing, not just any new in-core pooling system.\n\nIf a privileged set of pooler functions is was considered, as per my\nother recent mail, the pooler could use a management connection to set\nthe client ip before handing the connection to the client, so the client\ncouldn't change pooled_client_ip its self by accident or through malice.\nBut even without that, it'd be awfully handy.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 24 Jul 2010 14:36:19 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Fri, 2010-07-23 at 09:52 -0700, Joshua D. Drake wrote:\n> On Thu, 2010-07-22 at 20:56 +0100, Hannu Krosing wrote:\n> > \n> > > Let's extend this shall we:\n> > > \n> > > Avoid adding yet another network hop\n> > \n> > postgreSQL is multi-process, so you either have a separate \"pooler\n> > process\" or need to put pooler functionality in postmaster, bothw ways\n> > you still have a two-hop scenario for connect. you may be able to pass\n> > the socket to child process and also keep it, but doing this for both\n> > client and db sides seems really convoluted. \n> \n> Which means, right now there is three hops. Reducing one is good.\n\nNo, it is still two, as postmaster passes the socket to spwaned child\npostgresql process after login. \n\nthe process is as follows\n\nClient --connects--> postmaster --spawns--> postgreSQL server process\n\nthen socket is passed to be used directly so the use is\n\n\nClient --talks-to---> postgreSQL server process\n\nwhen using spooler it becomes\n\n\nClient --connects-to--> Spooler --passes-requests-to--> postgreSQL \n\nI see no way to have spooler select the postgreSQL process, pass the\nclient connection in a way that taks directly to postgrSQL server\nprocess AND be able to get the server connection back once the client is\nfinishe with either the request, transaction or connection (depending on\npooling mode).\n\n\n\n> \n> > Or is there a prortable way to pass sockets back and forth between\n> > parent and child processes ?\n> > \n> > If so, then pgbouncer could use it as well.\n> > \n> > > Remove of a point of failure\n> > \n> > rather move the point of failure from external pooler to internal\n> > pooler ;)\n> \n> Yes but at that point, it doesn't matter. 
\n> \n> > \n> > > Reduction of administrative overhead\n> > \n> > Possibly. But once you start actually using it, you still need to\n> > configure and monitor it and do other administrator-y tasks.\n> \n> Yes, but it is inclusive.\n> \n> > \n> > > Integration into our core authentication mechanisms\n> > \n> > True, although for example having SSL on client side connection will be\n> > so slow that it hides any performance gains from pooling, at least for\n> > short-lived connections.\n> \n> Yes, but right now you can't use *any* pooler with LDAP for example. We\n> could if pooling was in core. Your SSL argument doesn't really work\n> because its true with or without pooling.\n\nAs main slowdown in SSL is connection setup, so you can get the network\nsecurity and pooling speedup if you run pool on client side and make the\npooler-server connection over SSL.\n\n\n> > > Greater flexibility in connection control\n> > \n> > Yes, poolers can be much more flexible than default postgresql. See for\n> > example pgbouncers PAUSE , RECONFIGURE and RESUME commands \n> \n> :D\n> \n> > \n> > > And, having connection pooling in core does not eliminate the use of an\n> > > external pool where it makes since.\n> > \n> > Probably the easiest way to achieve \"pooling in core\" would be adding an\n> > option to start pgbouncer under postmaster control.\n> \n> Yeah but that won't happen. \n\nI guess it could happen as part of opening up the \"postgresql controlled\nprocess\" part to be configurable and able to run third party stuff. \n\nAnother thing to run under postmaster control would be pgqd . \n\n> Also I think we may have a libevent\n> dependency that we have to work out.\n> \n> > \n> > You probably can't get much leaner than pgbouncer.\n> \n> Oh don't get me wrong. I love pgbouncer. It is my recommended pooler but\n> even it has limitations (such as auth).\n\nAs pgbouncer is single-threaded and the main goal has been performance\nthere is not much enthusiasm about having _any_ auth method included\nwhich cant be completed in a few cpu cycles. It may be possible to add\nthreads to wait for LDAP/Kerberos/... response or do SSL handshakes, but\ni have not seen any interest from Marko to do it himself.\n\nMaybe there is a way to modularise the auth part of postmaster in a way\nthat could be used from third party products through some nice API which\npostmaster-controlled pgbouncer can start using.\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Sat, 24 Jul 2010 10:52:44 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Sat, 2010-07-24 at 14:36 +0800, Craig Ringer wrote:\n> On 24/07/10 13:23, Greg Smith wrote:\n> > Joshua Tolley wrote:\n> >> Relatively minor, but it would be convenient to avoid having to query\n> >> $external_pooler to determine the client_addr of an incoming connection.\n> >> \n> > \n> > You suggest this as a minor concern, but I consider it to be one of the\n> > most compelling arguments in favor of in-core pooling. A constant pain\n> > with external poolers is the need to then combine two sources of data in\n> > order to track connections fully, which is something that everyone runs\n> > into eventually and finds annoying. 
It's one of the few things that\n> > doesn't go away no matter how much fiddling you do with pgBouncer, it's\n> > always getting in the way a bit. And it seems to seriously bother\n> > systems administrators and developers, not just the DBAs.\n> \n> \n> Putting a pooler in core won't inherently fix this, and won't remove the\n> need to solve it for cases where the pooler can't be on the same machine.\n> \n> 9.0 has application_name to let apps identify themselves. Perhaps a\n> \"pooled_client_ip\", to be set by a pooler rather than the app, could be\n> added to address this problem in a way that can be used by all poolers\n> new and existing, not just any new in-core pooling system.\n> \n> If a privileged set of pooler functions is was considered, as per my\n> other recent mail, the pooler could use a management connection to set\n> the client ip before handing the connection to the client, so the client\n> couldn't change pooled_client_ip its self by accident or through malice.\n> But even without that, it'd be awfully handy.\n\nOr maybe we can add some command extensions to the protocol for passing\nextra info, so that instead of sending just the \"(run_query:query)\"\ncommand over socket we could send both the extra info and execute\n\"(set_params:(proxy_client_ip:a.b.c.d)(proxy_client_post:n)(something\nelse))(run_query:query)\" in one packet (for performance) and have these\nthings be available in logging and pg_stat_activity\n\nI see no need to try to somehow restrict these if you can always be sure\nthat they are set by the direct client. proxy can decide to pass some of\nthese from the real client but it would be a decision made by proxy, not\nmandated by some proxying rules.\n\n\n\n\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Sat, 24 Jul 2010 11:07:22 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> 9.0 has application_name to let apps identify themselves. Perhaps a\n> \"pooled_client_ip\", to be set by a pooler rather than the app, could be\n> added to address this problem in a way that can be used by all poolers\n> new and existing, not just any new in-core pooling system.\n\nX-Forwarded-For ?\n\n-- \ndim\n", "msg_date": "Sun, 25 Jul 2010 10:40:19 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Sat, Jul 24, 2010 at 2:23 AM, Craig Ringer\n<[email protected]> wrote:\n> On 24/07/10 01:28, Robert Haas wrote:\n>\n>> Well, if we could change the backends so that they could fully\n>> reinitialize themselves (disconnect from a database to which they are\n>> bound, etc.), I don't see why we couldn't use the Apache approach.\n>\n> This would offer the bonus on the side that it'd be more practical to\n> implement database changes for a connection, akin to MySQL's \"USE\".\n> Inefficient, sure, but possible.\n\nYep.\n\n> I don't care about that current limitation very much. I think anyone\n> changing databases all the time probably has the wrong design and should\n> be using schema. I'm sure there are times it'd be good to be able to\n> switch databases on one connection, though.\n\nI pretty much agree with this. 
I think this is merely slightly nice\non its own, but I think it might be a building-block to other things\nthat we might want to do down the road. Markus Wanner's Postgres-R\nreplication uses worker processes; autovacuum does as well; and then\nthere's parallel query. I can't help thinking that not needing to\nfork a new backend every time you want to connect to a new database\nhas got to be useful.\n\n> My question with all this remains: is it worth the effort when external\n> poolers already solve the problem.\n\nWhether it's worth the effort is something anyone who is thinking\nabout working on this will have to decide for themselves.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Sun, 25 Jul 2010 18:09:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund <[email protected]> wrote:\n>>> The problem is harder for us because a backend can't switch identities\n>>> once it's been assigned to a database. �I haven't heard an adequate\n>>> explanation of why that couldn't be changed, though.\n\n>> Possibly it might decrease the performance significantly enough by\n>> reducing the cache locality (syscache, prepared plans)?\n\n> Those things are backend-local. The worst case scenario is you've got\n> to flush them all when you reinitialize, in which case you still save\n> the overhead of creating a new process.\n\n\"Flushing them all\" is not zero-cost; it's not too hard to believe that\nit could actually be slower than forking a clean new backend.\n\nWhat's much worse, it's not zero-bug. We've got little bitty caches\nall over the backend, including (no doubt) some caching behavior in\nthird-party code that wouldn't get the word about whatever API you\ninvented to deal with this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 2010 16:40:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning. " }, { "msg_contents": "On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund <[email protected]> wrote:\n>>>> The problem is harder for us because a backend can't switch identities\n>>>> once it's been assigned to a database.  I haven't heard an adequate\n>>>> explanation of why that couldn't be changed, though.\n>\n>>> Possibly it might decrease the performance significantly enough by\n>>> reducing the cache locality (syscache, prepared plans)?\n>\n>> Those things are backend-local.  The worst case scenario is you've got\n>> to flush them all when you reinitialize, in which case you still save\n>> the overhead of creating a new process.\n>\n> \"Flushing them all\" is not zero-cost; it's not too hard to believe that\n> it could actually be slower than forking a clean new backend.\n\nI'm not so sure I believe that. Is a sinval overrun slower than\nforking a clean new backend? Is DISCARD ALL slower that forking a\nclean new backend? How much white space is there between either of\nthose and what would be needed here? I guess it could be slower, but\nI wouldn't want to assume that without evidence.\n\n> What's much worse, it's not zero-bug.  
We've got little bitty caches\n> all over the backend, including (no doubt) some caching behavior in\n> third-party code that wouldn't get the word about whatever API you\n> invented to deal with this.\n\nI think this is probably the biggest issue with the whole idea, and I\nagree there would be some pain involved.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 27 Jul 2010 21:44:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane <[email protected]> wrote:\n>> \"Flushing them all\" is not zero-cost; it's not too hard to believe that\n>> it could actually be slower than forking a clean new backend.\n\n> I'm not so sure I believe that.\n\nI'm not asserting it's true, just suggesting it's entirely possible.\nOther than the fork() cost itself and whatever authentication activity\nthere might be, practically all the startup cost of a new backend can be\nseen as cache-populating activities. You'd have to redo all of that,\n*plus* pay the costs of getting rid of the previous cache entries.\nMaybe the latter costs less than a fork(), or maybe not. fork() is\npretty cheap on modern Unixen.\n\n>> What's much worse, it's not zero-bug.\n\n> I think this is probably the biggest issue with the whole idea, and I\n> agree there would be some pain involved.\n\nYeah, if it weren't for that I'd say \"sure let's try it\". But I'm\nafraid we'd be introducing significant headaches in return for a gain\nthat's quite speculative.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 2010 21:56:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning. " }, { "msg_contents": "On Tue, Jul 27, 2010 at 9:56 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane <[email protected]> wrote:\n>>> \"Flushing them all\" is not zero-cost; it's not too hard to believe that\n>>> it could actually be slower than forking a clean new backend.\n>\n>> I'm not so sure I believe that.\n>\n> I'm not asserting it's true, just suggesting it's entirely possible.\n> Other than the fork() cost itself and whatever authentication activity\n> there might be, practically all the startup cost of a new backend can be\n> seen as cache-populating activities.  You'd have to redo all of that,\n> *plus* pay the costs of getting rid of the previous cache entries.\n> Maybe the latter costs less than a fork(), or maybe not.  fork() is\n> pretty cheap on modern Unixen.\n>\n>>> What's much worse, it's not zero-bug.\n>\n>> I think this is probably the biggest issue with the whole idea, and I\n>> agree there would be some pain involved.\n>\n> Yeah, if it weren't for that I'd say \"sure let's try it\".  But I'm\n> afraid we'd be introducing significant headaches in return for a gain\n> that's quite speculative.\n\nI agree that the gain is minimal of itself; after all, how often do\nyou need to switch databases, and what's the big deal if the\npostmaster has to fork a new backend? Where I see it as a potentially\nbig win is when it comes to things like parallel query. I can't help\nthinking that's going to be a lot less efficient if you're forever\nforking new backends. 
Perhaps the point here is that you'd actually\nsort of like to NOT flush all those caches unless it turns out that\nyou're switching databases - many installations probably operate with\nessentially one big database, so chances are good that even if you\ndistributed connections / parallel queries to backends round-robin,\nyou'd potentially save quite a bit of overhead. Of course, for the\nguy who has TWO big databases, you might hope to be a little smarter,\nbut that's another problem altogether.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 27 Jul 2010 22:05:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jul 27, 2010 at 9:56 PM, Tom Lane <[email protected]> wrote:\n>> Other than the fork() cost itself and whatever authentication activity\n>> there might be, practically all the startup cost of a new backend can be\n>> seen as cache-populating activities. �You'd have to redo all of that,\n>> *plus* pay the costs of getting rid of the previous cache entries.\n>> Maybe the latter costs less than a fork(), or maybe not. �fork() is\n>> pretty cheap on modern Unixen.\n\n> I agree that the gain is minimal of itself; after all, how often do\n> you need to switch databases, and what's the big deal if the\n> postmaster has to fork a new backend? Where I see it as a potentially\n> big win is when it comes to things like parallel query. I can't help\n> thinking that's going to be a lot less efficient if you're forever\n> forking new backends.\n\nColor me unconvinced. To do parallel queries with pre-existing worker\nprocesses, you'd need to hook up with a worker that was (at least) in\nyour own database, and then somehow feed it the query plan that it needs\nto execute. I'm thinking fork() could easily be cheaper. But obviously\nthis is all speculation (... and Windows is going to be a whole 'nother\nstory in any case ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 2010 01:09:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning. " }, { "msg_contents": "On 28/07/10 04:40, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund <[email protected]> wrote:\n>>>> The problem is harder for us because a backend can't switch identities\n>>>> once it's been assigned to a database. I haven't heard an adequate\n>>>> explanation of why that couldn't be changed, though.\n> \n>>> Possibly it might decrease the performance significantly enough by\n>>> reducing the cache locality (syscache, prepared plans)?\n> \n>> Those things are backend-local. The worst case scenario is you've got\n>> to flush them all when you reinitialize, in which case you still save\n>> the overhead of creating a new process.\n> \n> \"Flushing them all\" is not zero-cost; it's not too hard to believe that\n> it could actually be slower than forking a clean new backend.\n> \n> What's much worse, it's not zero-bug. We've got little bitty caches\n> all over the backend, including (no doubt) some caching behavior in\n> third-party code that wouldn't get the word about whatever API you\n> invented to deal with this.\n\nIn addition to caches, there may be some places where memory is just\nexpected to leak. 
Since it's a one-off allocation, nobody really cares;\nwhy bother cleaning it up when it's quicker to just let the OS do it\nwhen the backend terminates?\n\nBeing able to reset a backend for re-use would require that per-session\nmemory use be as neatly managed as per-query and per-transaction memory,\nwith no leaked stragglers left lying around.\n\nSuch cleanup (and management) has its own costs. Plus, you have a\npotentially increasingly fragmented memory map to deal with the longer\nthe backend lives. Overall, there are plenty of advantages to letting\nthe OS clean it up.\n\n... however, if the requirement is introduced that a given backend may\nonly be re-used for connections to the same database, lots of things get\nsimpler. You have to be able to change the current user (which would be\na bonus anyway), reset GUCs, etc, but how much else is there to do?\n\nThat way you can maintain per-database pools of idle workers (apache\nprefork style) with ageing-out of backends that're unused. Wouldn't this\ndo the vast majority of what most pools are needed for anyway? And\nwouldn't it potentially save quite a bit of load by avoiding having\nbackends constantly switching databases, flushing caches, etc?\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n", "msg_date": "Wed, 28 Jul 2010 13:23:29 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "28.07.10 04:56, Tom Lane написав(ла):\n>\n> I'm not asserting it's true, just suggesting it's entirely possible.\n> Other than the fork() cost itself and whatever authentication activity\n> there might be, practically all the startup cost of a new backend can be\n> seen as cache-populating activities. You'd have to redo all of that,\n> *plus* pay the costs of getting rid of the previous cache entries.\n> Maybe the latter costs less than a fork(), or maybe not. fork() is\n> pretty cheap on modern Unixen.\n>\n> \nActually as for me, the problem is that one can't raise number of \ndatabase connections high without overloading CPU/memory/disk, so \nexternal pooling is needed. If postgresql had something like \nmax_active_queries setting that limit number of connections that are not \nin IDLE [in transaction] state, one could raise max connections high \n(and I don't think idle process itself has much overhead) and limit \nmax_active_queries to get maximum performance and won't use external \npooling. Of course this won't help if the numbers are really high, but \ncould work out the most common cases.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Wed, 28 Jul 2010 14:03:59 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On 7/27/10 6:56 PM, Tom Lane wrote:\n> Yeah, if it weren't for that I'd say \"sure let's try it\". But I'm\n> afraid we'd be introducing significant headaches in return for a gain\n> that's quite speculative.\n\nWell, the *gain* isn't speculative. For example, I am once again\ndealing with the issue that PG backend processes on Solaris never give\nup their RAM, resulting in pathological swapping situations if you have\nmany idle connections. 
This requires me to install pgpool, which is\noverkill (since it has load balancing, replication, and more) just to\nmake sure that connections get recycled so that I don't have 300 idle\nconnections eating up 8GB of RAM.\n\nRelative to switching databases, I'd tend to say that, like pgbouncer\nand pgpool, we don't need to support that. Each user/database combo can\nhave their own \"pool\". While not ideal, this would be good enough for\n90% of users.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 28 Jul 2010 12:44:49 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Wed, Jul 28, 2010 at 3:44 PM, Josh Berkus <[email protected]> wrote:\n> On 7/27/10 6:56 PM, Tom Lane wrote:\n>> Yeah, if it weren't for that I'd say \"sure let's try it\".  But I'm\n>> afraid we'd be introducing significant headaches in return for a gain\n>> that's quite speculative.\n>\n> Well, the *gain* isn't speculative.  For example, I am once again\n> dealing with the issue that PG backend processes on Solaris never give\n> up their RAM, resulting in pathological swapping situations if you have\n> many idle connections.  This requires me to install pgpool, which is\n> overkill (since it has load balancing, replication, and more) just to\n> make sure that connections get recycled so that I don't have 300 idle\n> connections eating up 8GB of RAM.\n>\n> Relative to switching databases, I'd tend to say that, like pgbouncer\n> and pgpool, we don't need to support that.  Each user/database combo can\n> have their own \"pool\".  While not ideal, this would be good enough for\n> 90% of users.\n\nHowever, if we don't support that, we can't do any sort of pooling-ish\nthing without the ability to pass file descriptors between processes;\nand Tom seems fairly convinced there's no portable way to do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 28 Jul 2010 15:52:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> However, if we don't support that, we can't do any sort of pooling-ish\n> thing without the ability to pass file descriptors between processes;\n> and Tom seems fairly convinced there's no portable way to do that.\n\nWell, what it would come down to is: are we prepared to not support\npooling on platforms without such a capability? It's certainly possible\nto do it on many modern platforms, but I don't believe we can make it\nhappen everywhere. Generally we've tried to avoid having major features\nthat don't work everywhere ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 2010 16:10:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning. " }, { "msg_contents": "On Wed, Jul 28, 2010 at 04:10:08PM -0400, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > However, if we don't support that, we can't do any sort of pooling-ish\n> > thing without the ability to pass file descriptors between processes;\n> > and Tom seems fairly convinced there's no portable way to do that.\n>\n> Well, what it would come down to is: are we prepared to not support\n> pooling on platforms without such a capability? 
It's certainly possible\n> to do it on many modern platforms, but I don't believe we can make it\n> happen everywhere. Generally we've tried to avoid having major features\n> that don't work everywhere ...\nWhich platforms do you have in mind here? All of the platforms I found\ndocumented to be supported seem to support at least one of SCM_RIGHTS,\nWSADuplicateSocket or STREAMS/FD_INSERT.\nMost if not all beside windows support SCM_RIGHTS. The ones I am\ndubious about support FD_INSERT...\n\nAndres\n", "msg_date": "Wed, 28 Jul 2010 22:19:22 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "On Wed, Jul 28, 2010 at 4:10 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> However, if we don't support that, we can't do any sort of pooling-ish\n>> thing without the ability to pass file descriptors between processes;\n>> and Tom seems fairly convinced there's no portable way to do that.\n>\n> Well, what it would come down to is: are we prepared to not support\n> pooling on platforms without such a capability?  It's certainly possible\n> to do it on many modern platforms, but I don't believe we can make it\n> happen everywhere.  Generally we've tried to avoid having major features\n> that don't work everywhere ...\n\nI suppose it depends on the magnitude of the benefit. And how many\nplatforms aren't covered. And how much code is required. In short,\nuntil someone writes a patch, who knows? I think the core question we\nshould be thinking about is what would be the cleanest method of\nresetting a backend - either for the same database or for a different\none, whichever seems easier. And by cleanest, I mean least likely to\nintroduce bugs. If we can get to the point where we have something to\nplay around with, even if it's kind of kludgey or doesn't quite work,\nit'll give us some idea of whether further effort is worthwhile and\nhow it should be directed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 28 Jul 2010 23:16:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." }, { "msg_contents": "\n> introduce bugs. If we can get to the point where we have something to\n> play around with, even if it's kind of kludgey or doesn't quite work,\n> it'll give us some idea of whether further effort is worthwhile and\n> how it should be directed.\n\nShould I put this on the TODO list, then, in hopes that someone steps \nforward?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 29 Jul 2010 10:39:14 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pooling in Core WAS: Need help in performance tuning." } ]
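For readers wondering what "resetting a backend" already amounts to at the SQL level: the sequence below is what external poolers issue between clients today (pgbouncer's server_reset_query is typically set to exactly this). Per the documentation of this era, DISCARD ALL is roughly equivalent to the listed commands -- roughly, because the list has grown slightly across releases -- and, crucially for this thread, it cannot change the connected database or the authenticated user, which is the part that would need new backend support:

    DISCARD ALL;

    -- approximately the same as:
    SET SESSION AUTHORIZATION DEFAULT;
    RESET ALL;
    DEALLOCATE ALL;
    CLOSE ALL;
    UNLISTEN *;
    DISCARD PLANS;
    DISCARD TEMP;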
[ { "msg_contents": "Harpreet singh Wadhwa <[email protected]> wrote:\n \n> I want to fine tune my postgresql to increase number of connects\n> it can handle in a minutes time.\n> Decrease the response time per request etc.\n> The exact case will be to handle around 100 concurrent requests.\n \nI have found that connection pooling is crucial.\n \nThe \"concurrent requests\" phrase worries me a bit -- you should be\nfocusing more on \"concurrent connections\" and perhaps \"requests per\nsecond\". With most hardware, you will get faster response time and\nbetter overall throughput by funneling 100 connections through a\nconnection pool which limits the number of concurrent requests to\njust enough to keep all your hardware resources busy, queuing any\nrequests beyond that for submission when a pending request\ncompletes.\n \n> Any hardware suggestions are also welcomed.\n \nIf you don't have the hardware yet, you'd need to provide a bit more\ninformation to get advice on what hardware you need.\n \n-Kevin\n\n", "msg_date": "Thu, 08 Jul 2010 15:22:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help in performance tuning." } ]
[ { "msg_contents": "Hello all,\n\nI am learning PostgreSQL PITR. I have PostgreSQL running with the \nfollowing parameters \nset up for archiving/log switching:\n\narchive_mode = on \narchive_command = 'cp -i %p /home/postgres/archive/%f </dev/null' \narchive_timeout = 300\n\nI checked the archival process. It was running fine. I took a base backup \nas specified. All \nthis was done yesterday and by today, 200+ archived log files were there. \nNow I wanted to \nsee how I could drop a couple of tables and then restore the database with \nthose tables in tact.\nSo I did \n\npostgres=# select now()\npostgres-# ;\n now\n----------------------------------\n 2010-07-09 07:46:44.109004+05:30\n(1 row)\n\n\ntest=# \\d\n List of relations\n Schema | Name | Type | Owner\n--------+---------+-------+----------\n public | myt | table | postgres\n public | repthis | table | postgres\n public | testmyr | table | postgres\n(3 rows)\n\ntest=# select count(*) from repthis ;\n count\n-------\n 19002\n(1 row)\n\ntest=# select count(*) from testmyr ;\n count\n-------\n 2080\n(1 row)\n\ntest=# drop table repthis\ntest-# ;\nDROP TABLE\n-- The order in which I dropped the tables has significance if you see the \nfinal state of the \n-- db after recovery.\ntest=# drop table testmyr ;\nDROP TABLE\ntest=# \\d\n List of relations\n Schema | Name | Type | Owner\n--------+------+-------+----------\n public | myt | table | postgres\n(1 row)\n\nThen I stopped the server and started the recovery process as mentioned in \nthe document - \ni.e. cleaned out the directories (except pg_xlog), created a recovery.conf \nfile and did a \npg_ctl start.\nThe relevant parameters in recovery.conf were \n\nrestore_command = 'cp /home/postgres/archive/%f %p'\nrecovery_target_time = '2010-07-09 07:46:44'\n\nThe time '2010-07-09 07:46:44' is the time I got by executing select now() \nearlier in the process.\n ( 2010-07-09 07:46:44.109004+05:30). There were a few seconds gap after I \ngot this time \nand I dropped the tables.\n\nThe recovery ended with these lines - \nLOG: restored log file \"0000000100000000000000D4\" from archive\nLOG: restored log file \"0000000100000000000000D5\" from archive\nLOG: restored log file \"0000000100000000000000D6\" from archive\nLOG: recovery stopping before commit of transaction 676, time 2010-07-09 \n07:49:26.580518+05:30\nLOG: redo done at 0/D6006084\ncp: cannot stat `/home/postgres/archive/00000002.history': No such file or \ndirectory\nLOG: selected new timeline ID: 2\ncp: cannot stat `/home/postgres/archive/00000001.history': No such file or \ndirectory\nLOG: archive recovery complete\nLOG: autovacuum launcher started\nLOG: database system is ready to accept connections\n\nSo here goes my first question - \n\nWhy did it recover to time 2010-07-09 07:49:26 when I mentioned \n'2010-07-09 07:46:44' ?\n\nIf I login to the database and do a listing of tables,\ntest=# \\d\n List of relations\n Schema | Name | Type | Owner\n--------+---------+-------+----------\n public | myt | table | postgres\n public | testmyr | table | postgres\n(2 rows)\n\ntest=# select count(*) from testmyr ;\n count\n-------\n 2080\n(1 row)\n\nSo recovery happened to a point after I dropped the first table and before \nI dropped \nthe second table. Why ? Probably answer is the same as the one to my first \nquestion.\nIs there a way in which I can now go back a bit further, and ensure I am \nback to the \ntime line before I dropped either of the tables? 
From documentation, I \nthink the answer is 'No'.\nOf course, I could try the entire recovery process once more, and provide \na couple of minutes \nearlier time as recovery_target_time.\n\nRegards,\nJayadevan \n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Fri, 9 Jul 2010 15:17:14 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": true, "msg_subject": "Queries about PostgreSQL PITR" }, { "msg_contents": "On Fri, Jul 9, 2010 at 6:47 PM, Jayadevan M\n<[email protected]> wrote:\n> So recovery happened to a point after I dropped the first table and before\n> I dropped\n> the second table. Why ?\n\nBecause you didn't disable recovery_target_inclusive, I guess.\nhttp://www.postgresql.org/docs/8.4/static/continuous-archiving.html#RECOVERY-TARGET-INCLUSIVE\n\n> Is there a way in which I can now go back a bit further, and ensure I am\n> back to the\n> time line before I dropped either of the tables? From documentation, I\n> think the answer is 'No'.\n> Of course, I could try the entire recovery process once more, and provide\n> a couple of minutes\n> earlier time as recovery_target_time.\n\nHow about setting recovery_target_timeline to the old timeline ID (= 1)?\nhttp://www.postgresql.org/docs/8.4/static/continuous-archiving.html#RECOVERY-TARGET-TIMELINE\n\nRegards,\n\n-- \nFujii Masao\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n", "msg_date": "Mon, 12 Jul 2010 15:25:08 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries about PostgreSQL PITR" }, { "msg_contents": "Hi,\n>Because you didn't disable recovery_target_inclusive, I guess.\n>\nhttp://www.postgresql.org/docs/8.4/static/continuous-archiving.html#RECOVERY-TARGET-INCLUSIVE\nThanks. I was almost sure this will fix it. But the issue seems to be \nsomething else. Even if I give a time that is a few more minutes before \nwhat I got from select now(), it is always moving upto/or just before \n(depending on the above parameter) transaction id 676. The ooutput reads \n LOG: recovery stopping before commit of transaction 676, time 2010-07-09 \n07:49:26.580518+05:30\n\nIs there a way to find out the transaction ids and corresponding SQLs, \ntimeline etc? May be doing the recovery in debug/logging mode or something \nlike that?\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. 
\nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Mon, 12 Jul 2010 13:59:36 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries about PostgreSQL PITR" }, { "msg_contents": "On Mon, Jul 12, 2010 at 5:29 PM, Jayadevan M\n<[email protected]> wrote:\n> Hi,\n>>Because you didn't disable recovery_target_inclusive, I guess.\n>>\n> http://www.postgresql.org/docs/8.4/static/continuous-archiving.html#RECOVERY-TARGET-INCLUSIVE\n> Thanks. I was almost sure this will fix it. But the issue seems to be\n> something else. Even if I give a time that is a few more minutes before\n> what I got from select now(), it is always moving upto/or just before\n> (depending on the above parameter) transaction id 676. The ooutput reads\n>  LOG:  recovery stopping before commit of transaction 676, time 2010-07-09\n> 07:49:26.580518+05:30\n\nA recovery stops when the commit time > or >= recovery_target_time.\nSo, unless it moves up to the newer commit than recovery_target_time,\nit cannot stop.\n\n> Is there a way to find out the transaction ids and corresponding SQLs,\n> timeline etc? May be doing the recovery in debug/logging mode or something\n> like that?\n\nxlogviewer reads WAL files and displays the contents of them. But\nit's been inactive for several years, so I'm not sure if it's available now.\nhttp://pgfoundry.org/projects/xlogviewer/\n\nRegards,\n\n-- \nFujii Masao\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n", "msg_date": "Mon, 12 Jul 2010 19:05:34 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries about PostgreSQL PITR" }, { "msg_contents": "Hello all,\nOne doubt about how PostgreSQL PITR works. Let us say I have all the \narchived WALs for the past week with \narchive_command = 'cp -i %p /home/postgres/archive/%f </dev/null' \nI took a base backup last night. If I try to recover the server today \nafter \ncopying the base backup from yesterday and providing \nrestore_command = 'cp /home/postgres/archive/%f %p'\ndoes PostgreSQL go through all the past week's archived WALS or \nit can figure out that the base backup is from yesterday, so skip \na large number of archived WALs and start only from file xxx ?\nEither way, are there ways to speed up the restore process?\nRegards,\nJayadevan \n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Mon, 12 Jul 2010 16:53:19 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL PITR - more doubts" }, { "msg_contents": "On 2010-07-12 13:23, Jayadevan M wrote:\n> Hello all,\n> One doubt about how PostgreSQL PITR works. 
Let us say I have all the\n> archived WALs for the past week with\n> archive_command = 'cp -i %p /home/postgres/archive/%f</dev/null'\n> I took a base backup last night. If I try to recover the server today\n> after\n> copying the base backup from yesterday and providing\n> restore_command = 'cp /home/postgres/archive/%f %p'\n> does PostgreSQL go through all the past week's archived WALS or\n> it can figure out that the base backup is from yesterday, so skip\n> a large number of archived WALs and start only from file xxx ?\n> \n\nYes, It starts out form \"where it needs to\". Assuming you\ndid a pg_start_backup() before you did your base backup?\n\n-- \nJesper\n", "msg_date": "Mon, 12 Jul 2010 13:38:29 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL PITR - more doubts" }, { "msg_contents": "> Yes, It starts out form \"where it needs to\". Assuming you\n>did a pg_start_backup() before you did your base backup?\n\nThanks. I did. \nIt uses files like 0000000B00000000000000D9.00000020.backupto get the \nnecessary information?\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Mon, 12 Jul 2010 17:34:01 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL PITR - more doubts" }, { "msg_contents": "Jayadevan M <[email protected]> wrote:\n \n>> Yes, It starts out form \"where it needs to\". Assuming you\n>> did a pg_start_backup() before you did your base backup?\n> \n> Thanks. I did. \n> It uses files like 0000000B00000000000000D9.00000020.backupto get\n> the necessary information?\n \nYeah, since it's a text file, you can easily have a look at what is\nstored there. It's based on when pg_start_backup and pg_stop_backup\nwere run.\n \n-Kevin\n", "msg_date": "Mon, 12 Jul 2010 07:28:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL PITR - more doubts" } ]
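A consolidated sketch for the PITR discussion above: combining Fujii Masao's recovery.conf hints with the restore_command already in use, a recovery that stops before the accidental drops and stays on the original timeline could look roughly like the lines below. The timestamp is only a placeholder (anything safely earlier than the commit of the first DROP TABLE), and recovery_target_timeline is only needed when re-running recovery after an earlier attempt has already switched the cluster to timeline 2.

restore_command = 'cp /home/postgres/archive/%f %p'
# stop just before the target time, instead of just after it
recovery_target_time = '2010-07-09 07:45:00'
recovery_target_inclusive = 'false'
# follow the original timeline when retrying after a failed attempt
recovery_target_timeline = '1'

As noted at the end of the thread, recovery does not replay the whole week of archived WAL: it starts from the checkpoint recorded by pg_start_backup(), which is what the *.backup history file in the archive describes.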
[ { "msg_contents": "I'm having trouble getting the query planner to use indexes. The situation occurs when writing a query that uses functions for defining the parameters for the conditions on the indexed columns. The system I'm running is Windows Server 2003, using version 8.4.2 of PostgreSQL.\n\nThis is the following table that I'm running my query against:\n\nCREATE TABLE crs_coordinate\n(\n id integer NOT NULL,\n nod_id integer NOT NULL,\n value1 numeric(22,12),\n value2 numeric(22,12),\n CONSTRAINT crs_coordinate_pkey PRIMARY KEY (id)\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX coo_value1 ON crs_coordinate USING btree (value1);\nCREATE INDEX coo_value2 ON crs_coordinate USING btree (value2);\n\nThis table has 23 million rows in it and was analysed just before planning my queries. \n\nThis is the query that does not use the indexes:\n\nSELECT\n coo.nod_id,\n 6400000*radians(sqrt((coo.value1 - -41.0618)^2+((coo.value2 - 175.58461)*cos(radians(-41.0618)))^2)) as distance\nFROM \n crs_coordinate coo\nWHERE\n coo.value1 between -41.0618-degrees(1200.0/6400000.0) and -41.0618+degrees(1200.0/6400000.0) and\n coo.value2 between 175.58461-degrees(1200.0/6400000.0)/(cos(radians(-41.0618))) and 175.58461+degrees(1200.0/6400000.0)/(cos(radians(-41.0618)));\n\nSeq Scan on crs_coordinate coo (cost=0.00..1039607.49 rows=592 width=28)\n Filter: (((value1)::double precision >= (-41.0725429586587)::double precision) AND ((value1)::double precision <= (-41.0510570413413)::double precision) AND ((value2)::double precision >= 175.570362072701::double precision) AND ((value2)::double precision <= 175.598857927299::double precision))\n\nHowever if I pre-evaluated the parameters for the where condition on the value1 and value2 columns, the planner chooses to use the indexes:\n\nSELECT\n coo.nod_id,\n 6400000*radians(sqrt((coo.value1 - -41.0618)^2+((coo.value2 - 175.58461)*cos(radians(-41.0618)))^2)) as distance\nFROM \n crs_coordinate coo\nWHERE\n coo.value1 BETWEEN -41.07254296 AND -41.05105704 AND\n coo.value2 BETWEEN 175.57036207 AND 175.59885792;\n\nBitmap Heap Scan on crs_coordinate coo (cost=5299.61..6705.41 rows=356 width=28)\n Recheck Cond: ((value1 >= (-41.07254296)) AND (value1 <= (-41.05105704)) AND (value2 >= 175.57036207) AND (value2 <= 175.59885792))\n -> BitmapAnd (cost=5299.61..5299.61 rows=356 width=0)\n -> Bitmap Index Scan on coo_value1 (cost=0.00..1401.12 rows=54923 width=0)\n Index Cond: ((value1 >= (-41.07254296)) AND (value1 <= (-41.05105704)))\n -> Bitmap Index Scan on coo_value2 (cost=0.00..3898.06 rows=153417 width=0)\n Index Cond: ((value2 >= 175.57036207) AND (value2 <= 175.59885792))\n\nSo why is the first query not using the indexes on the value1 and value2 columns? I'm assuming that both the COS and RAIDIANS functions are STRICT IMMUTABLE, so logically the evaluation of these functions in the where clause should be inlined. Looking at the query plan this inlining does seem to be happening...\n\nAt this stage I have a work around by putting the query into a plpgsql function and using dynamic SQL. But it is still frustrating why the planner seems to be working in a far from optimal fashion. Can anyone shed some light on this for me?\n\nThanks,\nJeremy\n\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. 
\nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Sat, 10 Jul 2010 10:36:15 +1200", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Index usage with functions in where condition" }, { "msg_contents": "Jeremy Palmer <[email protected]> writes:\n> This is the query that does not use the indexes:\n\n> SELECT\n> coo.nod_id,\n> 6400000*radians(sqrt((coo.value1 - -41.0618)^2+((coo.value2 - 175.58461)*cos(radians(-41.0618)))^2)) as distance\n> FROM \n> crs_coordinate coo\n> WHERE\n> coo.value1 between -41.0618-degrees(1200.0/6400000.0) and -41.0618+degrees(1200.0/6400000.0) and\n> coo.value2 between 175.58461-degrees(1200.0/6400000.0)/(cos(radians(-41.0618))) and 175.58461+degrees(1200.0/6400000.0)/(cos(radians(-41.0618)));\n\nThose expressions yield float8, not numeric, and numeric vs float8 isn't\nan indexable operator for reasons we needn't get into here. You should\nprobably rethink whether numeric is really the best choice of datatype\nfor your columns, if this is the sort of value you expect to work with\n--- you're paying a considerable price in speed and space for\nperhaps-illusory precision gains. But if you insist on using numeric\nthen the solution is to cast the expression results to numeric\nexplicitly.\n\nBTW I wonder whether you ought not be looking into postgis rather than\nrolling-your-own coordinate arithmetic ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jul 2010 19:19:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage with functions in where condition " }, { "msg_contents": "Hi Tom,\n\nThanks for the help - much appreciated.\n\nYes I'm using PostGIS, and with a simply join to a relating table I could get access to the geometry for these point positions. Is using the GIST r-tree index faster than using the 2 b-tree indexes on the lat and long values? I guess this is a question for the PostGIS guys and a quick test could tell me anyway! 
My memory is that the GIST r-tree index is slow for points at the moment, and that a good implementation of a kd-tree index over GIST is required for better speed.\n\nRegards,\n\nJeremy Palmer\nGeodetic Surveyor\nNational Geodetic Office\n \nLand Information New Zealand | Toitu te whenua\n160 Lambton Quay | Private Box 5501 | Wellington 6145\n \nDDI: 64 (0)4 498 3537 | Fax: 64 (0)4 498 3837 | www.linz.govt.nz\n\n \n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Saturday, 10 July 2010 11:20 a.m.\nTo: Jeremy Palmer\nCc: [email protected]\nSubject: Re: [PERFORM] Index usage with functions in where condition \n\nJeremy Palmer <[email protected]> writes:\n> This is the query that does not use the indexes:\n\n> SELECT\n> coo.nod_id,\n> 6400000*radians(sqrt((coo.value1 - -41.0618)^2+((coo.value2 - 175.58461)*cos(radians(-41.0618)))^2)) as distance\n> FROM \n> crs_coordinate coo\n> WHERE\n> coo.value1 between -41.0618-degrees(1200.0/6400000.0) and -41.0618+degrees(1200.0/6400000.0) and\n> coo.value2 between 175.58461-degrees(1200.0/6400000.0)/(cos(radians(-41.0618))) and 175.58461+degrees(1200.0/6400000.0)/(cos(radians(-41.0618)));\n\nThose expressions yield float8, not numeric, and numeric vs float8 isn't\nan indexable operator for reasons we needn't get into here. You should\nprobably rethink whether numeric is really the best choice of datatype\nfor your columns, if this is the sort of value you expect to work with\n--- you're paying a considerable price in speed and space for\nperhaps-illusory precision gains. But if you insist on using numeric\nthen the solution is to cast the expression results to numeric\nexplicitly.\n\nBTW I wonder whether you ought not be looking into postgis rather than\nrolling-your-own coordinate arithmetic ...\n\n\t\t\tregards, tom lane\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Sat, 10 Jul 2010 12:25:55 +1200", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage with functions in where condition " } ]
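For readers following the thread above: one way to apply Tom Lane's advice without changing the column types is to cast the computed bounds back to numeric, so the comparisons stay on the indexable numeric operators. A sketch, untested against the original table:

SELECT coo.nod_id,
       6400000*radians(sqrt((coo.value1 - -41.0618)^2 +
           ((coo.value2 - 175.58461)*cos(radians(-41.0618)))^2)) AS distance
FROM crs_coordinate coo
WHERE coo.value1 BETWEEN (-41.0618 - degrees(1200.0/6400000.0))::numeric
                     AND (-41.0618 + degrees(1200.0/6400000.0))::numeric
  AND coo.value2 BETWEEN (175.58461 - degrees(1200.0/6400000.0)/cos(radians(-41.0618)))::numeric
                     AND (175.58461 + degrees(1200.0/6400000.0)/cos(radians(-41.0618)))::numeric;

Because degrees(), radians() and cos() of constants fold to constants, casting the bounds to numeric should let the planner use the btree indexes on value1 and value2 just as it does when literal constants are supplied.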
[ { "msg_contents": "-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSoci�t� Ax�ge\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com\n\n\nRobert Haas a �crit :\n> On Wed, Jul 7, 2010 at 10:39 AM, damien hostin <[email protected]> wrote:\n> \n>> Hello again,\n>>\n>> At last, I check the same query with the same data on my desktop computer.\n>> Just after loading the data, the queries were slow, I launch a vaccum\n>> analyse which collect good stats on the main table, the query became quick\n>> (~200ms). Now 1classic sata disk computer is faster than our little monster\n>> server !!\n>> \n>\n> Have you tried running ANALYZE on the production server?\n>\n> You might also want to try ALTER TABLE ... SET STATISTICS to a large\n> value on some of the join columns involved in the query.\n>\n> \nHello,\n\nBefore comparing the test case on the two machines, I run analyse on the \nwhole and look at pg_stats table to see if change occurs for the \ncolumns. but on the production server the stats never became as good as \non the desktop computer. I set statistic at 10000 on column used by the \njoin, run analyse which take a 3000000 row sample then look at the \nstats. The stats are not as good as on the desktop. Row number is nearly \nthe same but only 1 or 2 values are found.\n\nThe data are not balanced the same way on the two computer :\n- Desktop is 12000 rows with 6000 implicated in the query (50%),\n- \"Production\" (actually a dev/test server) is 6 million rows with 6000 \nimplicated in the query (0,1%).\nColumns used in the query are nullable, and in the 5994000 other rows \nthat are not implicated in the query these columns are null.\n\nI don't know if the statistic target is a % or a number of value to \nobtain, but event set at max (10000), it didn't managed to collect good \nstats (for this particular query).\nAs I don't know what more to do, my conclusion is that the data need to \nbe better balanced to allow the analyse gather better stats. But if \nthere is a way to improve the stats/query with this ugly balanced data, \nI'm open to it !\n\nI hope that in real production, data will never be loaded this way. If \nthis appened we will maybe set enable_nestloop to off, but I don't think \nit's a good solution, other query have a chance to get slower.\n\n\nThanks for helping\n\n-- \nHOSTIN Damien - Equipe R&D\nTel:+33(0)4 63 05 95 40\nSoci�t� Ax�ge\n23 rue Saint Simon\n63000 Clermont Ferrand\nwww.axege.com", "msg_date": "Mon, 12 Jul 2010 11:03:04 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: Slow query with planner row strange estimation]" }, { "msg_contents": "-Ooops sorry for the spam-\n\n\n", "msg_date": "Mon, 12 Jul 2010 11:04:59 +0200", "msg_from": "damien hostin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: Slow query with planner row strange estimation]" } ]
[ { "msg_contents": "Hi,\n\nI need to log the start and end time of the procedures in a table. But the start and end time are same. This is how I recreated the issue.\n\ncreate table test_time (time timestamp);\ndelete from test_time;\ninsert into test_time select now();\nSELECT pg_sleep(10);\ninsert into test_time select now();\nSELECT pg_sleep(10);\ninsert into test_time select now();\nSELECT pg_sleep(10);\ninsert into test_time select now();\nSELECT pg_sleep(10);\nselect * from test_time;\n\n\"2010-07-12 12:43:40.509746\"\n\"2010-07-12 12:43:40.509746\"\n\"2010-07-12 12:43:40.509746\"\n\"2010-07-12 12:43:40.509746\"\n\nAtul Goel\nSENIOR DEVELOPER\n\nGlobal DataPoint\nMiddlesex House, 34-42 Cleveland Street\nLondon W1T 4LB, UK\nT: +44 (0)20 7079 4827\nM: +44 (0)7846765098\nwww.globaldatapoint.com<http://www.globaldatapoint.com/>\n\nThis e-mail is confidential and should not be used by anyone who is not the original intended recipient. Global DataPoint Limited does not accept liability for any statements made which are clearly the sender's own and not expressly made on behalf of Global DataPoint Limited. No contracts may be concluded on behalf of Global DataPoint Limited by means of e-mail communication. Global DataPoint Limited Registered in England and Wales with registered number 3739752 Registered Office Middlesex House, 34-42 Cleveland Street, London W1T 4LB\n\n\n\n\n\n\n\n\n\nHi, \n \nI need to log the start and end time of the procedures in a table. But the start and end time are same. This is how I recreated the issue.\n\n \ncreate table test_time (time timestamp);\ndelete from  test_time;\ninsert into test_time select now();\nSELECT pg_sleep(10);\ninsert into test_time select now();\nSELECT pg_sleep(10);\ninsert into test_time select now();\nSELECT pg_sleep(10);\ninsert into test_time select now();\nSELECT pg_sleep(10);\nselect * from test_time;\n \n\"2010-07-12 12:43:40.509746\"\n\"2010-07-12 12:43:40.509746\"\n\"2010-07-12 12:43:40.509746\"\n\"2010-07-12 12:43:40.509746\"\n \nAtul Goel\nSENIOR DEVELOPER\n \nGlobal DataPoint\nMiddlesex House, 34-42 Cleveland Street\nLondon W1T 4LB, UK\nT: +44 (0)20\n 7079 4827\nM: +44 (0)7846765098\nwww.globaldatapoint.com\n \n\nThis e-mail is confidential and should not be used by anyone who is not the original intended recipient. Global DataPoint Limited does not accept liability for any statements made which are clearly the sender's own and not expressly made on behalf of Global\n DataPoint Limited. No contracts may be concluded on behalf of Global DataPoint Limited by means of e-mail communication. Global DataPoint Limited Registered in England and Wales with registered number 3739752 Registered Office Middlesex House, 34-42 Cleveland\n Street, London W1T 4LB", "msg_date": "Mon, 12 Jul 2010 11:00:00 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "now() gives same time within the session" }, { "msg_contents": "In response to [email protected] :\n> Hi,\n> \n> \n> \n> I need to log the start and end time of the procedures in a table. But the\n> start and end time are same. 
This is how I recreated the issue.\n> \n> \n> \n> create table test_time (time timestamp);\n> \n> delete from test_time;\n> \n> insert into test_time select now();\n\n\nUse timeofday() instead, now() returns the transaction starting time.\n\nBEGIN\ntest=*# select now();\n now\n-------------------------------\n 2010-07-12 13:13:28.907043+02\n(1 row)\n\ntest=*# select timeofday();\n timeofday\n--------------------------------------\n Mon Jul 12 13:13:36.187703 2010 CEST\n(1 row)\n\ntest=*# select now();\n now\n-------------------------------\n 2010-07-12 13:13:28.907043+02\n(1 row)\n\ntest=*#\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Mon, 12 Jul 2010 13:15:00 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: now() gives same time within the session" }, { "msg_contents": "Sure thanks a lot.\n\nRegards,\nAtul Goel\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of A. Kretschmer\nSent: 12 July 2010 12:15\nTo: [email protected]\nSubject: Re: [PERFORM] now() gives same time within the session\n\nIn response to [email protected] :\n> Hi,\n>\n>\n>\n> I need to log the start and end time of the procedures in a table. But the\n> start and end time are same. This is how I recreated the issue.\n>\n>\n>\n> create table test_time (time timestamp);\n>\n> delete from test_time;\n>\n> insert into test_time select now();\n\n\nUse timeofday() instead, now() returns the transaction starting time.\n\nBEGIN\ntest=*# select now();\n now\n-------------------------------\n 2010-07-12 13:13:28.907043+02\n(1 row)\n\ntest=*# select timeofday();\n timeofday\n--------------------------------------\n Mon Jul 12 13:13:36.187703 2010 CEST\n(1 row)\n\ntest=*# select now();\n now\n-------------------------------\n 2010-07-12 13:13:28.907043+02\n(1 row)\n\ntest=*#\n\n\nRegards, Andreas\n--\nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nThis e-mail is confidential and should not be used by anyone who is not the original intended recipient. Global DataPoint Limited does not accept liability for any statements made which are clearly the sender's own and not expressly made on behalf of Global DataPoint Limited. No contracts may be concluded on behalf of Global DataPoint Limited by means of e-mail communication. Global DataPoint Limited Registered in England and Wales with registered number 3739752 Registered Office Middlesex House, 34-42 Cleveland Street, London W1T 4LB\n", "msg_date": "Mon, 12 Jul 2010 12:23:40 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: now() gives same time within the session" }, { "msg_contents": "On Mon, Jul 12, 2010 at 4:15 AM, A. Kretschmer\n<[email protected]> wrote:\n> In response to [email protected] :\n>> Hi,\n>>\n>>\n>>\n>> I need to log the start and end time of the procedures in a table. But the\n>> start and end time are same. 
This is how I recreated the issue.\n>>\n>>\n>>\n>> create table test_time (time timestamp);\n>>\n>> delete from  test_time;\n>>\n>> insert into test_time select now();\n>\n>\n> Use timeofday() instead, now() returns the transaction starting time.\n\n\nIs this part of the SQL standard?\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Mon, 12 Jul 2010 06:11:31 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: now() gives same time within the session" }, { "msg_contents": "On 12 July 2010 14:11, Rob Wultsch <[email protected]> wrote:\n> On Mon, Jul 12, 2010 at 4:15 AM, A. Kretschmer\n> <[email protected]> wrote:\n>> In response to [email protected] :\n>>> Hi,\n>>>\n>>>\n>>>\n>>> I need to log the start and end time of the procedures in a table. But the\n>>> start and end time are same. This is how I recreated the issue.\n>>>\n>>>\n>>>\n>>> create table test_time (time timestamp);\n>>>\n>>> delete from  test_time;\n>>>\n>>> insert into test_time select now();\n>>\n>>\n>> Use timeofday() instead, now() returns the transaction starting time.\n>\n>\n> Is this part of the SQL standard?\n>\n\nI don't believe it is. See\nhttp://www.postgresql.org/docs/current/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT\nfor more info.\n\nThom\n", "msg_date": "Mon, 12 Jul 2010 14:23:55 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: now() gives same time within the session" }, { "msg_contents": "In response to Rob Wultsch :\n> On Mon, Jul 12, 2010 at 4:15 AM, A. Kretschmer\n> <[email protected]> wrote:\n> > Use timeofday() instead, now() returns the transaction starting time.\n> \n> \n> Is this part of the SQL standard?\n\nDon't know, sorry.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Mon, 12 Jul 2010 15:24:56 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: now() gives same time within the session" }, { "msg_contents": "On Mon, Jul 12, 2010 at 06:11:31AM -0700, Rob Wultsch wrote:\n> On Mon, Jul 12, 2010 at 4:15 AM, A. Kretschmer\n> <[email protected]> wrote:\n> > In response to [email protected] :\n> >> Hi,\n> >>\n> >>\n> >>\n> >> I need to log the start and end time of the procedures in a table. But the\n> >> start and end time are same. This is how I recreated the issue.\n> >>\n> >>\n> >>\n> >> create table test_time (time timestamp);\n> >>\n> >> delete from ?test_time;\n> >>\n> >> insert into test_time select now();\n> >\n> >\n> > Use timeofday() instead, now() returns the transaction starting time.\n> \n> \n> Is this part of the SQL standard?\n> \nNo, see section 9.9.4 of the manual.\n\nCheers,\nKen\n", "msg_date": "Mon, 12 Jul 2010 08:26:52 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: now() gives same time within the session" }, { "msg_contents": "On 12/07/10 14:15, A. Kretschmer wrote:\n> Use timeofday() instead, now() returns the transaction starting time.\n\ntimeofday() is a legacy function kept only for backwards-compatibility. \nIt returns a string, which is quite awkward. 
Use clock_timestamp() instead.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 18 Jul 2010 11:54:05 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: now() gives same time within the session" } ]
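To make the fix concrete for the original test case: clock_timestamp() returns a value that advances within a transaction and within a session, so the logging table records distinct start and end times. A minimal re-run of the original example (clock_timestamp() returns timestamptz, which is cast to the column's timestamp type on insert):

create table test_time (time timestamp);
insert into test_time select clock_timestamp();
SELECT pg_sleep(10);
insert into test_time select clock_timestamp();
select * from test_time;
-- two rows with timestamps roughly 10 seconds apart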
[ { "msg_contents": "Hello.\n\nToday I've found out strange results for query below.\nselect version();\n version\n\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.2 on amd64-portbld-freebsd8.0, compiled by GCC cc (GCC)\n4.2.1 20070719 [FreeBSD], 64-bit\n\n--Original query:\nexplain analyze select exists(select * from investor i where i.company_id =\nthis_.id) from COMPANY this_ order by this_.rank desc, this_.id asc limit\n10;\n Limit (cost=0.00..50.67 rows=10 width=16) (actual time=144.489..144.556\nrows=10 loops=1)\n -> Index Scan using comp_rank_id on company this_\n (cost=0.00..34616009.08 rows=6831169 width=16) (actual\ntime=144.484..144.524 rows=10 loops=1)\n SubPlan 1\n -> Index Scan using company_invs on investor i (cost=0.00..9.52\nrows=2 width=0) (never executed)\n Index Cond: ((company_id)::bigint = $0)\n SubPlan 2\n -> Seq Scan on investor i (cost=0.00..1836.17 rows=41717\nwidth=8) (actual time=0.006..72.364 rows=41722 loops=1)\n Total runtime: 144.975 ms\n(8 rows)\n\n--set enable_seqscan=false;\nexplain analyze select exists(select * from investor i where i.company_id =\nthis_.id) from COMPANY this_ order by this_.rank desc, this_.id asc limit\n10;\n Limit (cost=0.00..50.67 rows=10 width=16) (actual time=0.045..0.177\nrows=10 loops=1)\n -> Index Scan using comp_rank_id on company this_\n (cost=0.00..34616009.08 rows=6831169 width=16) (actual time=0.041..0.146\nrows=10 loops=1)\n SubPlan 1\n -> Index Scan using company_invs on investor i (cost=0.00..9.52\nrows=2 width=0) (actual time=0.007..0.007 rows=1 loops=10)\n Index Cond: ((company_id)::bigint = $0)\n SubPlan 2\n -> Seq Scan on investor i (cost=10000000000.00..10000001836.17\nrows=41717 width=8) (never executed)\n Total runtime: 0.253 ms\n(8 rows)\n\n--limit inside exists\nexplain analyze select exists(select * from investor i where i.company_id =\nthis_.id limit 1) from COMPANY this_ order by this_.rank desc, this_.id asc\nlimit 10;\n Limit (cost=0.00..50.67 rows=10 width=16) (actual time=0.052..0.219\nrows=10 loops=1)\n -> Index Scan using comp_rank_id on company this_\n (cost=0.00..34616009.08 rows=6831169 width=16) (actual time=0.049..0.189\nrows=10 loops=1)\n SubPlan 1\n -> Limit (cost=0.00..4.76 rows=1 width=422) (actual\ntime=0.011..0.011 rows=1 loops=10)\n -> Index Scan using company_invs on investor i\n (cost=0.00..9.52 rows=2 width=422) (actual time=0.007..0.007 rows=1\nloops=10)\n Index Cond: ((company_id)::bigint = $0)\n Total runtime: 0.291 ms\n(7 rows)\n\nSo, my Qs:\n1) Do we really have alternative plans for SubPlan that are selected at\nruntime? Wow.\n2) Why \"Seq scan\" plan is selected by default? Is it because of outer limit\nnot being applied when calculating costs for subplans at runtime?\n3) Why does limit inside exists helps? 
Is it simply because new\n\"alternative\" logic in not applied for \"complex case\"?\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nHello.Today I've found out strange results for query below.select version();                                                 version                                                  \n---------------------------------------------------------------------------------------------------------- PostgreSQL 8.4.2 on amd64-portbld-freebsd8.0, compiled by GCC cc (GCC) 4.2.1 20070719  [FreeBSD], 64-bit\n--Original query:explain analyze select exists(select * from investor i where i.company_id = this_.id) from COMPANY this_ order by this_.rank desc, this_.id asc limit 10; Limit  (cost=0.00..50.67 rows=10 width=16) (actual time=144.489..144.556 rows=10 loops=1)\n   ->  Index Scan using comp_rank_id on company this_  (cost=0.00..34616009.08 rows=6831169 width=16) (actual time=144.484..144.524 rows=10 loops=1)         SubPlan 1           ->  Index Scan using company_invs on investor i  (cost=0.00..9.52 rows=2 width=0) (never executed)\n                 Index Cond: ((company_id)::bigint = $0)         SubPlan 2           ->  Seq Scan on investor i  (cost=0.00..1836.17 rows=41717 width=8) (actual time=0.006..72.364 rows=41722 loops=1)\n Total runtime: 144.975 ms(8 rows)--set enable_seqscan=false;explain analyze select exists(select * from investor i where i.company_id = this_.id) from COMPANY this_ order by this_.rank desc, this_.id asc limit 10;\n Limit  (cost=0.00..50.67 rows=10 width=16) (actual time=0.045..0.177 rows=10 loops=1)   ->  Index Scan using comp_rank_id on company this_  (cost=0.00..34616009.08 rows=6831169 width=16) (actual time=0.041..0.146 rows=10 loops=1)\n         SubPlan 1           ->  Index Scan using company_invs on investor i  (cost=0.00..9.52 rows=2 width=0) (actual time=0.007..0.007 rows=1 loops=10)                 Index Cond: ((company_id)::bigint = $0)\n         SubPlan 2           ->  Seq Scan on investor i  (cost=10000000000.00..10000001836.17 rows=41717 width=8) (never executed) Total runtime: 0.253 ms(8 rows)\n--limit inside existsexplain analyze select exists(select * from investor i where i.company_id = this_.id limit 1) from COMPANY this_ order by this_.rank desc, this_.id asc limit 10; Limit  (cost=0.00..50.67 rows=10 width=16) (actual time=0.052..0.219 rows=10 loops=1)\n   ->  Index Scan using comp_rank_id on company this_  (cost=0.00..34616009.08 rows=6831169 width=16) (actual time=0.049..0.189 rows=10 loops=1)         SubPlan 1           ->  Limit  (cost=0.00..4.76 rows=1 width=422) (actual time=0.011..0.011 rows=1 loops=10)\n                 ->  Index Scan using company_invs on investor i  (cost=0.00..9.52 rows=2 width=422) (actual time=0.007..0.007 rows=1 loops=10)                       Index Cond: ((company_id)::bigint = $0)\n Total runtime: 0.291 ms(7 rows)So, my Qs:1) Do we really have alternative plans for SubPlan that are selected at runtime? Wow.2) Why \"Seq scan\" plan is selected by default? Is it because of outer limit not being applied when calculating costs for subplans at runtime?\n3) Why does limit inside exists helps? 
Is it simply because new \"alternative\" logic in not applied for \"complex case\"?-- Best regards, Vitalii Tymchyshyn", "msg_date": "Mon, 12 Jul 2010 16:29:07 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Exists, limit and alternate plans" }, { "msg_contents": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> So, my Qs:\n> 1) Do we really have alternative plans for SubPlan that are selected at\n> runtime? Wow.\n\nYup, see the AlternativeSubPlan stuff.\n\n> 2) Why \"Seq scan\" plan is selected by default? Is it because of outer limit\n> not being applied when calculating costs for subplans at runtime?\n\nIt's currently driven off the estimated rowcount for the parent plan\nnode --- 6831169 in your example. The subplan cannot see that the\nparent plan node will be terminated short of full execution, so it\nthinks that hashing the whole investor table will be a win.\nObviously it'd be nice to improve that for cases like upper LIMITs.\n\n> 3) Why does limit inside exists helps?\n\nI think it defeats the applicability of the hash-the-whole-subtable\napproach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jul 2010 10:45:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exists, limit and alternate plans " } ]
[ { "msg_contents": "\n\n\nGreen Peace - Climate\n\n\n\n\n\n\n\n\n\nDear Eco-warrior,\n\nYour friend Angayarkanni requests your support for a few environmental causes. \n\nThanks to the use and throw culture, the Earth is reeling from the effect of mounting waste. The need of the hour is to reduce, reuse and recycle. This needs to be communicated to as many people as possible. For this purpose, we have started a few campaigns to promote different causes. We would like you to support our initiative by pledging to do your bit for the environment.\n\nSave the climate: Climate change is something that we need to take very seriously. It will lead to a rise in the sea-levels leading to loss of homes of millions of people. So we need to act today. Click here to support this cause.\n\nSave the seas: We don't realize where our non-degradable products go. But most of it ends up in the sea, resulting in destroying marine life. Let us take a pledge to save our seas. Click here to support this cause.\n\nSave your food: We would like the government to impose a ban on genetically modified food. Click here to support this cause.\n\nSay no to toxins: The amount of toxic e-waste being consigned to the dumping grounds is alarming. These waste end up entering our bodies through the food we eat and water we drink. Let's pledge to save our foods. Click here to support this cause.\n\nDo visit us on www.earthvote.org for more information. \n\n\n\nTeam Greenpeace\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 13 Jul 2010 12:12:36 +0580", "msg_from": "kangayarkanni <[email protected]>", "msg_from_op": true, "msg_subject": "Who are you voting for?" } ]
[ { "msg_contents": "Hi,\nI have table \"ARTICLE\" containing a String a field \"STATUS\" that \nrepresents a number in binary format (for ex: 10011101).\nMy application issues queries with where conditions that uses BITAND \noperator on this field (for ex: select * from article where status & 4 = 4).\nThus i'm facing performance problemes with these select queries: the \nqueries are too slow.\nSince i'm using the BITAND operator in my conditions, creating an index \non the status filed is useless\n and since the second operator variable (status & 4 = 4; status & 8 = \n8; status & 16 = 16...) a functional index is also usless (because a \nfunctional index require the use of a function that accept only table \ncolumn as input parameter: constants are not accepted).\nSo is there a way to enhance the performance of these queries?\nThanks,\nElias\n", "msg_date": "Tue, 13 Jul 2010 14:48:09 +0300", "msg_from": "Elias Ghanem <[email protected]>", "msg_from_op": true, "msg_subject": "Queries with conditions using bitand operator" }, { "msg_contents": "On 07/13/2010 06:48 AM, Elias Ghanem wrote:\n> Hi,\n> I have table \"ARTICLE\" containing a String a field \"STATUS\" that represents a number in binary format (for ex: 10011101).\n> My application issues queries with where conditions that uses BITAND operator on this field (for ex: select * from article where status & 4 = 4).\n> Thus i'm facing performance problemes with these select queries: the queries are too slow.\n> Since i'm using the BITAND operator in my conditions, creating an index on the status filed is useless\n> and since the second operator variable (status & 4 = 4; status & 8 = 8; status & 16 = 16...) a functional index is also usless (because a functional index require the use of a function that accept only table column as input parameter: constants are not accepted).\n> So is there a way to enhance the performance of these queries?\n> Thanks,\n> Elias\n>\n\nHow many flags are there? If its not too many you could make a separate column for each... but then that would be lots of indexes too...\n\nOne other thought I had was to make it a text column, turn the flags into words (space separated) and use full text indexes.\n\nI played around with int's and string's but I couldnt find a way using the & operator.\n\n-Andy\n", "msg_date": "Tue, 13 Jul 2010 18:19:20 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries with conditions using bitand operator" }, { "msg_contents": "On 07/13/2010 04:48 AM, Elias Ghanem wrote:\n> Hi,\n> I have table \"ARTICLE\" containing a String a field \"STATUS\" that\n> represents a number in binary format (for ex: 10011101).\n> My application issues queries with where conditions that uses BITAND\n> operator on this field (for ex: select * from article where status & 4 =\n> 4).\n> Thus i'm facing performance problemes with these select queries: the\n> queries are too slow.\n> Since i'm using the BITAND operator in my conditions, creating an index\n> on the status filed is useless\n> and since the second operator variable (status & 4 = 4; status & 8 = 8;\n> status & 16 = 16...) 
a functional index is also usless (because a\n> functional index require the use of a function that accept only table\n> column as input parameter: constants are not accepted).\n> So is there a way to enhance the performance of these queries?\n\nYou haven't given a lot of info to help us help you, but would something\nalong these lines be useful to you?\n\ndrop table if exists testbit;\ncreate table testbit(\n id serial primary key,\n article text,\n status int\n);\n\ninsert into testbit (article, status) select 'article ' ||\ngenerate_series::text, generate_series % 256 from\ngenerate_series(1,1000000);\n\ncreate index idx1 on testbit(article) where status & 1 = 1;\ncreate index idx2 on testbit(article) where status & 2 = 2;\ncreate index idx4 on testbit(article) where status & 4 = 4;\ncreate index idx8 on testbit(article) where status & 8 = 8;\ncreate index idx16 on testbit(article) where status & 16 = 16;\ncreate index idx32 on testbit(article) where status & 512 = 512;\n\nupdate testbit set status = status + 512 where id in (42, 4242, 424242);\nexplain analyze select * from testbit where status & 512 = 512;\n QUERY PLAN\n------------------------------------------------------------------\n Index Scan using idx32 on testbit (cost=0.00..4712.62 rows=5000\n width=22) (actual time=0.080..0.085 rows=3 loops=1)\n Total runtime: 0.170 ms\n\n\nHTH,\n\nJoe\n\n-- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & Support", "msg_date": "Tue, 13 Jul 2010 18:07:50 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries with conditions using bitand operator" }, { "msg_contents": "One of the possibilities would be to decompose your bitmap into an\narray of base integers and then create a GIN (or GIST) index on that\narray (intarray contrib package). 
This would make sense if your\narticles are distributed relatively equally and if do not do big ORDER\nBY and then LIMIT/OFFSET queries, that usually will need to sort the\nresults gotten from the GIN index.\nAs your are also probably doing some tsearch queries on the articles,\nyou can actually build combined (tverctor, intarray) GIN/GIST index to\noptimize your searches.\n\nA simple function, that can help you stripping your bitmap integer to\narray of positions could look like:\n\n-- DROP FUNCTION utils.bitmap_to_position_intarray(bitmap integer);\n\nCREATE OR REPLACE FUNCTION utils.bitmap_to_position_intarray(bitmap\ninteger)\n RETURNS integer[] AS\n$BODY$\n-- test\n-- select utils.bitmap_to_position_intarray(5);\n-- test performance\n-- select utils.bitmap_to_position_intarray(s.i) from\ngenerate_series(1, 10000) as s(i);\n--\n\nSELECT ARRAY(\n SELECT s.i + 1 -- here we do +1 to make the position of the first\nbit 1\n FROM generate_series(0, 31) as s(i)\n WHERE $1 & ( 1 << s.i ) > 0\n );\n$BODY$\n LANGUAGE SQL IMMUTABLE STRICT;\n\nYou can create a GIN index directly using this function over your\nbitmap field and then using array set operations will make the planner\nto use the GIN index (more information about these indexes here:\nhttp://www.postgresql.org/docs/8.4/interactive/textsearch-indexes.html):\n\nCREATE INDEX idx_article_status_gin ON article USING\ngin( (utils.bitmap_to_position_intarray(STATUS) ) );\n\nand then you can do:\n\nSELECT * FROM article WHERE utils.bitmap_to_position_intarray(STATUS)\n&& ARRAY[1,5];\n\nor\n\nSELECT * FROM article WHERE utils.bitmap_to_position_intarray(STATUS)\n&& utils.bitmap_to_position_intarray(5);\n\nHave a look on the possible array set operations in\nhttp://www.postgresql.org/docs/8.4/interactive/intarray.html.\n\nOtherwise a solution from Jeo Conway to create separate indexes for\neach bit also is worth to be looked up. This has actually drawbacks,\nthat you cannot look up combinations of bits efficiently. As an\nadvantage in the example from Jeo, you can efficiently do ORDER BY\narticle (or any other field, that you add into these limited\nindexes).\n\n", "msg_date": "Wed, 14 Jul 2010 07:49:50 -0700 (PDT)", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries with conditions using bitand operator" } ]
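One detail worth spelling out from the intarray suggestion above: the same GIN index also covers queries for combinations of flags, because gin__int_ops supports the "contains" operator @> as well as the "overlap" operator &&. A sketch reusing the helper function from the previous message (bit positions 1 and 3 are just example values):

-- articles with bit 1 OR bit 3 set
SELECT * FROM article
WHERE utils.bitmap_to_position_intarray(status) && ARRAY[1,3];

-- articles with bit 1 AND bit 3 set
SELECT * FROM article
WHERE utils.bitmap_to_position_intarray(status) @> ARRAY[1,3];

Both forms can use the expression index idx_article_status_gin, since the indexed expression matches the one in the WHERE clause.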
[ { "msg_contents": "Here's a query and its EXPLAIN ANALYZE output:\n\ncms=> select count(*) from forum;\n count\n-------\n 90675\n(1 row)\n\ncms=> explain analyze select id,title from forum where _fts_ @@\n'fer'::tsquery;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on forum (cost=29.21..361.21 rows=91 width=35)\n(actual time=2.946..63.646 rows=8449 loops=1)\n Recheck Cond: (_fts_ @@ '''fer'''::tsquery)\n -> Bitmap Index Scan on forum_fts (cost=0.00..29.19 rows=91\nwidth=0) (actual time=2.119..2.119 rows=8449 loops=1)\n Index Cond: (_fts_ @@ '''fer'''::tsquery)\n Total runtime: 113.641 ms\n(5 rows)\n\nThe problem is - tsearch2 seems too slow. I have nothing to compare it\nto but 113 ms for searching through this small table of 90,000 records\nseems too slow. The forum_fts index is of GIN type and the table\ncertainly fits into RAM.\n\nWhen I issue a dumb query without an index, I get a comparable order of\nmagnitude performance:\n\ncms=> explain analyze select id,title from forum where content ilike\n'%fer%';\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n Seq Scan on forum (cost=0.00..7307.44 rows=3395 width=35) (actual\ntime=0.030..798.375 rows=10896 loops=1)\n Filter: (content ~~* '%fer%'::text)\n Total runtime: 864.384 ms\n(3 rows)\n\ncms=> explain analyze select id,title from forum where content like '%fer%';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n Seq Scan on forum (cost=0.00..7307.44 rows=3395 width=35) (actual\ntime=0.024..146.959 rows=7596 loops=1)\n Filter: (content ~~ '%fer%'::text)\n Total runtime: 191.732 ms\n(3 rows)\n\nSome peculiarities of the setup which might or might not influence this\nperformance:\n\n1) I'm using ICU-patched postgresql because I cannot use my UTF-8 locale\notherwise - this is why the difference between the dumb queries is large\n(but I don't see how this can influence tsearch2 since it pre-builds the\ntsvector data with lowercase lexemes)\n\n2) My tsearch2 lexer is somewhat slow (but I don't see how it can\ninfluence these read-only queries on a pre-built, lexed and indexed data)\n\nAny ideas?\n\n", "msg_date": "Wed, 14 Jul 2010 13:47:52 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Understanding tsearch2 performance" }, { "msg_contents": "Something is not good with statistics, 91 est. vs 8449 actually returned.\nReturning 8449 rows could be quite long.\n\nOleg\nOn Wed, 14 Jul 2010, Ivan Voras wrote:\n\n> Here's a query and its EXPLAIN ANALYZE output:\n>\n> cms=> select count(*) from forum;\n> count\n> -------\n> 90675\n> (1 row)\n>\n> cms=> explain analyze select id,title from forum where _fts_ @@\n> 'fer'::tsquery;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on forum (cost=29.21..361.21 rows=91 width=35)\n> (actual time=2.946..63.646 rows=8449 loops=1)\n> Recheck Cond: (_fts_ @@ '''fer'''::tsquery)\n> -> Bitmap Index Scan on forum_fts (cost=0.00..29.19 rows=91\n> width=0) (actual time=2.119..2.119 rows=8449 loops=1)\n> Index Cond: (_fts_ @@ '''fer'''::tsquery)\n> Total runtime: 113.641 ms\n> (5 rows)\n>\n> The problem is - tsearch2 seems too slow. 
I have nothing to compare it\n> to but 113 ms for searching through this small table of 90,000 records\n> seems too slow. The forum_fts index is of GIN type and the table\n> certainly fits into RAM.\n>\n> When I issue a dumb query without an index, I get a comparable order of\n> magnitude performance:\n>\n> cms=> explain analyze select id,title from forum where content ilike\n> '%fer%';\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------\n> Seq Scan on forum (cost=0.00..7307.44 rows=3395 width=35) (actual\n> time=0.030..798.375 rows=10896 loops=1)\n> Filter: (content ~~* '%fer%'::text)\n> Total runtime: 864.384 ms\n> (3 rows)\n>\n> cms=> explain analyze select id,title from forum where content like '%fer%';\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------\n> Seq Scan on forum (cost=0.00..7307.44 rows=3395 width=35) (actual\n> time=0.024..146.959 rows=7596 loops=1)\n> Filter: (content ~~ '%fer%'::text)\n> Total runtime: 191.732 ms\n> (3 rows)\n>\n> Some peculiarities of the setup which might or might not influence this\n> performance:\n>\n> 1) I'm using ICU-patched postgresql because I cannot use my UTF-8 locale\n> otherwise - this is why the difference between the dumb queries is large\n> (but I don't see how this can influence tsearch2 since it pre-builds the\n> tsvector data with lowercase lexemes)\n>\n> 2) My tsearch2 lexer is somewhat slow (but I don't see how it can\n> influence these read-only queries on a pre-built, lexed and indexed data)\n>\n> Any ideas?\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 14 Jul 2010 16:31:28 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "On 07/14/10 14:31, Oleg Bartunov wrote:\n> Something is not good with statistics, 91 est. vs 8449 actually returned.\n\nI don't think the statistics difference is significant - it's actually\nusing the index so it's ok. And I've run vacuum analyze just before\nstarting the query.\n\n> Returning 8449 rows could be quite long.\n\nYou are right, I didn't test this. Issuing a query which returns a\nsmaller result set is much faster.\n\nBut, offtopic, why would returning 8500 records, each around 100 bytes\nlong so around 8.5 MB, over local unix sockets, be so slow? The machine\nin question has a sustained memory bendwidth of nearly 10 GB/s. Does\nPostgreSQL spend much time marshalling the data through the socket stream?\n\n", "msg_date": "Wed, 14 Jul 2010 14:55:36 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "On Wed, 14 Jul 2010, Ivan Voras wrote:\n\n>> Returning 8449 rows could be quite long.\n>\n> You are right, I didn't test this. Issuing a query which returns a\n> smaller result set is much faster.\n>\n> But, offtopic, why would returning 8500 records, each around 100 bytes\n> long so around 8.5 MB, over local unix sockets, be so slow? The machine\n> in question has a sustained memory bendwidth of nearly 10 GB/s. 
Does\n> PostgreSQL spend much time marshalling the data through the socket stream?\n\nIt's disk access time.\nin the very bad case it could take ~5 ms (for fast drive) to get one just\none row.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 14 Jul 2010 17:25:27 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "On 07/14/10 15:25, Oleg Bartunov wrote:\n> On Wed, 14 Jul 2010, Ivan Voras wrote:\n> \n>>> Returning 8449 rows could be quite long.\n>>\n>> You are right, I didn't test this. Issuing a query which returns a\n>> smaller result set is much faster.\n>>\n>> But, offtopic, why would returning 8500 records, each around 100 bytes\n>> long so around 8.5 MB, over local unix sockets, be so slow? The machine\n>> in question has a sustained memory bendwidth of nearly 10 GB/s. Does\n>> PostgreSQL spend much time marshalling the data through the socket\n>> stream?\n> \n> It's disk access time.\n> in the very bad case it could take ~5 ms (for fast drive) to get one just\n> one row.\n\nNo, it's not that. The table fits in RAM, I've verified there is no disk\nIO involved. Something else is wrong:\n\ncms=> explain analyze select id,title from forum where _fts_ @@\n'fer'::tsquery limit 10;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..43.31 rows=10 width=35) (actual time=0.194..0.373\nrows=10 loops=1)\n -> Index Scan using forum_fts on forum (cost=0.00..394.10 rows=91\nwidth=35) (actual time=0.182..0.256 rows=10 loops=1)\n Index Cond: (_fts_ @@ '''fer'''::tsquery)\n Total runtime: 0.507 ms\n(4 rows)\n\ncms=> explain analyze select id,title from forum where _fts_ @@\n'fer'::tsquery order by id limit 10;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=363.18..363.20 rows=10 width=35) (actual\ntime=118.358..118.516 rows=10 loops=1)\n -> Sort (cost=363.18..363.40 rows=91 width=35) (actual\ntime=118.344..118.396 rows=10 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 25kB\n -> Bitmap Heap Scan on forum (cost=29.21..361.21 rows=91\nwidth=35) (actual time=3.066..64.091 rows=8449 loops=1)\n Recheck Cond: (_fts_ @@ '''fer'''::tsquery)\n -> Bitmap Index Scan on forum_fts (cost=0.00..29.19\nrows=91 width=0) (actual time=2.106..2.106 rows=8449 loops=1)\n Index Cond: (_fts_ @@ '''fer'''::tsquery)\n Total runtime: 118.689 ms\n(9 rows)\n\nSee in the first query where I have a simple LIMIT, it fetches random 10\nrows quickly, but in the second one, as soon as I give it to execute and\ncalculate the entire result set before I limit it, the performance is\nhorrible.\n\n\n\n", "msg_date": "Wed, 14 Jul 2010 15:37:56 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "* Ivan Voras ([email protected]) wrote:\n> Total runtime: 0.507 ms\n[...]\n> Total runtime: 118.689 ms\n> \n> See in the first query where I have a simple LIMIT, it fetches random 10\n> rows quickly, but in 
the second one, as soon as I give it to execute and\n> calculate the entire result set before I limit it, the performance is\n> horrible.\n\nWhat you've shown is that it takes 0.5ms for 10 rows, and 118ms for 8500\nrows. Now, maybe I've missed something, but that's 0.05ms per row for\nthe first query and 0.01ms per row for the second, and you've added a\nsort into the mix. The performance of going through the data actually\nimproves on a per-record basis, since you're doing more in bulk.\n\nSince you're ordering by 'id', PG has to look at every row returned by\nthe index scan. That's not free.\n\nRegarding the statistics, it's entirely possible that the index is *not*\nthe fastest way to pull this data (it's nearly 10% of the table..), if\nthe stats were better it might use a seq scan instead, not sure how bad\nthe cost of the filter itself would be.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 14 Jul 2010 09:49:28 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "On 07/14/10 15:49, Stephen Frost wrote:\n> * Ivan Voras ([email protected]) wrote:\n>> Total runtime: 0.507 ms\n> [...]\n>> Total runtime: 118.689 ms\n>>\n>> See in the first query where I have a simple LIMIT, it fetches random 10\n>> rows quickly, but in the second one, as soon as I give it to execute and\n>> calculate the entire result set before I limit it, the performance is\n>> horrible.\n> \n> What you've shown is that it takes 0.5ms for 10 rows, and 118ms for 8500\n> rows.\n\nYes, but...\n\n> Now, maybe I've missed something, but that's 0.05ms per row for\n> the first query and 0.01ms per row for the second, and you've added a\n> sort into the mix. The performance of going through the data actually\n> improves on a per-record basis, since you're doing more in bulk.\n> \n> Since you're ordering by 'id', PG has to look at every row returned by\n> the index scan. That's not free.\n\nThis part of the processing is going on on the backend, and the backend\nneeds to sort through 8500 integers. I don't think the sort is\nsignificant here.\n\n> Regarding the statistics, it's entirely possible that the index is *not*\n> the fastest way to pull this data (it's nearly 10% of the table..), if\n> the stats were better it might use a seq scan instead, not sure how bad\n> the cost of the filter itself would be.\n\nI think that what I'm asking here is: is it reasonable for tsearch2 to\nextract 8,500 rows from an index of 90,000 rows in 118 ms, given that\nthe approximately same task can be done with an unindexed \"LIKE\"\noperator in nearly the same time?\n\n\n", "msg_date": "Wed, 14 Jul 2010 15:57:35 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding tsearch2 performance" } ]
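A caveat for anyone repeating the experiments in the thread above: the plans shown use a bitmap index scan, which is governed by enable_bitmapscan rather than enable_indexscan, so forcing a comparison against a plain sequential scan needs both planner switches off, roughly:

SET enable_bitmapscan = off;
SET enable_indexscan = off;
EXPLAIN ANALYZE
SELECT id, title FROM forum
 WHERE _fts_ @@ 'fer'::tsquery
 ORDER BY id LIMIT 10;
RESET enable_bitmapscan;
RESET enable_indexscan;

Whether the sequential scan actually wins for a predicate matching nearly 10% of the table is exactly the trade-off Stephen raises above.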
[ { "msg_contents": "Ivan Voras < [email protected] > wrote:\n> On 07/14/10 15:49, Stephen Frost wrote:\n \n>> Regarding the statistics, it's entirely possible that the index\n>> is *not* the fastest way to pull this data (it's nearly 10% of\n>> the table..)\n> \n> I think that what I'm asking here is: is it reasonable for\n> tsearch2 to extract 8,500 rows from an index of 90,000 rows in 118\n> ms, given that the approximately same task can be done with an\n> unindexed \"LIKE\" operator in nearly the same time?\n \nThe answer is \"yes.\" When it's 10% of the table, a sequential scan\ncan be more efficient than an index, as Stephen indicated.\n \n-Kevin\n", "msg_date": "Wed, 14 Jul 2010 09:03:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "On 07/14/10 16:03, Kevin Grittner wrote:\n> Ivan Voras < [email protected] > wrote:\n>> On 07/14/10 15:49, Stephen Frost wrote:\n> \n>>> Regarding the statistics, it's entirely possible that the index\n>>> is *not* the fastest way to pull this data (it's nearly 10% of\n>>> the table..)\n>>\n>> I think that what I'm asking here is: is it reasonable for\n>> tsearch2 to extract 8,500 rows from an index of 90,000 rows in 118\n>> ms, given that the approximately same task can be done with an\n>> unindexed \"LIKE\" operator in nearly the same time?\n> \n> The answer is \"yes.\" When it's 10% of the table, a sequential scan\n> can be more efficient than an index, as Stephen indicated.\n\nOk, to verify this I've tried increasing statistics on the field and\nrunning vacumm analyze full, which didn't help. Next, I've tried setting\nenable_indexscan to off, which also didn't do it:\n\ncms=> set enable_indexscan=off;\nSET\ncms=> explain analyze select id,title from forum where _fts_ @@\n'fer'::tsquery order by id limit 10;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=363.18..363.20 rows=10 width=35) (actual\ntime=192.243..192.406 rows=10 loops=1)\n -> Sort (cost=363.18..363.40 rows=91 width=35) (actual\ntime=192.229..192.283 rows=10 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 25kB\n -> Bitmap Heap Scan on forum (cost=29.21..361.21 rows=91\nwidth=35) (actual time=12.071..136.130 rows=8449 loops=1)\n Recheck Cond: (_fts_ @@ '''fer'''::tsquery)\n -> Bitmap Index Scan on forum_fts (cost=0.00..29.19\nrows=91 width=0) (actual time=11.169..11.169 rows=8449 loops=1)\n Index Cond: (_fts_ @@ '''fer'''::tsquery)\n Total runtime: 192.686 ms\n(9 rows)\n\nAny ideas on how to verify this?\n\n\n\n", "msg_date": "Wed, 14 Jul 2010 16:21:56 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "Ivan,\n\nhere is explain analyze output - 7122 out of 528155 docs\ntseval=# select count(*) from document;\n count \n--------\n 528155\n(1 row)\n\nTime: 345,562 ms\n\ntseval=# explain analyze select docno, title from document where vector @@ to_tsquery('english','mars');\n Bitmap Heap Scan on document (cost=1655.97..10518.34 rows=2641 width=13) (actual time=3.127..11.556 rows=7122 loops=1)\n Recheck Cond: (vector @@ '''mar'''::tsquery)\n -> Bitmap Index Scan on idx_vector (cost=0.00..1655.31 rows=2641 width=0) (actual time=1.899..1.899 rows=7122 loops=1)\n Index Cond: (vector @@ '''mar'''::tsquery)\n Total runtime: 12.303 ms\n(5 rows)\n\nThis is 
PostgreSQL 8.4.4 on Ubuntu machine.\n\n\nOleg\n\nOn Wed, 14 Jul 2010, Ivan Voras wrote:\n\n> On 07/14/10 16:03, Kevin Grittner wrote:\n>> Ivan Voras < [email protected] > wrote:\n>>> On 07/14/10 15:49, Stephen Frost wrote:\n>>\n>>>> Regarding the statistics, it's entirely possible that the index\n>>>> is *not* the fastest way to pull this data (it's nearly 10% of\n>>>> the table..)\n>>>\n>>> I think that what I'm asking here is: is it reasonable for\n>>> tsearch2 to extract 8,500 rows from an index of 90,000 rows in 118\n>>> ms, given that the approximately same task can be done with an\n>>> unindexed \"LIKE\" operator in nearly the same time?\n>>\n>> The answer is \"yes.\" When it's 10% of the table, a sequential scan\n>> can be more efficient than an index, as Stephen indicated.\n>\n> Ok, to verify this I've tried increasing statistics on the field and\n> running vacumm analyze full, which didn't help. Next, I've tried setting\n> enable_indexscan to off, which also didn't do it:\n>\n> cms=> set enable_indexscan=off;\n> SET\n> cms=> explain analyze select id,title from forum where _fts_ @@\n> 'fer'::tsquery order by id limit 10;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=363.18..363.20 rows=10 width=35) (actual\n> time=192.243..192.406 rows=10 loops=1)\n> -> Sort (cost=363.18..363.40 rows=91 width=35) (actual\n> time=192.229..192.283 rows=10 loops=1)\n> Sort Key: id\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Bitmap Heap Scan on forum (cost=29.21..361.21 rows=91\n> width=35) (actual time=12.071..136.130 rows=8449 loops=1)\n> Recheck Cond: (_fts_ @@ '''fer'''::tsquery)\n> -> Bitmap Index Scan on forum_fts (cost=0.00..29.19\n> rows=91 width=0) (actual time=11.169..11.169 rows=8449 loops=1)\n> Index Cond: (_fts_ @@ '''fer'''::tsquery)\n> Total runtime: 192.686 ms\n> (9 rows)\n>\n> Any ideas on how to verify this?\n>\n>\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 14 Jul 2010 18:36:48 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "Ivan Voras <[email protected]> wrote:\n \n> which didn't help.\n \nDidn't help what? You're processing each row in 22.8 microseconds. \nWhat kind of performance were you expecting?\n \n-Kevin\n", "msg_date": "Wed, 14 Jul 2010 10:16:56 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding tsearch2 performance" }, { "msg_contents": "On 14 July 2010 17:16, Kevin Grittner <[email protected]> wrote:\n> Ivan Voras <[email protected]> wrote:\n>\n>> which didn't help.\n>\n> Didn't help what?  You're processing each row in 22.8 microseconds.\n> What kind of performance were you expecting?\n\nWell, I guess you're right. 
What I was expecting is a large bump in\nspeed going from LIKE to tsearch2 but then there's also record\nprocessing outside the index itself, which is probably where the time\ngoes.\n", "msg_date": "Wed, 14 Jul 2010 22:25:47 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding tsearch2 performance" } ]
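One way to actually verify the sequential-scan comparison Ivan was attempting with enable_indexscan = off: that setting alone still permits the bitmap scan seen in his plan, so bitmap scans have to be disabled as well before the planner falls back to the plain sequential scan that Kevin and Stephen expect to be competitive at ~10% selectivity. A hedged sketch, reusing the same query as above:

BEGIN;
SET LOCAL enable_indexscan = off;
SET LOCAL enable_bitmapscan = off;  -- without this the Bitmap Index Scan is still chosen

EXPLAIN ANALYZE
SELECT id, title
FROM forum
WHERE _fts_ @@ 'fer'::tsquery
ORDER BY id
LIMIT 10;

ROLLBACK;  -- SET LOCAL settings disappear at end of transaction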
[ { "msg_contents": "I have a query:\n\n SELECT d1.ID, d2.ID\n FROM DocPrimary d1\n JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n WHERE (d1.ID=234409763) or (d2.ID=234409763)\n\ni think what QO(Query Optimizer) can make it faster (now it seq scan and on\nmillion records works 7 sec)\n\n SELECT d1.ID, d2.ID\n FROM DocPrimary d1\n JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n WHERE (d2.BasedOn=234409763) or (d2.ID=234409763)\n\n\n ----------------------\n Slow Query\n ----------------------\n test=# EXPLAIN (ANALYZE on, VERBOSE on, COSTS on, BUFFERS off )SELECT d1.ID,\n d2.ID\n test-# FROM DocPrimary d1\n test-# JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n test-# WHERE (d1.ID=234409763) or (d2.ID=234409763);\n QUERY PLAN\n ------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=58.15..132.35 rows=2 width=8) (actual time=0.007..0.007\n rows=0 loops=1)\n Output: d1.id, d2.id\n Hash Cond: (d2.basedon = d1.id)\n Join Filter: ((d1.id = 234409763) OR (d2.id = 234409763))\n -> Seq Scan on public.docprimary d2 (cost=0.00..31.40 rows=2140\n width=8) (actual time=0.002..0.002 rows=0 loops=1)\n Output: d2.id, d2.basedon\n -> Hash (cost=31.40..31.40 rows=2140 width=4) (never executed)\n Output: d1.id\n -> Seq Scan on public.docprimary d1 (cost=0.00..31.40 rows=2140\n width=4) (never executed)\n Output: d1.id\n\n ------------------\n Fast Query\n ------------------\n test=# EXPLAIN (ANALYZE on, VERBOSE on, COSTS on, BUFFERS off )SELECT d1.ID,\n d2.ID\n test-# FROM DocPrimary d1\n test-# JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n test-# WHERE (d2.BasedOn=234409763) or (d2.ID=234409763);\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=8.60..58.67 rows=12 width=8) (actual time=0.026..0.026\n rows=0 loops=1)\n Output: d1.id, d2.id\n -> Bitmap Heap Scan on public.docprimary d2 (cost=8.60..19.31 rows=12\n width=8) (actual time=0.023..0.023 rows=0 loops=1)\n Output: d2.id, d2.basedon\n Recheck Cond: ((d2.basedon = 234409763) OR (d2.id = 234409763))\n -> BitmapOr (cost=8.60..8.60 rows=12 width=0) (actual\n time=0.018..0.018 rows=0 loops=1)\n -> Bitmap Index Scan on basedon_idx (cost=0.00..4.33\n rows=11 width=0) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (d2.basedon = 234409763)\n -> Bitmap Index Scan on id_pk (cost=0.00..4.26 rows=1\n width=0) (actual time=0.003..0.003 rows=0 loops=1)\n Index Cond: (d2.id = 234409763)\n -> Index Scan using id_pk on public.docprimary d1 (cost=0.00..3.27\n rows=1 width=4) (never executed)\n Output: d1.id, d1.basedon\n Index Cond: (d1.id = d2.basedon)\n\n\n--------------------------------------------\nPGver: PostgreSQL 9.0b x86\nOS: Win7 x64\n\n---------------------\nCreate table query:\n---------------------\n\nCREATE TABLE docprimary\n(\n id integer NOT NULL,\n basedon integer,\n CONSTRAINT id_pk PRIMARY KEY (id)\n);\nCREATE INDEX basedon_idx\n ON docprimary\n USING btree\n (basedon);\n\n", "msg_date": "Thu, 15 Jul 2010 10:12:27 +0400", "msg_from": "Zotov <[email protected]>", "msg_from_op": true, "msg_subject": "Query optimization problem" }, { "msg_contents": "Hello Zotov,\n\nSomehow the equivalence d2.basedon=d1.id is not used in the slow query, \nprobably because the equivalence constant value would be used inside a \nnot-base expression (the OR). You can see that the equivalence values \n*are* used by changing the or to an and and compare both queries. 
The \nonly thing you can do to guarantee the planner has all information to in \ncases like this it explicitly name the equivalence inside OR \nexpressions, e.g.\n\nSELECT d1.ID, d2.ID\n FROM DocPrimary d1\n JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n WHERE (d1.ID=234409763 and d2.basedon=234409763) or (d2.ID=234409763) ;\n\nregards,\nYeb Havinga\n\nPS: the analyze time of the slow query showed 0.007ms?\n\nZotov wrote:\n> I have a query:\n>\n> SELECT d1.ID, d2.ID\n> FROM DocPrimary d1\n> JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n> WHERE (d1.ID=234409763) or (d2.ID=234409763)\n>\n> i think what QO(Query Optimizer) can make it faster (now it seq scan \n> and on\n> million records works 7 sec)\n>\n> SELECT d1.ID, d2.ID\n> FROM DocPrimary d1\n> JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n> WHERE (d2.BasedOn=234409763) or (d2.ID=234409763)\n>\n>\n> ----------------------\n> Slow Query\n> ----------------------\n> test=# EXPLAIN (ANALYZE on, VERBOSE on, COSTS on, BUFFERS off )SELECT \n> d1.ID,\n> d2.ID\n> test-# FROM DocPrimary d1\n> test-# JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n> test-# WHERE (d1.ID=234409763) or (d2.ID=234409763);\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------ \n>\n> Hash Join (cost=58.15..132.35 rows=2 width=8) (actual \n> time=0.007..0.007\n> rows=0 loops=1)\n> Output: d1.id, d2.id\n> Hash Cond: (d2.basedon = d1.id)\n> Join Filter: ((d1.id = 234409763) OR (d2.id = 234409763))\n> -> Seq Scan on public.docprimary d2 (cost=0.00..31.40 rows=2140\n> width=8) (actual time=0.002..0.002 rows=0 loops=1)\n> Output: d2.id, d2.basedon\n> -> Hash (cost=31.40..31.40 rows=2140 width=4) (never executed)\n> Output: d1.id\n> -> Seq Scan on public.docprimary d1 (cost=0.00..31.40 \n> rows=2140\n> width=4) (never executed)\n> Output: d1.id\n>\n> ------------------\n> Fast Query\n> ------------------\n> test=# EXPLAIN (ANALYZE on, VERBOSE on, COSTS on, BUFFERS off )SELECT \n> d1.ID,\n> d2.ID\n> test-# FROM DocPrimary d1\n> test-# JOIN DocPrimary d2 ON d2.BasedOn=d1.ID\n> test-# WHERE (d2.BasedOn=234409763) or (d2.ID=234409763);\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------- \n>\n> Nested Loop (cost=8.60..58.67 rows=12 width=8) (actual \n> time=0.026..0.026\n> rows=0 loops=1)\n> Output: d1.id, d2.id\n> -> Bitmap Heap Scan on public.docprimary d2 (cost=8.60..19.31 \n> rows=12\n> width=8) (actual time=0.023..0.023 rows=0 loops=1)\n> Output: d2.id, d2.basedon\n> Recheck Cond: ((d2.basedon = 234409763) OR (d2.id = \n> 234409763))\n> -> BitmapOr (cost=8.60..8.60 rows=12 width=0) (actual\n> time=0.018..0.018 rows=0 loops=1)\n> -> Bitmap Index Scan on basedon_idx (cost=0.00..4.33\n> rows=11 width=0) (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: (d2.basedon = 234409763)\n> -> Bitmap Index Scan on id_pk (cost=0.00..4.26 rows=1\n> width=0) (actual time=0.003..0.003 rows=0 loops=1)\n> Index Cond: (d2.id = 234409763)\n> -> Index Scan using id_pk on public.docprimary d1 \n> (cost=0.00..3.27\n> rows=1 width=4) (never executed)\n> Output: d1.id, d1.basedon\n> Index Cond: (d1.id = d2.basedon)\n>\n>\n> --------------------------------------------\n> PGver: PostgreSQL 9.0b x86\n> OS: Win7 x64\n>\n> ---------------------\n> Create table query:\n> ---------------------\n>\n> CREATE TABLE docprimary\n> (\n> id integer NOT NULL,\n> basedon integer,\n> CONSTRAINT id_pk PRIMARY KEY (id)\n> );\n> CREATE INDEX basedon_idx\n> ON 
docprimary\n> USING btree\n> (basedon);\n>\n>\n\n", "msg_date": "Thu, 15 Jul 2010 09:58:50 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization problem" } ]
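For readers comparing the two plans above, here is the workaround Yeb describes written out against the docprimary schema from the first message (the constant is the one used throughout the thread):

-- Original form: the constant is only attached to d1.id inside the OR, so no
-- restriction on d2 can be derived and both relations are scanned sequentially.
EXPLAIN ANALYZE
SELECT d1.id, d2.id
FROM docprimary d1
JOIN docprimary d2 ON d2.basedon = d1.id
WHERE (d1.id = 234409763) OR (d2.id = 234409763);

-- Rewrite suggested in the thread: repeat the equivalent condition on d2 inside
-- the OR branch so id_pk and basedon_idx can be combined with a BitmapOr.
EXPLAIN ANALYZE
SELECT d1.id, d2.id
FROM docprimary d1
JOIN docprimary d2 ON d2.basedon = d1.id
WHERE (d1.id = 234409763 AND d2.basedon = 234409763)
   OR (d2.id = 234409763);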
[ { "msg_contents": "I have two servers with equal specs, one of them running 8.3.7 and the new server running 8.4.4. The only tweak I have made from the default install (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both servers are running 64-bit, but are different releases of Ubuntu. \n\nThis is the query I am running: \n\nSELECT DISTINCT test.tid, testresult.trscore, testresult.trpossiblescore, testresult.trstart, \ntestresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid, testresult.fk_tid, test.tname, \nqr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid \nFROM testresult, test, questionresult qr \nWHERE test.tid = testresult.fk_tid AND qr.fk_trid = testresult.trid \nORDER BY test.tid; \n\nResults when running on the v8.3.7 server.... \nTotal query runtime: 32185 ms. \n700536 rows retrieved. \n\nResults when running on the v8.4.4 server.... \nTotal query runtime: 164227 ms. \n700536 rows retrieved. \n\nResults when running on a different v8.4.4 server with slightly faster hardware and shared_buffers at 1024MB.... \n(this one has a few more rows of data due to this being the server that is currently live, so it has more recent data) \nTotal query runtime: 157931 ms. \n700556 rows retrieved. \n\n\nAnyone have any ideas on where I should start looking to figure this out? I didn't perform any special steps when moving to v8.4, I just did a pg_dump from the 8.3 server and restored it on the new 8.4 servers. Maybe that is where I made a mistake. \n\nThanks! \nPatrick \n\nI have two servers with equal specs, one of them running 8.3.7 and the new server running 8.4.4. The only tweak I have made from the default install (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both servers are running 64-bit, but are different releases of Ubuntu.This is the query I am running:SELECT DISTINCT test.tid, testresult.trscore, testresult.trpossiblescore, testresult.trstart,testresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid, testresult.fk_tid, test.tname,qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qridFROM testresult, test, questionresult qrWHERE test.tid = testresult.fk_tid AND qr.fk_trid = testresult.tridORDER BY test.tid;Results when running on the v8.3.7 server....Total query runtime: 32185 ms.700536 rows retrieved.Results when running on the v8.4.4 server....Total query runtime: 164227 ms.700536 rows retrieved.Results when running on a different v8.4.4 server with slightly faster hardware and shared_buffers at 1024MB....(this one has a few more rows of data due to this being the server that is currently live, so it has more recent data)Total query runtime: 157931 ms.700556 rows retrieved.Anyone have any ideas on where I should start looking to figure this out? I didn't perform any special steps when moving to v8.4, I just did a pg_dump from the 8.3 server and restored it on the new 8.4 servers. Maybe that is where I made a mistake.Thanks!Patrick", "msg_date": "Thu, 15 Jul 2010 10:41:22 -0400 (EDT)", "msg_from": "Patrick Donlin <[email protected]>", "msg_from_op": true, "msg_subject": "Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "On 15 July 2010 15:41, Patrick Donlin <[email protected]> wrote:\n> I have two servers with equal specs, one of them running 8.3.7 and the new\n> server running 8.4.4. The only tweak I have made from the default install\n> (from Ubuntu repositories) is increasing shared_buffers to 768MB. 
Both\n> servers are running 64-bit, but are different releases of Ubuntu.\n>\n> This is the query I am running:\n>\n> SELECT DISTINCT test.tid, testresult.trscore, testresult.trpossiblescore,\n> testresult.trstart,\n> testresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid,\n> testresult.fk_tid, test.tname,\n> qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid\n> FROM testresult, test, questionresult qr\n> WHERE test.tid = testresult.fk_tid AND qr.fk_trid = testresult.trid\n> ORDER BY test.tid;\n>\n> Results when running on the v8.3.7 server....\n> Total query runtime: 32185 ms.\n> 700536 rows retrieved.\n>\n> Results when running on the v8.4.4 server....\n> Total query runtime: 164227 ms.\n> 700536 rows retrieved.\n>\n> Results when running on a different v8.4.4 server with slightly faster\n> hardware and shared_buffers at 1024MB....\n> (this one has a few more rows of data due to this being the server that is\n> currently live, so it has more recent data)\n> Total query runtime: 157931 ms.\n> 700556 rows retrieved.\n>\n>\n> Anyone have any ideas on where I should start looking to figure this out? I\n> didn't perform any special steps when moving to v8.4, I just did a pg_dump\n> from the 8.3 server and restored it on the new 8.4 servers. Maybe that is\n> where I made a mistake.\n>\n> Thanks!\n> Patrick\n>\n\nFirst thing to check is did you do a VACUUM ANALYZE on the database?\n\nThom\n", "msg_date": "Thu, 15 Jul 2010 15:50:29 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "Patrick Donlin <[email protected]> wrote:\n \n> Anyone have any ideas on where I should start looking to figure\n> this out?\n \nYou're going to want to run EXPLAIN ANALYZE for the slow query on\nboth servers. If you want the rest of us to be able to contribute\nideas, we'll need a little more information -- please read this\npage:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> I didn't perform any special steps when moving to v8.4, I just did\n> a pg_dump from the 8.3 server and restored it on the new 8.4\n> servers.\n \nA database VACUUM ANALYZE by a superuser is a good idea; otherwise\nthat's fine technique.\n \n-Kevin\n", "msg_date": "Thu, 15 Jul 2010 09:55:19 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE output assuming I did it correctly. I have run vacuumdb --full --analyze, it actually runs as a nightly cron job. 
\n\n8.4.4 Sever: \n\"Unique (cost=202950.82..227521.59 rows=702022 width=86) (actual time=21273.371..22429.511 rows=700536 loops=1)\" \n\" -> Sort (cost=202950.82..204705.87 rows=702022 width=86) (actual time=21273.368..22015.948 rows=700536 loops=1)\" \n\" Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore, testresult.trstart, testresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid\" \n\" Sort Method: external merge Disk: 71768kB\" \n\" -> Hash Join (cost=2300.82..34001.42 rows=702022 width=86) (actual time=64.388..1177.468 rows=700536 loops=1)\" \n\" Hash Cond: (qr.fk_trid = testresult.trid)\" \n\" -> Seq Scan on questionresult qr (cost=0.00..12182.22 rows=702022 width=16) (actual time=0.090..275.518 rows=702022 loops=1)\" \n\" -> Hash (cost=1552.97..1552.97 rows=29668 width=74) (actual time=63.042..63.042 rows=29515 loops=1)\" \n\" -> Hash Join (cost=3.35..1552.97 rows=29668 width=74) (actual time=0.227..39.111 rows=29515 loops=1)\" \n\" Hash Cond: (testresult.fk_tid = test.tid)\" \n\" -> Seq Scan on testresult (cost=0.00..1141.68 rows=29668 width=53) (actual time=0.019..15.622 rows=29668 loops=1)\" \n\" -> Hash (cost=2.60..2.60 rows=60 width=21) (actual time=0.088..0.088 rows=60 loops=1)\" \n\" -> Seq Scan on test (cost=0.00..2.60 rows=60 width=21) (actual time=0.015..0.044 rows=60 loops=1)\" \n\"Total runtime: 22528.820 ms\" \n\n8.3.7 Server: \n\"Unique (cost=202950.82..227521.59 rows=702022 width=86) (actual time=22157.714..23343.461 rows=700536 loops=1)\" \n\" -> Sort (cost=202950.82..204705.87 rows=702022 width=86) (actual time=22157.706..22942.018 rows=700536 loops=1)\" \n\" Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore, testresult.trstart, testresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid\" \n\" Sort Method: external merge Disk: 75864kB\" \n\" -> Hash Join (cost=2300.82..34001.42 rows=702022 width=86) (actual time=72.842..1276.634 rows=700536 loops=1)\" \n\" Hash Cond: (qr.fk_trid = testresult.trid)\" \n\" -> Seq Scan on questionresult qr (cost=0.00..12182.22 rows=702022 width=16) (actual time=0.112..229.987 rows=702022 loops=1)\" \n\" -> Hash (cost=1552.97..1552.97 rows=29668 width=74) (actual time=71.421..71.421 rows=29515 loops=1)\" \n\" -> Hash Join (cost=3.35..1552.97 rows=29668 width=74) (actual time=0.398..44.524 rows=29515 loops=1)\" \n\" Hash Cond: (testresult.fk_tid = test.tid)\" \n\" -> Seq Scan on testresult (cost=0.00..1141.68 rows=29668 width=53) (actual time=0.117..20.890 rows=29668 loops=1)\" \n\" -> Hash (cost=2.60..2.60 rows=60 width=21) (actual time=0.112..0.112 rows=60 loops=1)\" \n\" -> Seq Scan on test (cost=0.00..2.60 rows=60 width=21) (actual time=0.035..0.069 rows=60 loops=1)\" \n\"Total runtime: 23462.639 ms\" \n\n\nThanks for the quick responses and being patient with me not providing enough information. \n-Patrick \n\n----- Original Message ----- \nFrom: \"Kevin Grittner\" <[email protected]> \nTo: \"Patrick Donlin\" <[email protected]>, [email protected] \nSent: Thursday, July 15, 2010 10:55:19 AM GMT -05:00 US/Canada Eastern \nSubject: Re: [PERFORM] Identical query slower on 8.4 vs 8.3 \n\nPatrick Donlin <[email protected]> wrote: \n\n> Anyone have any ideas on where I should start looking to figure \n> this out? \n\nYou're going to want to run EXPLAIN ANALYZE for the slow query on \nboth servers. 
If you want the rest of us to be able to contribute \nideas, we'll need a little more information -- please read this \npage: \n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions \n\n> I didn't perform any special steps when moving to v8.4, I just did \n> a pg_dump from the 8.3 server and restored it on the new 8.4 \n> servers. \n\nA database VACUUM ANALYZE by a superuser is a good idea; otherwise \nthat's fine technique. \n\n-Kevin \n\nI'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE \noutput assuming I did it correctly. I have run vacuumdb --full \n--analyze,  it actually runs as a nightly cron job.\n\n8.4.4 Sever:\n\"Unique  (cost=202950.82..227521.59 rows=702022 width=86) (actual \ntime=21273.371..22429.511 rows=700536 loops=1)\"\n\"  ->  Sort  (cost=202950.82..204705.87 rows=702022 width=86) (actual\n time=21273.368..22015.948 rows=700536 loops=1)\"\n\"        Sort Key: test.tid, testresult.trscore, \ntestresult.trpossiblescore, testresult.trstart, testresult.trfinish, \ntestresult.trscorebreakdown, testresult.fk_sid, test.tname, qr.qrscore, \nqr.qrtotalscore, testresult.trid, qr.qrid\"\n\"        Sort Method:  external merge  Disk: 71768kB\"\n\"        ->  Hash Join  (cost=2300.82..34001.42 rows=702022 width=86)\n (actual time=64.388..1177.468 rows=700536 loops=1)\"\n\"              Hash Cond: (qr.fk_trid = testresult.trid)\"\n\"              ->  Seq Scan on questionresult qr  \n(cost=0.00..12182.22 rows=702022 width=16) (actual time=0.090..275.518 \nrows=702022 loops=1)\"\n\"              ->  Hash  (cost=1552.97..1552.97 rows=29668 width=74) \n(actual time=63.042..63.042 rows=29515 loops=1)\"\n\"                    ->  Hash Join  (cost=3.35..1552.97 rows=29668 \nwidth=74) (actual time=0.227..39.111 rows=29515 loops=1)\"\n\"                          Hash Cond: (testresult.fk_tid = test.tid)\"\n\"                          ->  Seq Scan on testresult  \n(cost=0.00..1141.68 rows=29668 width=53) (actual time=0.019..15.622 \nrows=29668 loops=1)\"\n\"                          ->  Hash  (cost=2.60..2.60 rows=60 \nwidth=21) (actual time=0.088..0.088 rows=60 loops=1)\"\n\"                                ->  Seq Scan on test  \n(cost=0.00..2.60 rows=60 width=21) (actual time=0.015..0.044 rows=60 \nloops=1)\"\n\"Total runtime: 22528.820 ms\"\n\n8.3.7 Server:\n\"Unique  (cost=202950.82..227521.59 rows=702022 width=86) (actual \ntime=22157.714..23343.461 rows=700536 loops=1)\"\n\"  ->  Sort  (cost=202950.82..204705.87 rows=702022 width=86) (actual\n time=22157.706..22942.018 rows=700536 loops=1)\"\n\"        Sort Key: test.tid, testresult.trscore, \ntestresult.trpossiblescore, testresult.trstart, testresult.trfinish, \ntestresult.trscorebreakdown, testresult.fk_sid, test.tname, qr.qrscore, \nqr.qrtotalscore, testresult.trid, qr.qrid\"\n\"        Sort Method:  external merge  Disk: 75864kB\"\n\"        ->  Hash Join  (cost=2300.82..34001.42 rows=702022 width=86)\n (actual time=72.842..1276.634 rows=700536 loops=1)\"\n\"              Hash Cond: (qr.fk_trid = testresult.trid)\"\n\"              ->  Seq Scan on questionresult qr  \n(cost=0.00..12182.22 rows=702022 width=16) (actual time=0.112..229.987 \nrows=702022 loops=1)\"\n\"              ->  Hash  (cost=1552.97..1552.97 rows=29668 width=74) \n(actual time=71.421..71.421 rows=29515 loops=1)\"\n\"                    ->  Hash Join  (cost=3.35..1552.97 rows=29668 \nwidth=74) (actual time=0.398..44.524 rows=29515 loops=1)\"\n\"                          Hash Cond: (testresult.fk_tid = test.tid)\"\n\"                          -> 
 Seq Scan on testresult  \n(cost=0.00..1141.68 rows=29668 width=53) (actual time=0.117..20.890 \nrows=29668 loops=1)\"\n\"                          ->  Hash  (cost=2.60..2.60 rows=60 \nwidth=21) (actual time=0.112..0.112 rows=60 loops=1)\"\n\"                                ->  Seq Scan on test  \n(cost=0.00..2.60 rows=60 width=21) (actual time=0.035..0.069 rows=60 \nloops=1)\"\n\"Total runtime: 23462.639 ms\"\n\n\nThanks for the quick responses and being patient with me not providing \nenough information.\n-Patrick----- Original Message -----From: \"Kevin Grittner\" <[email protected]>To: \"Patrick Donlin\" <[email protected]>, [email protected]: Thursday, July 15, 2010 10:55:19 AM GMT -05:00 US/Canada EasternSubject: Re: [PERFORM] Identical query slower on 8.4 vs 8.3Patrick Donlin <[email protected]> wrote: > Anyone have any ideas on where I should start looking to figure> this out? You're going to want to run EXPLAIN ANALYZE for the slow query onboth servers.  If you want the rest of us to be able to contributeideas, we'll need a little more information -- please read thispage: http://wiki.postgresql.org/wiki/SlowQueryQuestions > I didn't perform any special steps when moving to v8.4, I just did> a pg_dump from the 8.3 server and restored it on the new 8.4> servers. A database VACUUM ANALYZE by a superuser is a good idea; otherwisethat's fine technique. -Kevin", "msg_date": "Thu, 15 Jul 2010 11:12:53 -0400 (EDT)", "msg_from": "Patrick Donlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "Excerpts from Patrick Donlin's message of jue jul 15 11:12:53 -0400 2010:\n> I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE output assuming I did it correctly. I have run vacuumdb --full --analyze, it actually runs as a nightly cron job. \n\nThese plans seem identical (though the fact that the leading whitespace\nwas trimmed means it's untrustworthy -- please in the future send them\nas text attachments instead so that your mailer doesn't interfere with\nformatting). The 8.4 plan is even a full second faster, according to\nthe \"total runtime\" line.\n\nThe slowness could've been caused by caching effects ...\n", "msg_date": "Thu, 15 Jul 2010 11:21:33 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "FULL is usually bad. Stick to \"vacuum analyze\" and drop the full.\n\nDo you have indexes on:\n\ntest.tid, testresult.fk_tid, questionresult.fk_trid and testresult.trid\n\n\n-Andy\n\n\n\nOn 7/15/2010 10:12 AM, Patrick Donlin wrote:\n> I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE\n> output assuming I did it correctly. 
I have run vacuumdb --full\n> --analyze, it actually runs as a nightly cron job.\n>\n> 8.4.4 Sever:\n> \"Unique (cost=202950.82..227521.59 rows=702022 width=86) (actual\n> time=21273.371..22429.511 rows=700536 loops=1)\"\n> \" -> Sort (cost=202950.82..204705.87 rows=702022 width=86) (actual\n> time=21273.368..22015.948 rows=700536 loops=1)\"\n> \" Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore,\n> testresult.trstart, testresult.trfinish, testresult.trscorebreakdown,\n> testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore,\n> testresult.trid, qr.qrid\"\n> \" Sort Method: external merge Disk: 71768kB\"\n> \" -> Hash Join (cost=2300.82..34001.42 rows=702022 width=86) (actual\n> time=64.388..1177.468 rows=700536 loops=1)\"\n> \" Hash Cond: (qr.fk_trid = testresult.trid)\"\n> \" -> Seq Scan on questionresult qr (cost=0.00..12182.22 rows=702022\n> width=16) (actual time=0.090..275.518 rows=702022 loops=1)\"\n> \" -> Hash (cost=1552.97..1552.97 rows=29668 width=74) (actual\n> time=63.042..63.042 rows=29515 loops=1)\"\n> \" -> Hash Join (cost=3.35..1552.97 rows=29668 width=74) (actual\n> time=0.227..39.111 rows=29515 loops=1)\"\n> \" Hash Cond: (testresult.fk_tid = test.tid)\"\n> \" -> Seq Scan on testresult (cost=0.00..1141.68 rows=29668 width=53)\n> (actual time=0.019..15.622 rows=29668 loops=1)\"\n> \" -> Hash (cost=2.60..2.60 rows=60 width=21) (actual time=0.088..0.088\n> rows=60 loops=1)\"\n> \" -> Seq Scan on test (cost=0.00..2.60 rows=60 width=21) (actual\n> time=0.015..0.044 rows=60 loops=1)\"\n> \"Total runtime: 22528.820 ms\"\n>\n> 8.3.7 Server:\n> \"Unique (cost=202950.82..227521.59 rows=702022 width=86) (actual\n> time=22157.714..23343.461 rows=700536 loops=1)\"\n> \" -> Sort (cost=202950.82..204705.87 rows=702022 width=86) (actual\n> time=22157.706..22942.018 rows=700536 loops=1)\"\n> \" Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore,\n> testresult.trstart, testresult.trfinish, testresult.trscorebreakdown,\n> testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore,\n> testresult.trid, qr.qrid\"\n> \" Sort Method: external merge Disk: 75864kB\"\n> \" -> Hash Join (cost=2300.82..34001.42 rows=702022 width=86) (actual\n> time=72.842..1276.634 rows=700536 loops=1)\"\n> \" Hash Cond: (qr.fk_trid = testresult.trid)\"\n> \" -> Seq Scan on questionresult qr (cost=0.00..12182.22 rows=702022\n> width=16) (actual time=0.112..229.987 rows=702022 loops=1)\"\n> \" -> Hash (cost=1552.97..1552.97 rows=29668 width=74) (actual\n> time=71.421..71.421 rows=29515 loops=1)\"\n> \" -> Hash Join (cost=3.35..1552.97 rows=29668 width=74) (actual\n> time=0.398..44.524 rows=29515 loops=1)\"\n> \" Hash Cond: (testresult.fk_tid = test.tid)\"\n> \" -> Seq Scan on testresult (cost=0.00..1141.68 rows=29668 width=53)\n> (actual time=0.117..20.890 rows=29668 loops=1)\"\n> \" -> Hash (cost=2.60..2.60 rows=60 width=21) (actual time=0.112..0.112\n> rows=60 loops=1)\"\n> \" -> Seq Scan on test (cost=0.00..2.60 rows=60 width=21) (actual\n> time=0.035..0.069 rows=60 loops=1)\"\n> \"Total runtime: 23462.639 ms\"\n>\n>\n> Thanks for the quick responses and being patient with me not providing\n> enough information.\n> -Patrick\n>\n> ----- Original Message -----\n> From: \"Kevin Grittner\" <[email protected]>\n> To: \"Patrick Donlin\" <[email protected]>, [email protected]\n> Sent: Thursday, July 15, 2010 10:55:19 AM GMT -05:00 US/Canada Eastern\n> Subject: Re: [PERFORM] Identical query slower on 8.4 vs 8.3\n>\n> Patrick Donlin <[email protected]> wrote:\n>\n> > Anyone have any 
ideas on where I should start looking to figure\n> > this out?\n>\n> You're going to want to run EXPLAIN ANALYZE for the slow query on\n> both servers. If you want the rest of us to be able to contribute\n> ideas, we'll need a little more information -- please read this\n> page:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> > I didn't perform any special steps when moving to v8.4, I just did\n> > a pg_dump from the 8.3 server and restored it on the new 8.4\n> > servers.\n>\n> A database VACUUM ANALYZE by a superuser is a good idea; otherwise\n> that's fine technique.\n>\n> -Kevin\n\n", "msg_date": "Thu, 15 Jul 2010 10:31:29 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "On Thu, Jul 15, 2010 at 9:41 AM, Patrick Donlin <[email protected]> wrote:\n> I have two servers with equal specs, one of them running 8.3.7 and the new\n> server running 8.4.4. The only tweak I have made from the default install\n> (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both\n> servers are running 64-bit, but are different releases of Ubuntu.\n\n^^^ Right there. *different releases*. I've seen fairly significant\ndifferences in identical hardware with even minor O/S point releases.\n\nAfter you run a full vacuum and then reindex and then vacuum analyze\n(probably not entirely necessary) if there is still a difference I'd\npoint at the O/S.\n\n\n\n\n-- \nJon\n", "msg_date": "Thu, 15 Jul 2010 10:34:56 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "Patrick Donlin <[email protected]> wrote: \n \n> I have run vacuumdb --full --analyze, it \n> actually runs as a nightly cron job.\n \nThat's usually not wise -- VACUUM FULL can cause index bloat, and is\nnot normally necessary. If you have autovacuum turned on and run a\ndatabase vacuum each night, you can probably avoid ever running\nVACUUM FULL. A long-running transaction or mass deletes might still\nmake aggressive cleanup necessary on occasion, but you should\nconsider using CLUSTER instead of VACUUM FULL. So, you should\nprobably change your crontab job to vacuum --all --analyze.\n \nAlso, after a bulk load of a database like this you might consider a\none-time VACUUM FREEZE of the database. Without that there will\ncome a time when autovacuum will need to rewrite all rows from the\nbulk load which haven't subsequently been modified, in order to\nprevent transaction ID wraparound problems.\n \n-Kevin\n", "msg_date": "Thu, 15 Jul 2010 10:53:20 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "On Thu, Jul 15, 2010 at 11:12 AM, Patrick Donlin <[email protected]> wrote:\n> I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE\n> output assuming I did it correctly. 
I have run vacuumdb --full --analyze,\n> it actually runs as a nightly cron job.\n>\n> 8.4.4 Sever:\n> \"Unique  (cost=202950.82..227521.59 rows=702022 width=86) (actual\n> time=21273.371..22429.511 rows=700536 loops=1)\"\n> \"  ->  Sort  (cost=202950.82..204705.87 rows=702022 width=86) (actual\n> time=21273.368..22015.948 rows=700536 loops=1)\"\n> \"        Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore,\n> testresult.trstart, testresult.trfinish, testresult.trscorebreakdown,\n> testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, testresult.trid,\n> qr.qrid\"\n> \"        Sort Method:  external merge  Disk: 71768kB\"\n> \"        ->  Hash Join  (cost=2300.82..34001.42 rows=702022 width=86)\n> (actual time=64.388..1177.468 rows=700536 loops=1)\"\n> \"              Hash Cond: (qr.fk_trid = testresult.trid)\"\n> \"              ->  Seq Scan on questionresult qr  (cost=0.00..12182.22\n> rows=702022 width=16) (actual time=0.090..275.518 rows=702022 loops=1)\"\n> \"              ->  Hash  (cost=1552.97..1552.97 rows=29668 width=74) (actual\n> time=63.042..63.042 rows=29515 loops=1)\"\n> \"                    ->  Hash Join  (cost=3.35..1552.97 rows=29668 width=74)\n> (actual time=0.227..39.111 rows=29515 loops=1)\"\n> \"                          Hash Cond: (testresult.fk_tid = test.tid)\"\n> \"                          ->  Seq Scan on testresult  (cost=0.00..1141.68\n> rows=29668 width=53) (actual time=0.019..15.622 rows=29668 loops=1)\"\n> \"                          ->  Hash  (cost=2.60..2.60 rows=60 width=21)\n> (actual time=0.088..0.088 rows=60 loops=1)\"\n> \"                                ->  Seq Scan on test  (cost=0.00..2.60\n> rows=60 width=21) (actual time=0.015..0.044 rows=60 loops=1)\"\n> \"Total runtime: 22528.820 ms\"\n>\n> 8.3.7 Server:\n> \"Unique  (cost=202950.82..227521.59 rows=702022 width=86) (actual\n> time=22157.714..23343.461 rows=700536 loops=1)\"\n> \"  ->  Sort  (cost=202950.82..204705.87 rows=702022 width=86) (actual\n> time=22157.706..22942.018 rows=700536 loops=1)\"\n> \"        Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore,\n> testresult.trstart, testresult.trfinish, testresult.trscorebreakdown,\n> testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, testresult.trid,\n> qr.qrid\"\n> \"        Sort Method:  external merge  Disk: 75864kB\"\n> \"        ->  Hash Join  (cost=2300.82..34001.42 rows=702022 width=86)\n> (actual time=72.842..1276.634 rows=700536 loops=1)\"\n> \"              Hash Cond: (qr.fk_trid = testresult.trid)\"\n> \"              ->  Seq Scan on questionresult qr  (cost=0.00..12182.22\n> rows=702022 width=16) (actual time=0.112..229.987 rows=702022 loops=1)\"\n> \"              ->  Hash  (cost=1552.97..1552.97 rows=29668 width=74) (actual\n> time=71.421..71.421 rows=29515 loops=1)\"\n> \"                    ->  Hash Join  (cost=3.35..1552.97 rows=29668 width=74)\n> (actual time=0.398..44.524 rows=29515 loops=1)\"\n> \"                          Hash Cond: (testresult.fk_tid = test.tid)\"\n> \"                          ->  Seq Scan on testresult  (cost=0.00..1141.68\n> rows=29668 width=53) (actual time=0.117..20.890 rows=29668 loops=1)\"\n> \"                          ->  Hash  (cost=2.60..2.60 rows=60 width=21)\n> (actual time=0.112..0.112 rows=60 loops=1)\"\n> \"                                ->  Seq Scan on test  (cost=0.00..2.60\n> rows=60 width=21) (actual time=0.035..0.069 rows=60 loops=1)\"\n> \"Total runtime: 23462.639 ms\"\n\nyour plans are identical as is the runtime basically. 
this means you\nmight want to consider the following possibilities:\n*) operator error :-)\n*) cache effects\n*) environmental factors on the server at the time\n*) network/client issues\n\nI say network issues because if your explain analyze (which actually\ndoes run the entire query) is significantly faster than the full\nquery, then we have to consider that the formatting and transfer of\nthe data back to the client (even if it's on the same box) becomes\nsuspicious. If you've eliminated other possibilities, try running\nother big, trivially planned, mucho result returning queries (like\nselect * from table) on both servers and comparing times.\n\nmerlin\n", "msg_date": "Thu, 15 Jul 2010 12:04:13 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "On Thu, 2010-07-15 at 10:41 -0400, Patrick Donlin wrote:\n\n> Results when running on the v8.3.7 server....\n> Total query runtime: 32185 ms.\n> 700536 rows retrieved.\n> \n> Results when running on the v8.4.4 server....\n> Total query runtime: 164227 ms.\n> 700536 rows retrieved.\n> \n\n> \n> Anyone have any ideas on where I should start looking to figure this\n> out? I didn't perform any special steps when moving to v8.4, I just\n> did a pg_dump from the 8.3 server and restored it on the new 8.4\n> servers. Maybe that is where I made a mistake.\n\nThree immediate things come to mind:\n\n1. One had relations in file or shared buffer cache, the other didn't\n2. One is running ext4 versus ext3 and when you end up spilling to disk\nwhen you over run work_mem, the ext4 machine is faster, but without\nknowing which machine is which it is a bit tough to diagnose.\n3. You didn't run ANALYZE on one of the machines\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> Thanks!\n> Patrick\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\n\n", "msg_date": "Thu, 15 Jul 2010 10:17:21 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "Thanks everyone for the input so far, Merlin's comment about the network gave me one of those duh moments since I have been running these queries remotely using pgadmin. I will experiment with this more tomorrow/Monday along with the other suggestions that have been posted to hopefully narrow it down. Running the query from my webserver yielded much better times, but from a quick look it seems my 8.4 server is still a bit slower. I will share more details as I dig into it more tomorrow or Monday. \n\n-Patrick \n\n----- Original Message ----- \nFrom: \"Merlin Moncure\" <[email protected]> \nTo: \"Patrick Donlin\" <[email protected]> \nCc: \"Kevin Grittner\" <[email protected]>, [email protected] \nSent: Thursday, July 15, 2010 12:04:13 PM GMT -05:00 US/Canada Eastern \nSubject: Re: [PERFORM] Identical query slower on 8.4 vs 8.3 \n\nyour plans are identical as is the runtime basically. 
this means you \nmight want to consider the following possibilities: \n*) operator error :-) \n*) cache effects \n*) environmental factors on the server at the time \n*) network/client issues \n\nI say network issues because if your explain analyze (which actually \ndoes run the entire query) is significantly faster than the full \nquery, then we have to consider that the formatting and transfer of \nthe data back to the client (even if it's on the same box) becomes \nsuspicious. If you've eliminated other possibilities, try running \nother big, trivially planned, mucho result returning queries (like \nselect * from table) on both servers and comparing times. \n\nmerlin \n\nThanks everyone for the input so far, Merlin's comment about the network gave me one of those duh moments since I have been running these queries remotely using pgadmin. I will experiment with this more tomorrow/Monday along with the other suggestions that have been posted to hopefully narrow it down. Running the query from my webserver yielded much better times, but from a quick look it seems my 8.4 server is still a bit slower. I will share more details as I dig into it more tomorrow or Monday.-Patrick----- Original Message -----From: \"Merlin Moncure\" <[email protected]>To: \"Patrick Donlin\" <[email protected]>Cc: \"Kevin Grittner\" <[email protected]>, [email protected]: Thursday, July 15, 2010 12:04:13 PM GMT -05:00 US/Canada EasternSubject: Re: [PERFORM] Identical query slower on 8.4 vs 8.3your plans are identical as is the runtime basically.  this means youmight want to consider the following possibilities:*) operator error :-)*) cache effects*) environmental factors on the server at the time*) network/client issuesI say network issues because if your explain analyze (which actuallydoes run the entire query) is significantly faster than the fullquery, then we have to consider that the formatting and transfer ofthe data back to the client (even if it's on the same box) becomessuspicious.  If you've eliminated other possibilities, try runningother big, trivially planned, mucho result returning queries (likeselect * from table) on both servers and comparing times.merlin", "msg_date": "Thu, 15 Jul 2010 14:48:04 -0400 (EDT)", "msg_from": "Patrick Donlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Patrick Donlin [mailto:[email protected]] \n> Sent: Thursday, July 15, 2010 11:13 AM\n> To: Kevin Grittner; [email protected]\n> Subject: Re: Identical query slower on 8.4 vs 8.3\n> \n> I'll read over that wiki entry, but for now here is the \n> EXPLAIN ANALYZE output assuming I did it correctly. 
I have \n> run vacuumdb --full --analyze, it actually runs as a nightly \n> cron job.\n> \n> 8.4.4 Sever:\n> \"Unique (cost=202950.82..227521.59 rows=702022 width=86) \n> (actual time=21273.371..22429.511 rows=700536 loops=1)\"\n> \" -> Sort (cost=202950.82..204705.87 rows=702022 width=86) \n> (actual time=21273.368..22015.948 rows=700536 loops=1)\"\n> \" Sort Key: test.tid, testresult.trscore, \n> testresult.trpossiblescore, testresult.trstart, \n> testresult.trfinish, testresult.trscorebreakdown, \n> testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, \n> testresult.trid, qr.qrid\"\n> \" Sort Method: external merge Disk: 71768kB\"\n> \" -> Hash Join (cost=2300.82..34001.42 rows=702022 \n> width=86) (actual time=64.388..1177.468 rows=700536 loops=1)\"\n> \" Hash Cond: (qr.fk_trid = testresult.trid)\"\n> \" -> Seq Scan on questionresult qr \n> (cost=0.00..12182.22 rows=702022 width=16) (actual \n> time=0.090..275.518 rows=702022 loops=1)\"\n> \" -> Hash (cost=1552.97..1552.97 rows=29668 \n> width=74) (actual time=63.042..63.042 rows=29515 loops=1)\"\n> \" -> Hash Join (cost=3.35..1552.97 \n> rows=29668 width=74) (actual time=0.227..39.111 rows=29515 loops=1)\"\n> \" Hash Cond: (testresult.fk_tid = test.tid)\"\n> \" -> Seq Scan on testresult \n> (cost=0.00..1141.68 rows=29668 width=53) (actual \n> time=0.019..15.622 rows=29668 loops=1)\"\n> \" -> Hash (cost=2.60..2.60 rows=60 \n> width=21) (actual time=0.088..0.088 rows=60 loops=1)\"\n> \" -> Seq Scan on test \n> (cost=0.00..2.60 rows=60 width=21) (actual time=0.015..0.044 \n> rows=60 loops=1)\"\n> \"Total runtime: 22528.820 ms\"\n> \n> 8.3.7 Server:\n> \"Unique (cost=202950.82..227521.59 rows=702022 width=86) \n> (actual time=22157.714..23343.461 rows=700536 loops=1)\"\n> \" -> Sort (cost=202950.82..204705.87 rows=702022 width=86) \n> (actual time=22157.706..22942.018 rows=700536 loops=1)\"\n> \" Sort Key: test.tid, testresult.trscore, \n> testresult.trpossiblescore, testresult.trstart, \n> testresult.trfinish, testresult.trscorebreakdown, \n> testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, \n> testresult.trid, qr.qrid\"\n> \" Sort Method: external merge Disk: 75864kB\"\n> \" -> Hash Join (cost=2300.82..34001.42 rows=702022 \n> width=86) (actual time=72.842..1276.634 rows=700536 loops=1)\"\n> \" Hash Cond: (qr.fk_trid = testresult.trid)\"\n> \" -> Seq Scan on questionresult qr \n> (cost=0.00..12182.22 rows=702022 width=16) (actual \n> time=0.112..229.987 rows=702022 loops=1)\"\n> \" -> Hash (cost=1552.97..1552.97 rows=29668 \n> width=74) (actual time=71.421..71.421 rows=29515 loops=1)\"\n> \" -> Hash Join (cost=3.35..1552.97 \n> rows=29668 width=74) (actual time=0.398..44.524 rows=29515 loops=1)\"\n> \" Hash Cond: (testresult.fk_tid = test.tid)\"\n> \" -> Seq Scan on testresult \n> (cost=0.00..1141.68 rows=29668 width=53) (actual \n> time=0.117..20.890 rows=29668 loops=1)\"\n> \" -> Hash (cost=2.60..2.60 rows=60 \n> width=21) (actual time=0.112..0.112 rows=60 loops=1)\"\n> \" -> Seq Scan on test \n> (cost=0.00..2.60 rows=60 width=21) (actual time=0.035..0.069 \n> rows=60 loops=1)\"\n> \"Total runtime: 23462.639 ms\"\n> \n> \n> Thanks for the quick responses and being patient with me not \n> providing enough information.\n> -Patrick\n> \n\nWell, now that you've got similar runtime on both 8.4.4 and 8.3.7, here\nis a suggestion to improve performance of this query based on EXPLAIN\nANALYZE you proveded (should have done it in your first e-mail).\n\nEXPLAIN ANALYZE shows that most of the time (22015 ms on 8.4.4) spent 
on\nsorting you result set.\nAnd according to this: \"Sort Method: external merge Disk: 71768kB\" -\nsorting is done using disk, meaning your work_mem setting is not\nsufficient to do this sort in memory (I didn't go back through this\nthread far enough, to see if you provided info on how it is set).\n\nI'd suggest to increase the value up to ~80MB, if not for the system,\nmay be just for the session running this query.\nThen see if performance improved.\n\nAnd, with query performance issues always start with EXPLAIN ANALYZE.\n\nRegards,\nIgor Neyman \n", "msg_date": "Fri, 16 Jul 2010 10:10:40 -0400", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" }, { "msg_contents": "\n> I'd suggest to increase the value up to ~80MB, if not for the system,\n> may be just for the session running this query.\n> Then see if performance improved.\n\nDon't forget you can do this for the given query without affecting the\nother queries - just do something like\n\nSET work_mem = 128M\n\nand then run the query - it should work fine. This is great for testing\nand to set environment for special users (batch processes etc.).\n\nregards\nTomas\n\n", "msg_date": "Fri, 16 Jul 2010 16:17:24 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Identical query slower on 8.4 vs 8.3" } ]
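A sketch of the per-session work_mem test suggested in the last two replies, using the query from the start of the thread. The 128MB figure is only an example sized against the ~72-76MB on-disk sorts shown in the plans (an in-memory sort can need noticeably more than the on-disk figure), and note that the unit form has to be quoted, i.e. '128MB' rather than 128M:

SET work_mem = '128MB';   -- affects only this session

EXPLAIN ANALYZE
SELECT DISTINCT test.tid, testresult.trscore, testresult.trpossiblescore,
       testresult.trstart, testresult.trfinish, testresult.trscorebreakdown,
       testresult.fk_sid, testresult.fk_tid, test.tname,
       qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid
FROM testresult, test, questionresult qr
WHERE test.tid = testresult.fk_tid
  AND qr.fk_trid = testresult.trid
ORDER BY test.tid;

RESET work_mem;           -- back to the configured default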
[ { "msg_contents": "First of all, a little background.\n\nWe have a table which is used as a trigger table for entering and\nprocessing data for a network monitoring system.\n\nEssentially, we insert a set of columns into a table, and each row fires\na trigger function which calls a very large stored procedure which\naggregates data, etc. At that point, the row is deleted from the temp\ntable.\n\nCurrently, records are transferred from the data collector as a series\nof multi-row inserts.\n\nBefore going through the exercise of recoding, and given the fact that\neach of this inserts fires of a trigger, will I see any noticeable\nperformance?\n\n \n\nThe table definition follows:\n\n \n\nCREATE TABLE tbltmptests\n\n(\n\n tmptestsysid bigserial NOT NULL,\n\n testhash character varying(32),\n\n testtime timestamp with time zone,\n\n statusid integer,\n\n replytxt text,\n\n replyval real,\n\n groupid integer,\n\n CONSTRAINT tbltmptests_pkey PRIMARY KEY (tmptestsysid)\n\n)\n\nWITH (\n\n OIDS=FALSE\n\n);\n\nALTER TABLE tbltmptests OWNER TO postgres;\n\n \n\n-- Trigger: tbltmptests_tr on tbltmptests\n\n \n\n-- DROP TRIGGER tbltmptests_tr ON tbltmptests;\n\n \n\nCREATE TRIGGER tbltmptests_tr\n\n AFTER INSERT\n\n ON tbltmptests\n\n FOR EACH ROW\n\n EXECUTE PROCEDURE fn_testtrigger();\n\n \n\n \n\nAnother question - is there anything special we need to do to handle the\nprimary constraint field?\n\n \n\nNow, on a related note and looking forward to the streaming replication\nof v9, will this work with it, since we have multiple tables being\nupdate by a trigger function?\n\n\n\n\n\n\n\n\n\n\n\nFirst of all, a little background.\nWe have a table which is used as a trigger table for\nentering and processing data for a network monitoring system.\nEssentially, we insert a set of columns into a table, and\neach row fires a trigger function which calls a very large stored procedure\nwhich aggregates data, etc.  
At that point, the row is deleted from the\ntemp table.\nCurrently, records are transferred from the data collector\nas a series of multi-row inserts.\nBefore going through the exercise of recoding, and given the\nfact that each of this inserts fires of a trigger, will I see any noticeable\nperformance?\n \nThe table definition follows:\n \nCREATE TABLE tbltmptests\n(\n  tmptestsysid bigserial NOT NULL,\n  testhash character varying(32),\n  testtime timestamp with time zone,\n  statusid integer,\n  replytxt text,\n  replyval real,\n  groupid integer,\n  CONSTRAINT tbltmptests_pkey PRIMARY KEY\n(tmptestsysid)\n)\nWITH (\n  OIDS=FALSE\n);\nALTER TABLE tbltmptests OWNER TO postgres;\n \n-- Trigger: tbltmptests_tr on tbltmptests\n \n-- DROP TRIGGER tbltmptests_tr ON tbltmptests;\n \nCREATE TRIGGER tbltmptests_tr\n  AFTER INSERT\n  ON tbltmptests\n  FOR EACH ROW\n  EXECUTE PROCEDURE fn_testtrigger();\n \n \nAnother question – is there anything special we need\nto do to handle the primary constraint field?\n \nNow, on a related note and looking forward to the streaming\nreplication of v9, will this work with it, since we have multiple tables being\nupdate by a trigger function?", "msg_date": "Thu, 15 Jul 2010 11:27:46 -0600", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question of using COPY on a table with triggers" }, { "msg_contents": "> Essentially, we insert a set of columns into a table, and each row fires\n> a trigger function which calls a very large stored procedure\n\n\nFor inserting lots of rows, COPY is much faster than INSERT because it \nparses data (a lot) faster and is more \"data-stream-friendly\". However the \nactual inserting into the tbale and trigger-calling has to be done for \nboth.\n\nIf the trigger is a \"very large stored procedure\" it is very likely that \nexecuting it will take a lot more time than parsing & executing the \nINSERT. So, using COPY instead of INSERT will not gain you anything.\n", "msg_date": "Fri, 16 Jul 2010 00:46:54 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question of using COPY on a table with triggers" }, { "msg_contents": "That is what I thought.\r\nThe trigger calls a 3000 row stored procedure which does all of the calculations to aggregate data into 3 separate tables and then insert the raw data point into a 4th table.\r\n\r\n\r\n> -----Original Message-----\r\n> From: Pierre C [mailto:[email protected]]\r\n> Sent: Thursday, July 15, 2010 4:47 PM\r\n> To: Benjamin Krajmalnik; [email protected]\r\n> Subject: Re: [PERFORM] Question of using COPY on a table with triggers\r\n> \r\n> > Essentially, we insert a set of columns into a table, and each row\r\n> fires\r\n> > a trigger function which calls a very large stored procedure\r\n> \r\n> \r\n> For inserting lots of rows, COPY is much faster than INSERT because it\r\n> parses data (a lot) faster and is more \"data-stream-friendly\". However\r\n> the\r\n> actual inserting into the tbale and trigger-calling has to be done for\r\n> both.\r\n> \r\n> If the trigger is a \"very large stored procedure\" it is very likely\r\n> that\r\n> executing it will take a lot more time than parsing & executing the\r\n> INSERT. 
So, using COPY instead of INSERT will not gain you anything.\r\n", "msg_date": "Thu, 15 Jul 2010 17:28:39 -0600", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question of using COPY on a table with triggers" }, { "msg_contents": "\"Benjamin Krajmalnik\" <[email protected]> writes:\n> That is what I thought.\n> The trigger calls a 3000 row stored procedure which does all of the calculations to aggregate data into 3 separate tables and then insert the raw data point into a 4th table.\n\nYouch. Seems like you might want to rethink the idea of doing those\ncalculations incrementally for each added row. Wouldn't it be better\nto add all the new data and then do the aggregation once?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jul 2010 20:24:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question of using COPY on a table with triggers " } ]
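A hypothetical sketch of the batch approach Tom Lane suggests, built around the tbltmptests definition posted at the top of the thread. Disabling the trigger for the load and the set-based routine fn_process_batch() are illustrations only, not part of the original system; note that DISABLE TRIGGER requires table ownership and switches the trigger off for every session until it is re-enabled:

BEGIN;

-- Skip the per-row trigger while the collector's batch is loaded.
ALTER TABLE tbltmptests DISABLE TRIGGER tbltmptests_tr;

-- One COPY stream instead of many multi-row INSERTs; tmptestsysid is filled
-- in by its bigserial default, and the data rows follow on stdin, ended by \.
COPY tbltmptests (testhash, testtime, statusid, replytxt, replyval, groupid)
FROM STDIN;

-- Hypothetical set-based routine that aggregates the whole batch in one pass,
-- doing what fn_testtrigger() currently does row by row, then clears the rows.
SELECT fn_process_batch();

ALTER TABLE tbltmptests ENABLE TRIGGER tbltmptests_tr;

COMMIT;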
[ { "msg_contents": "I am sending u the query along with execution plan. Please help\n\nexplain analyze select \ns.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr \nas svcdescr\nfrom vwsubsmin s\ninner join packages p on s.svcno=p.pkgno\ninner join account a on a.actno=s.actno\ninner join ssgdom d on a.domno=d.domno\ninner join (select subsno from getexpiringsubs(1,cast(2 as \ninteger),cast(3 as double precision),\n'4') as (subsno int,expirydt timestamp without time zone,balcpt double \nprecision)) as e\non s.subsno=e.subsno\nwhere s.status <=15 and d.domno=273\norder by d.domname,s.expirydt,a.actname\n\n\n\n\"Sort (cost=79056.66..79056.67 rows=1 width=330) (actual \ntime=220244.497..220244.497 rows=0 loops=1)\"\n\" Sort Key: d.domname, CASE WHEN (v.expirydt IS NULL) THEN b.expirydt \nELSE v.expirydt END, a.actname\"\n\" -> Nested Loop (cost=78354.14..79056.65 rows=1 width=330) (actual \ntime=220244.457..220244.457 rows=0 loops=1)\"\n\" -> Nested Loop (cost=78354.14..79051.44 rows=1 width=296) \n(actual time=220244.422..220244.422 rows=0 loops=1)\"\n\" -> Hash Join (cost=78354.14..79047.51 rows=1 width=268) \n(actual time=220244.389..220244.389 rows=0 loops=1)\"\n\" Hash Cond: (\"outer\".actno = \"inner\".actno)\"\n\" -> Merge Join (cost=77605.44..78297.14 rows=333 \nwidth=221) (actual time=216573.695..216573.695 rows=0 loops=1)\"\n\" Merge Cond: (\"outer\".subsno = \"inner\".subsno)\"\n\" -> Merge Left Join (cost=77543.11..78080.70 \nrows=58313 width=225) (actual time=207017.909..207017.909 rows=1 loops=1)\"\n\" Merge Cond: ((\"outer\".subsno = \n\"inner\".subsno) AND (\"outer\".actno = \"inner\".actno))\"\n\" -> Sort (cost=36864.71..37010.49 \nrows=58313 width=144) (actual time=182412.046..182412.046 rows=1 loops=1)\"\n\" Sort Key: s.subsno, b.actno\"\n\" -> Hash Left Join \n(cost=10628.10..27483.78 rows=58313 width=144) (actual \ntime=155815.373..180210.411 rows=146953 loops=1)\"\n\" Hash Cond: (\"outer\".subsno \n= \"inner\".subsno)\"\n\" -> Hash Join \n(cost=6486.20..18594.41 rows=58313 width=136) (actual \ntime=154276.012..171743.982 rows=146953 loops=1)\"\n\" Hash Cond: \n(\"outer\".subsno = \"inner\".subsno)\"\n\" -> Seq Scan on \nactbal b (cost=0.00..4155.37 rows=174937 width=67) (actual \ntime=15.862..853.287 rows=174937 loops=1)\"\n\" -> Hash \n(cost=5599.42..5599.42 rows=58313 width=69) (actual \ntime=154252.586..154252.586 rows=146954 loops=1)\"\n\" -> Seq Scan on \nsubs s (cost=0.00..5599.42 rows=58313 width=69) (actual \ntime=409.370..153354.835 rows=146954 loops=1)\"\n\" Filter: \n(CASE WHEN ((status = 0) AND issubsexpired(subsno)) THEN 15 ELSE status \nEND <= 15)\"\n\" -> Hash \n(cost=2795.32..2795.32 rows=161032 width=12) (actual \ntime=1539.306..1539.306 rows=161032 loops=1)\"\n\" -> Seq Scan on \ncpnsubs c (cost=0.00..2795.32 rows=161032 width=12) (actual \ntime=445.696..1202.186 rows=161032 loops=1)\"\n\" -> Sort (cost=40678.41..40711.82 \nrows=13364 width=93) (actual time=24604.798..24604.798 rows=1 loops=1)\"\n\" Sort Key: v.subsno, v.actno\"\n\" -> Subquery Scan v \n(cost=36763.41..39330.87 rows=13364 width=93) (actual \ntime=23786.875..24304.328 rows=67576 loops=1)\"\n\" -> GroupAggregate \n(cost=36763.41..39197.23 rows=13364 width=61) (actual \ntime=23786.791..24241.895 rows=67576 loops=1)\"\n\" -> Sort \n(cost=36763.41..36942.35 rows=71576 width=61) (actual \ntime=23785.939..23849.227 rows=72402 loops=1)\"\n\" Sort Key: \nu.actno, u.subsno\"\n\" -> Hash Join \n(cost=5141.67..28427.93 rows=71576 width=61) (actual \ntime=7397.590..21721.903 
rows=72402 loops=1)\"\n\" Hash \nCond: (\"outer\".ctno = \"inner\".ctno)\"\n\" -> Hash \nJoin (cost=5061.16..27273.78 rows=71576 width=32) (actual \ntime=6002.278..20257.764 rows=72402 loops=1)\"\n\" \nHash Cond: (\"outer\".cpno = \"inner\".cpno)\"\n\" -> \nSeq Scan on cpn c (cost=0.00..10132.94 rows=443194 width=12) (actual \ntime=1038.150..9313.905 rows=443194 loops=1)\"\n\" -> \nHash (cost=4252.22..4252.22 rows=71576 width=36) (actual \ntime=3524.715..3524.715 rows=72402 loops=1)\"\n\" \n-> Bitmap Heap Scan on cpnusage u (cost=448.52..4252.22 rows=71576 \nwidth=36) (actual time=832.658..3474.318 rows=72402 loops=1)\"\n\" \nRecheck Cond: (status < 15)\"\n\" \n-> Bitmap Index Scan on cpnusage_status (cost=0.00..448.52 rows=71576 \nwidth=0) (actual time=465.807..465.807 rows=72402 loops=1)\"\n\" \nIndex Cond: (status < 15)\"\n\" -> Hash \n(cost=79.75..79.75 rows=304 width=41) (actual time=1395.192..1395.192 \nrows=304 loops=1)\"\n\" -> \nHash Join (cost=40.60..79.75 rows=304 width=41) (actual \ntime=1394.251..1395.072 rows=304 loops=1)\"\n\" \nHash Cond: (\"outer\".ctno = \"inner\".ctno)\"\n\" \n-> Hash Left Join (cost=26.80..61.39 rows=304 width=37) (actual \ntime=932.963..933.672 rows=304 loops=1)\"\n\" \nHash Cond: ((\"outer\".price_class_id)::text = \n(\"inner\".price_class_id)::text)\"\n\" \n-> Hash Left Join (cost=18.34..49.62 rows=304 width=52) (actual \ntime=97.380..97.935 rows=304 loops=1)\"\n\" \nHash Cond: (\"outer\".validprduom = \"inner\".uomno)\"\n\" \n-> Hash Left Join (cost=17.26..43.98 rows=304 width=56) (actual \ntime=97.356..97.818 rows=304 loops=1)\"\n\" \nHash Cond: (\"outer\".timelimituom = \"inner\".uomno)\"\n\" \n-> Hash Left Join (cost=16.19..38.35 rows=304 width=51) (actual \ntime=51.738..52.119 rows=304 loops=1)\"\n\" \nHash Cond: (\"outer\".stno = \"inner\".svccat)\"\n\" \n-> Hash Left Join (cost=15.16..32.76 rows=304 width=55) (actual \ntime=2.668..2.953 rows=304 loops=1)\"\n\" \nHash Cond: (\"outer\".domno = \"inner\".domno)\"\n\" \n-> Seq Scan on cpntype q (cost=0.00..13.04 rows=304 width=59) (actual \ntime=0.001..0.099 rows=304 loops=1)\"\n\" \n-> Hash (cost=14.13..14.13 rows=413 width=4) (actual time=2.599..2.599 \nrows=413 loops=1)\"\n\" \n-> Seq Scan on ssgdom d (cost=0.00..14.13 rows=413 width=4) (actual \ntime=0.696..2.447 rows=413 loops=1)\"\n\" \n-> Hash (cost=1.02..1.02 rows=2 width=4) (actual time=49.041..49.041 \nrows=2 loops=1)\"\n\" \n-> Seq Scan on svccat s (cost=0.00..1.02 rows=2 width=4) (actual \ntime=48.997..48.999 rows=2 loops=1)\"\n\" \n-> Hash (cost=1.06..1.06 rows=6 width=13) (actual time=45.606..45.606 \nrows=6 loops=1)\"\n\" \n-> Seq Scan on timeuom u1 (cost=0.00..1.06 rows=6 width=13) (actual \ntime=45.593..45.599 rows=6 loops=1)\"\n\" \n-> Hash (cost=1.06..1.06 rows=6 width=4) (actual time=0.006..0.006 \nrows=6 loops=1)\"\n\" \n-> Seq Scan on timeuom u2 (cost=0.00..1.06 rows=6 width=4) (actual \ntime=0.002..0.002 rows=6 loops=1)\"\n\" \n-> Hash (cost=7.77..7.77 rows=277 width=15) (actual \ntime=835.538..835.538 rows=277 loops=1)\"\n\" \n-> Seq Scan on price_class l (cost=0.00..7.77 rows=277 width=15) \n(actual time=732.953..835.436 rows=277 loops=1)\"\n\" \n-> Hash (cost=13.04..13.04 rows=304 width=4) (actual \ntime=461.270..461.270 rows=304 loops=1)\"\n\" \n-> Seq Scan on cpntype t (cost=0.00..13.04 rows=304 width=4) (actual \ntime=234.548..461.194 rows=304 loops=1)\"\n\" -> Sort (cost=62.33..64.83 rows=1000 \nwidth=4) (actual time=9554.783..9554.783 rows=0 loops=1)\"\n\" Sort Key: getexpiringsubs.subsno\"\n\" -> Function Scan on 
getexpiringsubs \n(cost=0.00..12.50 rows=1000 width=4) (actual time=9554.086..9554.086 \nrows=0 loops=1)\"\n\" -> Hash (cost=748.00..748.00 rows=280 width=47) \n(actual time=3670.649..3670.649 rows=646 loops=1)\"\n\" -> Index Scan using account_domno on account \na (cost=0.00..748.00 rows=280 width=47) (actual time=455.439..3670.133 \nrows=646 loops=1)\"\n\" Index Cond: (273 = domno)\"\n\" -> Index Scan using packages_pkey on packages p \n(cost=0.00..3.91 rows=1 width=32) (never executed)\"\n\" Index Cond: (\"outer\".svcno = p.pkgno)\"\n\" -> Index Scan using ssgdom_pkey on ssgdom d (cost=0.00..5.19 \nrows=1 width=38) (never executed)\"\n\" Index Cond: (domno = 273)\"\n\"Total runtime: 220481.780 ms\"\n\n-- \nThanks and Regards, Srikanth Kata\n", "msg_date": "Sat, 17 Jul 2010 14:20:26 +0530", "msg_from": "Srikanth <[email protected]>", "msg_from_op": true, "msg_subject": "What is the best way to optimize the query." }, { "msg_contents": "Hello,\n\nOn 17 July 2010 12:50, Srikanth <[email protected]> wrote:\n> I am sending u the query along with execution plan. Please help\n>\n\nIt would be better if you start with it:\n\nhttp://www.postgresql.org/docs/8.4/interactive/indexes.html\nhttp://www.mohawksoft.org/?q=node/56\n\n\n-- \nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com /\nLinkedin: http://ru.linkedin.com/in/grayhemp /\nJID/GTalk: [email protected] / Skype: gray-hemp / ICQ: 29353802\n", "msg_date": "Mon, 19 Jul 2010 10:07:26 +0400", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to optimize the query." }, { "msg_contents": "On 17/07/10 16:50, Srikanth wrote:\n> I am sending u the query along with execution plan. Please help\n> \n> explain analyze select\n> s.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr\n> as svcdescr\n> from vwsubsmin s\n> inner join packages p on s.svcno=p.pkgno\n> inner join account a on a.actno=s.actno\n> inner join ssgdom d on a.domno=d.domno\n> inner join (select subsno from getexpiringsubs(1,cast(2 as\n> integer),cast(3 as double precision),\n> '4') as (subsno int,expirydt timestamp without time zone,balcpt double\n> precision)) as e\n> on s.subsno=e.subsno\n> where s.status <=15 and d.domno=273\n> order by d.domname,s.expirydt,a.actname\n\n\nLots of those names in the join list are views, aren't they?\n\nI doubt anyone can help you unless you show the definition of your views\nand the schema of your tables.\n\nAlso, consider attaching the EXPLAIN ANALYZE output as a text\nattachment, as your mail client is \"helpfully\" rewrapping it into\nunintelligible gibberish.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 19 Jul 2010 14:08:43 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to optimize the query." }, { "msg_contents": "On Sat, Jul 17, 2010 at 4:50 AM, Srikanth <[email protected]> wrote:\n> I am sending u the query along with execution plan. 
Please help\n\nLooks to me like your biggest problem is right here:\n\n\" -> Seq Scan\non subs s (cost=0.00..5599.42 rows=58313 width=69) (actual\ntime=409.370..153354.835 rows=146954 loops=1)\"\n\" Filter:\n(CASE WHEN ((status = 0) AND issubsexpired(subsno)) THEN 15 ELSE\nstatus END <= 15)\"\n\n1ms/row is pretty slow, so I'm guessing that the issubsexpired()\nfunction is not too speedy.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 27 Jul 2010 14:11:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the best way to optimize the query." } ]
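Following up on the reply above, which points at the issubsexpired() filter as the bottleneck: a hedged sketch of one way to keep the function away from the bulk of the rows is to filter on the raw status column (cheap and indexable) and compute the "effective" status only in the select list. Only status, subsno and issubsexpired() are taken from the plan; the other column names, the function's argument type and the surrounding view definition are assumptions.

SELECT s.subsno,
       s.actno,
       CASE WHEN s.status = 0 AND issubsexpired(s.subsno)
            THEN 15
            ELSE s.status
       END AS effective_status
FROM subs s
WHERE s.status <= 15;   -- for this particular <= 15 filter the CASE cannot
                        -- change which rows qualify, because both 0 and 15
                        -- already fall inside the band

-- If the function only reads data, declaring it STABLE (argument type is a
-- guess) gives the planner more freedom with it:
-- ALTER FUNCTION issubsexpired(integer) STABLE;

Since the CASE almost certainly lives inside the vwsubsmin view, the same change belongs in the view definition rather than in each query that uses it.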
[ { "msg_contents": "Hi,\n\nI have a situation to handle a log table which would accumulate a\nlarge amount of logs. This table only involves insert and query\noperations. To limit the table size, I tried to split this table by\ndate. However, the number of the logs is still large (46 million\nrecords per day). To further limit its size, I tried to split this log\ntable by log type. However, this action does not improve the\nperformance. It is much slower than the big table solution. I guess\nthis is because I need to pay more cost on the auto-vacuum/analyze for\nall split tables.\n\nCan anyone comment on this situation? Thanks in advance.\n\n\nkuopo.\n", "msg_date": "Mon, 19 Jul 2010 14:27:51 +0800", "msg_from": "kuopo <[email protected]>", "msg_from_op": true, "msg_subject": "how to handle a big table for data log" }, { "msg_contents": "Large tables, by themselves, are not necessarily a problem. The problem is what you might be trying to do with them. Depending on the operations you are trying to do, partitioning the table might help performance or make it worse.\n \nWhat kind of queries are you running? How many days of history are you keeping? Could you post an explain analyze output of a query that is being problematic?\nGiven the amount of data you hint about, your server configuration, and custom statistic targets for the big tables in question would be useful.\n\n>>> kuopo <[email protected]> 7/19/2010 1:27 AM >>>\nHi,\n\nI have a situation to handle a log table which would accumulate a\nlarge amount of logs. This table only involves insert and query\noperations. To limit the table size, I tried to split this table by\ndate. However, the number of the logs is still large (46 million\nrecords per day). To further limit its size, I tried to split this log\ntable by log type. However, this action does not improve the\nperformance. It is much slower than the big table solution. I guess\nthis is because I need to pay more cost on the auto-vacuum/analyze for\nall split tables.\n\nCan anyone comment on this situation? Thanks in advance.\n\n\nkuopo.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\nLarge tables, by themselves, are not necessarily a problem. The problem is what you might be trying to do with them. Depending on the operations you are trying to do, partitioning the table might help performance or make it worse.\n \nWhat kind of queries are you running? How many days of history are you keeping? Could you post an explain analyze output of a query that is being problematic?\nGiven the amount of data you hint about, your server configuration, and custom statistic targets for the big tables in question would be useful.>>> kuopo <[email protected]> 7/19/2010 1:27 AM >>>Hi,I have a situation to handle a log table which would accumulate alarge amount of logs. This table only involves insert and queryoperations. To limit the table size, I tried to split this table bydate. However, the number of the logs is still large (46 millionrecords per day). To further limit its size, I tried to split this logtable by log type. However, this action does not improve theperformance. It is much slower than the big table solution. I guessthis is because I need to pay more cost on the auto-vacuum/analyze forall split tables.Can anyone comment on this situation? 
Thanks in advance.kuopo.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 19 Jul 2010 10:37:55 -0500", "msg_from": "\"Jorge Montero\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to handle a big table for data log" }, { "msg_contents": "Let me make my problem clearer. Here is a requirement to log data from a set\nof objects consistently. For example, the object maybe a mobile phone and it\nwill report its location every 30s. To record its historical trace, I create\na table like\n*CREATE TABLE log_table\n(\n id integer NOT NULL,\n data_type integer NOT NULL,\n data_value double precision,\n ts timestamp with time zone NOT NULL,\n CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)\n)*;\nIn my location log example, the field data_type could be longitude or\nlatitude.\n\nI create a primary key (id, data_type, ts) to make my queries more\nefficient. The major type of queries would ask the latest data_value of a\ndata_type by given id and timestamp. For this kind of query, I make the\nfollowing SQL statement\n*SELECT * FROM log_table WHERE id=[given id] and data_type='longitude' and\n(ts = (SELECT max(ts) FROM log_table WHERE id=[given id]and\ndata_type='longitude' and ts<=[given timestamp]));*\nAccording to my evaluation, its performance is acceptable.\n\nHowever, I concern more about the performance of insert operation. As I have\nmentioned, the log_table is growing so I decide to partition it. Currently,\nI partition it by date and only keep it 60 days. This partition is helpful.\nBut when I partition it by data_type (in my case, the number of data_type is\nlimited, say 10), the performance of insert operation will be degraded. I\nguess this is caused by multiple vacuum/analyze on these partitioned\ndata_type log tables. However, if I put all data_type logs together, I can\nexpect that the performance of insert operation will also have degradation\nif I want to expand the system to support more mobile phones or more\ndata_type.\n\nThis is my current situation. Please give me some hints to improve the\nperformance (especially for the insert part).\n\n\nkuopo.\n\n\nOn Mon, Jul 19, 2010 at 11:37 PM, Jorge Montero <\[email protected]> wrote:\n> Large tables, by themselves, are not necessarily a problem. The problem is\n> what you might be trying to do with them. Depending on the operations you\n> are trying to do, partitioning the table might help performance or make it\n> worse.\n>\n> What kind of queries are you running? How many days of history are you\n> keeping? Could you post an explain analyze output of a query that is being\n> problematic?\n> Given the amount of data you hint about, your server configuration, and\n> custom statistic targets for the big tables in question would be useful.\n>\n>>>> kuopo <[email protected]> 7/19/2010 1:27 AM >>>\n> Hi,\n>\n> I have a situation to handle a log table which would accumulate a\n> large amount of logs. This table only involves insert and query\n> operations. To limit the table size, I tried to split this table by\n> date. However, the number of the logs is still large (46 million\n> records per day). To further limit its size, I tried to split this log\n> table by log type. However, this action does not improve the\n> performance. It is much slower than the big table solution. 
I guess\n> this is because I need to pay more cost on the auto-vacuum/analyze for\n> all split tables.\n>\n> Can anyone comment on this situation? Thanks in advance.\n>\n>\n> kuopo.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nLet me make my problem clearer. Here is a requirement to log data from a set of objects consistently. For example, the object maybe a mobile phone and it will report its location every 30s. To record its historical trace, I create a table like\nCREATE TABLE log_table(  id integer NOT NULL,  data_type integer NOT NULL,  data_value double precision,  ts timestamp with time zone NOT NULL,  CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)\n);In my location log example, the field data_type could be longitude or latitude.I create a primary key (id, data_type, ts) to make my queries more efficient. The major type of queries would ask the latest data_value of a data_type by given id and timestamp. For this kind of query, I make the following SQL statement\nSELECT * FROM log_table WHERE id=[given id] and data_type='longitude' and (ts = (SELECT max(ts) FROM log_table WHERE id=[given id]and data_type='longitude' and ts<=[given timestamp]));According to my evaluation, its performance is acceptable.\nHowever, I concern more about the performance of insert operation. As I have mentioned, the log_table is growing so I decide to partition it. Currently, I partition it by date and only keep it 60 days. This partition is helpful. But when I partition it by data_type (in my case, the number of data_type is limited, say 10), the performance of insert operation will be degraded. I guess this is caused by multiple vacuum/analyze on these partitioned data_type log tables. However, if I put all data_type logs together, I can expect that the performance of insert operation will also have degradation if I want to expand the system to support more mobile phones or more data_type.\nThis is my current situation. Please give me some hints to improve the performance (especially for the insert part).kuopo.On Mon, Jul 19, 2010 at 11:37 PM, Jorge Montero <[email protected]> wrote:\n> Large tables, by themselves, are not necessarily a problem. The problem is> what you might be trying to do with them. Depending on the operations you> are trying to do, partitioning the table might help performance or make it\n> worse.>  > What kind of queries are you running? How many days of history are you> keeping? Could you post an explain analyze output of a query that is being> problematic?> Given the amount of data you hint about, your server configuration, and\n> custom statistic targets for the big tables in question would be useful.>>>>> kuopo <[email protected]> 7/19/2010 1:27 AM >>>> Hi,\n>> I have a situation to handle a log table which would accumulate a> large amount of logs. This table only involves insert and query> operations. To limit the table size, I tried to split this table by\n> date. However, the number of the logs is still large (46 million> records per day). To further limit its size, I tried to split this log> table by log type. However, this action does not improve the\n> performance. It is much slower than the big table solution. I guess> this is because I need to pay more cost on the auto-vacuum/analyze for> all split tables.>> Can anyone comment on this situation? 
Thanks in advance.\n>>> kuopo.>> --> Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance>", "msg_date": "Wed, 21 Jul 2010 11:51:44 +0800", "msg_from": "kuopo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to handle a big table for data log" }, { "msg_contents": "On Tue, Jul 20, 2010 at 9:51 PM, kuopo <[email protected]> wrote:\n\n> Let me make my problem clearer. Here is a requirement to log data from a\n> set of objects consistently. For example, the object maybe a mobile phone\n> and it will report its location every 30s. To record its historical trace, I\n> create a table like\n> *CREATE TABLE log_table\n> (\n> id integer NOT NULL,\n> data_type integer NOT NULL,\n> data_value double precision,\n> ts timestamp with time zone NOT NULL,\n> CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)\n> )*;\n> In my location log example, the field data_type could be longitude or\n> latitude.\n>\n>\nI witnessed GridSQL in action many moons ago that managed a massive database\nlog table. From memory, the configuration was 4 database servers with a\ncumulative 500M+ records and queries were running under 5ms. May be worth a\nlook.\n\nhttp://www.enterprisedb.com/community/projects/gridsql.do\n\nGreg\n\nOn Tue, Jul 20, 2010 at 9:51 PM, kuopo <[email protected]> wrote:\nLet me make my problem clearer. Here is a requirement to log data from a set of objects consistently. For example, the object maybe a mobile phone and it will report its location every 30s. To record its historical trace, I create a table like\nCREATE TABLE log_table(  id integer NOT NULL,  data_type integer NOT NULL,  data_value double precision,  ts timestamp with time zone NOT NULL,  CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)\n\n);In my location log example, the field data_type could be longitude or latitude.I witnessed GridSQL in action many moons ago that managed a massive database log table.  From memory, the configuration was 4 database servers with a cumulative 500M+ records and queries were running under 5ms.  May be worth a look.\nhttp://www.enterprisedb.com/community/projects/gridsql.doGreg", "msg_date": "Tue, 27 Jul 2010 14:02:08 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to handle a big table for data log" }, { "msg_contents": "On 7/20/10 8:51 PM, kuopo wrote:\n> Let me make my problem clearer. Here is a requirement to log data from a\n> set of objects consistently. For example, the object maybe a mobile\n> phone and it will report its location every 30s. To record its\n> historical trace, I create a table like\n> /CREATE TABLE log_table\n> (\n> id integer NOT NULL,\n> data_type integer NOT NULL,\n> data_value double precision,\n> ts timestamp with time zone NOT NULL,\n> CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)\n> )/;\n> In my location log example, the field data_type could be longitude or\n> latitude.\n\nIf what you have is longitude and latitude, why this brain-dead EAV\ntable structure? You're making the table twice as large and half as\nuseful for no particular reason.\n\nUse the \"point\" datatype instead of anonymizing the data.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 27 Jul 2010 14:13:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to handle a big table for data log" } ]
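A sketch of the layout hinted at in the last reply of the thread above, combined with the daily partitioning already in use: one row per fix with a point column instead of one row per (id, data_type), a CHECK per child so constraint exclusion can prune, and a LIMIT 1 query instead of the max() subquery. Every object name here is made up for illustration.

CREATE TABLE position_log (
    id  integer                  NOT NULL,   -- reporting object, e.g. the phone
    pos point                    NOT NULL,   -- (longitude, latitude) in one column
    ts  timestamp with time zone NOT NULL
);

-- one child per day, with a half-open CHECK so no second is lost at midnight
CREATE TABLE position_log_20100721 (
    CHECK (ts >= '2010-07-21 00:00+00' AND ts < '2010-07-22 00:00+00')
) INHERITS (position_log);

CREATE INDEX position_log_20100721_id_ts ON position_log_20100721 (id, ts);

-- latest fix at or before a given time, without the max() subquery
SELECT id, pos, ts
FROM position_log
WHERE id = 42
  AND ts >= '2010-07-21 00:00+00'   -- lower bound keeps the scan on one child
  AND ts <= '2010-07-21 12:00+00'
ORDER BY ts DESC
LIMIT 1;

On 8.4 the ORDER BY ... LIMIT 1 is not pushed down into the children, so the lower bound (or querying the relevant child directly) is what keeps the sort small.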
[ { "msg_contents": "Hi people,\n\nI have a particular performance problem under a system installed.\n\nIn the lab I have an old Microside + Dual Core machine 1GB RAM with 40gb HD\nIDE and\na newer machine HP DL 380 G5 4GB RAM and 500GB SAS under RAID 0 (4 disks)\nP400i Smart Array Controller.\n\n**Both with Linux Kernel 2 .4.37.9 and Postgresql 7.2**\n\nUnder a insertion test we get a performance of 2.5 secs under 2000 inserts\n(table with a single char(50) column) in the IDE disk.\nAnd 500GB RAID 0 (4 disks!) and 37.5 secs under the same test!\n\nI tried:\n- All the postgresql.conf tuning possibles\n- All the kernel tuning values possibles\n- fstab mount options\n- limits.conf values\n- I/O tests with boonie++ that shows that P400 is superior than IDE - as\nexpected\n- 'massive' Google search combinations without answers\n- Mandriva 2009.1: result is about 12 secs (but it need to be kernel 2.4\nwith PGSQL 7.2)\n\n\nThe real case:\n~100k database insertions that runs in 2 min in the IDE, against ~40 minutes\n(P400i) in the HP.\n\nThe same test in the HP Blade G5 P200 controller works fine.\n\nThanks for any answers/analisys.\n\nBest regards.\n\n-- \nDaniel Ferreira\n\nHi people, I have a particular performance problem under a system installed.In the lab I have an old Microside + Dual Core machine 1GB RAM with 40gb HD IDE and a newer machine HP DL 380 G5 4GB RAM and 500GB SAS under RAID 0 (4 disks) P400i Smart Array Controller.\n**Both with Linux Kernel 2 .4.37.9 and Postgresql 7.2**Under a insertion test we get a performance of 2.5 secs under 2000 inserts (table with a single char(50) column) in the IDE disk.And 500GB RAID 0 (4 disks!) and 37.5 secs under the same test!\nI tried: - All the postgresql.conf tuning possibles- All the kernel tuning values possibles- fstab mount options- limits.conf values- I/O tests with boonie++ that shows that P400 is superior than IDE - as expected\n- 'massive' Google search combinations without answers- Mandriva 2009.1: result is about 12 secs (but it need to be kernel 2.4 with PGSQL 7.2)The real case: ~100k database insertions that runs in 2 min in the IDE, against ~40 minutes (P400i) in the HP.\nThe same test in the HP Blade G5 P200 controller works fine.Thanks for any answers/analisys.Best regards. -- Daniel Ferreira", "msg_date": "Mon, 19 Jul 2010 09:24:01 -0300", "msg_from": "Daniel Ferreira de Lima <[email protected]>", "msg_from_op": true, "msg_subject": "IDE x SAS RAID 0 on HP DL 380 G5 P400i controller performance problem" }, { "msg_contents": "Daniel Ferreira de Lima <[email protected]> wrote:\n \n> **Both with Linux Kernel 2 .4.37.9 and Postgresql 7.2**\n \nWow. You really need to upgrade.\n \n> Under a insertion test we get a performance of 2.5 secs under 2000\n> inserts (table with a single char(50) column) in the IDE disk.\n> And 500GB RAID 0 (4 disks!) and 37.5 secs under the same test!\n \nIs each insert in its own database transaction? (That is you've\ndone nothing to BEGIN a transaction, do the inserts, and then\nCOMMIT?) If so, you're old IDE is lying to PostgreSQL when it says\nit has committed each insert. 
If you pull the power cord on the\ncomputer a few times during this insert test, you'll find you wind\nup missing rows which were supposedly inserted, and you will likely\nhave database corruption, which might render the database totally\nunusable.\n \nIf you implement your RAID with a good controller which has a\nbattered backed-up RAM cache, and that is configured to write-back,\nyou'll see much better performance.\n \nSearch the archives for BBU and you'll find many other discussions\nof this.\n \n-Kevin\n", "msg_date": "Mon, 19 Jul 2010 08:28:03 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IDE x SAS RAID 0 on HP DL 380 G5 P400i\n\tcontroller performance problem" }, { "msg_contents": "> > **Both with Linux Kernel 2 .4.37.9 and Postgresql 7.2**\n>\n> Wow. You really need to upgrade.\n>\n\nYes, but unfortunately, actually it's impossible and economically\ninviable...\n\n\n\n> > Under a insertion test we get a performance of 2.5 secs under 2000\n> > inserts (table with a single char(50) column) in the IDE disk.\n> > And 500GB RAID 0 (4 disks!) and 37.5 secs under the same test!\n>\n> Is each insert in its own database transaction? (That is you've\n> done nothing to BEGIN a transaction, do the inserts, and then\n> COMMIT?) If so, you're old IDE is lying to PostgreSQL when it says\n> it has committed each insert. If you pull the power cord on the\n> computer a few times during this insert test, you'll find you wind\n> up missing rows which were supposedly inserted, and you will likely\n> have database corruption, which might render the database totally\n> unusable.\n>\n\nIt's a single inserts.sql with 2000 inserts.\nBut it's interesting test, I'll check the inserted data if it exist or not.\nMaybe it's inserting only in RAM and not at disk and this nice performance\nis fake.\n\n\n> If you implement your RAID with a good controller which has a\n> battered backed-up RAM cache, and that is configured to write-back,\n> you'll see much better performance.\n>\n\nI'm working on it.. a have been seen posts in the web of people saying\nthings\nabout the BBU in P400i be disabled by default. Or problems with older\nversion of kernel module (cciss) turn it off by some kind of\nincompatibility.\n\nSearch the archives for BBU and you'll find many other discussions\n> of this.\n>\n> -Kevin\n>\n\nThanks, Kevin.\n\n-- \nDaniel Ferreira\n\n \n> **Both with Linux Kernel 2 .4.37.9 and Postgresql 7.2**\n\nWow.  You really need to upgrade.Yes, but unfortunately, actually it's impossible and economically inviable... \n\n> Under a insertion test we get a performance of 2.5 secs under 2000\n> inserts (table with a single char(50) column) in the IDE disk.\n> And 500GB RAID 0 (4 disks!) and 37.5 secs under the same test!\n\nIs each insert in its own database transaction?  (That is you've\ndone nothing to BEGIN a transaction, do the inserts, and then\nCOMMIT?)  If so, you're old IDE is lying to PostgreSQL when it says\nit has committed each insert.  If you pull the power cord on the\ncomputer a few times during this insert test, you'll find you wind\nup missing rows which were supposedly inserted, and you will likely\nhave database corruption, which might render the database totally\nunusable.It's a single inserts.sql with 2000 inserts.  
But it's interesting test, I'll check the inserted data if it exist or not.Maybe it's inserting only in RAM and not at disk and this nice performance is fake.\n \n\nIf you implement your RAID with a good controller which has a\nbattered backed-up RAM cache, and that is configured to write-back,\nyou'll see much better performance.I'm working on it.. a have been seen posts in the web of people saying things about the BBU in P400i be disabled by default. Or problems with older\nversion of kernel module (cciss) turn it off by some kind of incompatibility.\n\nSearch the archives for BBU and you'll find many other discussions\nof this.\n\n-Kevin Thanks, Kevin.-- Daniel Ferreira", "msg_date": "Mon, 19 Jul 2010 11:20:17 -0300", "msg_from": "Daniel Ferreira de Lima <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IDE x SAS RAID 0 on HP DL 380 G5 P400i controller\n\tperformance problem" }, { "msg_contents": "Daniel Ferreira de Lima wrote:\n>\n> \n>\n> > **Both with Linux Kernel 2 .4.37.9 and Postgresql 7.2**\n>\n> Wow. You really need to upgrade.\n>\n>\n> Yes, but unfortunately, actually it's impossible and economically \n> inviable...\n\nGenerally, getting good performance out of PostgreSQL 7.2 is also \nimpossible and economically inviable. You can either put resources \ntoward upgrading to a newer version, which will fix many of the \nperformance issues here, or toward changes to the old version that are \nnot guaranteed to actually improve anything.\n\nThat said, a look into the write-caching+BBU policy on your controller \nis worthwhile. If you're running this application successfully on some \nhardware but not others, that could be a source for the difference.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 19 Jul 2010 10:38:21 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IDE x SAS RAID 0 on HP DL 380 G5 P400i controller\n\tperformance problem" }, { "msg_contents": "> That said, a look into the write-caching+BBU policy on your controller is\n> worthwhile. If you're running this application successfully on some\n> hardware but not others, that could be a source for the difference.\n>\n\nI think it's really a BBU/BBWC problem.\nThe tests that we made in the lab with HP Blade G5 (G6 doesn't support\nkernel version 2.4) turning the battery off show us the 'same' performance\nof the \"pizza-box\" HP DL 380 G5.. an old joke. 40 secs to 2000 insertions:\nlike a chariot.\n\nWe're finding the cache expansion and batteries (and.. why it's not\ndefault???).\n\n\nI think that this thread is now finished.\n\nThanks!\n-- \nDaniel Ferreira\n\n \n\nThat said, a look into the write-caching+BBU policy on your controller is worthwhile.  If you're running this application successfully on some hardware but not others, that could be a source for the difference.\nI think it's really a BBU/BBWC problem. The tests that we made in the lab with HP Blade G5 (G6 doesn't support kernel version 2.4) turning the battery off show us the 'same' performance of the \"pizza-box\" HP DL 380 G5..  an old joke.  40 secs to 2000 insertions: like a chariot.\nWe're finding the cache expansion and batteries (and.. 
why it's not default???).I think that this thread is now finished.Thanks!-- Daniel Ferreira", "msg_date": "Mon, 19 Jul 2010 16:53:37 -0300", "msg_from": "Daniel Ferreira de Lima <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IDE x SAS RAID 0 on HP DL 380 G5 P400i controller\n\tperformance problem" }, { "msg_contents": "If you are using ext3, your performance on the RAID card may also improve if the postgres xlog is not on the same partition as the data. Otherwise, for every transaction commit, all of the data on the whole partition will have to be sync()'d not just the xlog.\n\nAlso, what is the performance difference between all the inserts in one script if you do:\n\n* all your statements in the script\nvs.\n* first line is \"BEGIN;\" then all your statements, then \"COMMIT;\" at the end?\n\nIf these two are about the same on your old IDE drive, then your I/O stack (file system + OS + hardware) is lying to you about fsync(). The latter should be a lot faster on your RAID card if write-back caching is not on.\n\n\n\nOn Jul 19, 2010, at 12:53 PM, Daniel Ferreira de Lima wrote:\n\n\n\nThat said, a look into the write-caching+BBU policy on your controller is worthwhile. If you're running this application successfully on some hardware but not others, that could be a source for the difference.\n\nI think it's really a BBU/BBWC problem.\nThe tests that we made in the lab with HP Blade G5 (G6 doesn't support kernel version 2.4) turning the battery off show us the 'same' performance of the \"pizza-box\" HP DL 380 G5.. an old joke. 40 secs to 2000 insertions: like a chariot.\n\nWe're finding the cache expansion and batteries (and.. why it's not default???).\n\n\nI think that this thread is now finished.\n\nThanks!\n--\nDaniel Ferreira\n\n\nIf you are using ext3, your performance on the RAID card may also improve if the postgres xlog is not on the same partition as the data.  Otherwise, for every transaction commit, all of the data on the whole partition will have to be sync()'d not just the xlog.Also, what is the performance difference between all the inserts in one script if you do:* all your statements in the scriptvs.* first line is \"BEGIN;\"  then all your statements, then \"COMMIT;\" at the end?   If these two are about the same on your old IDE drive, then your I/O stack (file system + OS + hardware) is lying to you about fsync().  The latter should be a lot faster on your RAID card if write-back caching is not on.On Jul 19, 2010, at 12:53 PM, Daniel Ferreira de Lima wrote: \n\nThat said, a look into the write-caching+BBU policy on your controller is worthwhile.  If you're running this application successfully on some hardware but not others, that could be a source for the difference.\nI think it's really a BBU/BBWC problem. The tests that we made in the lab with HP Blade G5 (G6 doesn't support kernel version 2.4) turning the battery off show us the 'same' performance of the \"pizza-box\" HP DL 380 G5..  an old joke.  40 secs to 2000 insertions: like a chariot.\nWe're finding the cache expansion and batteries (and.. why it's not default???).I think that this thread is now finished.Thanks!-- Daniel Ferreira", "msg_date": "Tue, 20 Jul 2010 15:58:07 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IDE x SAS RAID 0 on HP DL 380 G5 P400i controller\n\tperformance problem" } ]
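A concrete version of the two-script test suggested in the last reply of the thread above: run the same 2000 inserts once as individual transactions and once inside a single BEGIN/COMMIT, timing each file from the shell (for example: time psql -f file.sql). The table and file names are illustrative, and generate_series() is deliberately avoided because the machines in the thread run PostgreSQL 7.2.

CREATE TABLE insert_test (val char(50));

-- inserts_autocommit.sql: one commit, and with honest fsync one disk flush, per row
INSERT INTO insert_test VALUES ('row 0001');
INSERT INTO insert_test VALUES ('row 0002');
-- ... continue up to 'row 2000'

-- inserts_single_tx.sql: one commit for the whole batch
BEGIN;
INSERT INTO insert_test VALUES ('row 0001');
INSERT INTO insert_test VALUES ('row 0002');
-- ... continue up to 'row 2000'
COMMIT;

If both variants take about the same time on the IDE machine, that drive is acknowledging writes before they reach the platter. On the P400i the single-transaction file should be reasonably fast even without the battery-backed cache, while the per-row file only becomes fast once write-back caching is enabled.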
[ { "msg_contents": "Hello,\n\nI don't think this is generally solvable but maybe it is so here goes.\nThe original situation was this:\n\nSELECT something, big_field, complex_function(big_field), rank FROM t1\nUNION ALL SELECT something, big_field, complex_function(big_field), rank\nfrom t2 ORDER BY rank LIMIT small_number;\n\nThis query first fetches all big_field datums and does all\ncomplex_function() calculations on them, then orders then by rank, even\nthough I actually need only small_number of records. There are two\nproblems here: first, selecting for all big_field values requires a lot\nof memory, which is unacceptable, and then, running complex_function()\non all of them takes too long.\n\nI did get rid of unnecessary complex_function() calculations by nesting\nqueries like:\n\nSELECT something, big_field, complex_function(big_field), rank FROM\n(SELECT original_query_without_complex_function_but_with_big_field ORDER\nBY rank LIMIT small_number);\n\nbut this still leaves gathering all the big_field datum from the\noriginal query. I cannot pull big_field out from this subquery because\nit comes from UNION of tables.\n\nAny suggestions?\n\n(I cannot limit big_field with substring() to reduce memory usage\nbecause it messes up complex_function()).\n\n", "msg_date": "Mon, 19 Jul 2010 17:09:32 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Big field, limiting and ordering" }, { "msg_contents": "19.07.10 18:09, Ivan Voras написав(ла):\n> Hello,\n>\n> I don't think this is generally solvable but maybe it is so here goes.\n> The original situation was this:\n>\n> SELECT something, big_field, complex_function(big_field), rank FROM t1\n> UNION ALL SELECT something, big_field, complex_function(big_field), rank\n> from t2 ORDER BY rank LIMIT small_number;\n>\n> This query first fetches all big_field datums and does all\n> complex_function() calculations on them, then orders then by rank, even\n> though I actually need only small_number of records. There are two\n> problems here: first, selecting for all big_field values requires a lot\n> of memory, which is unacceptable, and then, running complex_function()\n> on all of them takes too long.\n>\n> I did get rid of unnecessary complex_function() calculations by nesting\n> queries like:\n>\n> SELECT something, big_field, complex_function(big_field), rank FROM\n> (SELECT original_query_without_complex_function_but_with_big_field ORDER\n> BY rank LIMIT small_number);\n>\n> but this still leaves gathering all the big_field datum from the\n> original query. I cannot pull big_field out from this subquery because\n> it comes from UNION of tables.\n>\n> Any suggestions?\n> \nYou can do the next:\n\nSELECT something, big_field, complex_function(big_field), rank FROM\n(SELECT * from\n(\n(SELECT something, big_field, complex_function(big_field), rank FROM t1 order by rank limit small_number)\nUNION ALL (SELECT something, big_field, complex_function(big_field), rank\nfrom t2 ORDER BY rank LIMIT small_number)\n) a\n\n ORDER\nBY rank LIMIT small_number) b;\n\nSo, you take small_number records from each table, then select small_number best records from resulting set, then do the calculation.\n\nBest regards, Vitalii Tymchyshyn\n\n\n", "msg_date": "Mon, 19 Jul 2010 18:35:42 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big field, limiting and ordering" } ]
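Spelling the reply in the thread above out a little further, with complex_function() hoisted out as well, so that big_field is fetched for at most 2 x small_number rows and the function runs only small_number times. t1, t2, something, big_field, rank and complex_function are the placeholders from the question; 10 stands in for small_number.

SELECT something,
       big_field,
       complex_function(big_field),
       rank
FROM (
    SELECT something, big_field, rank
    FROM (
        (SELECT something, big_field, rank FROM t1 ORDER BY rank LIMIT 10)
        UNION ALL
        (SELECT something, big_field, rank FROM t2 ORDER BY rank LIMIT 10)
    ) AS per_table_best
    ORDER BY rank
    LIMIT 10
) AS overall_best
ORDER BY rank;

With an index on t1 (rank) and on t2 (rank), each inner branch can stop after its first 10 rows instead of reading and sorting the whole table.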
[ { "msg_contents": "Hi All;\n\nwe have a table partitioned by day, the check constraint on the child tables \nlooks like this (this is the may 31st partition):\n\nCHECK \n(stime >= '2010-05-30 00:00:00+00'::timestamp with time zone \n AND stime <= '2010-05-30 23:59:59+00'::timestamp with time zone)\n\n\nWe have a python based app that creates code like this:\n\n select\n *\n from\n table_a a, \n table_b b,\n table_d d\n where a.id = b.id\n and b.id = d.id\n and stime >= timestamp %s at time zone \\'UTC\\'\n and stime < timestamp %s at time zone \\'UTC\\'\n and stime >= timestamp %s at time zone d.name\n and stime < timestamp %s at time zone d.name\n ...\n\n\nso here's my questions:\n\n1) the above app generated query pshows up like this in pg_stat_activity:\n\nand stime >= timestamp E'2010-07-17' at time zone 'UTC' \nand stime < timestamp E'2010-07-21' at time zone 'UTC' \nand stime >= timestamp E'2010-07-18' at time zone d.name \nand stime < timestamp E'2010-07-19' at time zone d.name \n\nwhat's the E'date' from? and why does it show up this way?\n\n\n2) the above query creates a plan that does a sequential scan & filter on \nevery partition. Why won't it only hit the correct partitions? Is it due to \nthe way the date was specified? or maybe the \"at time zone\" syntax?\n\n\nThanks in advance...\n", "msg_date": "Tue, 20 Jul 2010 09:36:07 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "dates and partitioning" }, { "msg_contents": "On Tue, 2010-07-20 at 09:36 -0600, Kevin Kempter wrote:\n> Hi All;\n> \n> we have a table partitioned by day, the check constraint on the child tables \n> looks like this (this is the may 31st partition):\n> \n> CHECK \n> (stime >= '2010-05-30 00:00:00+00'::timestamp with time zone \n> AND stime <= '2010-05-30 23:59:59+00'::timestamp with time zone)\n> \n> \n> We have a python based app that creates code like this:\n> \n> select\n> *\n> from\n> table_a a, \n> table_b b,\n> table_d d\n> where a.id = b.id\n> and b.id = d.id\n> and stime >= timestamp %s at time zone \\'UTC\\'\n> and stime < timestamp %s at time zone \\'UTC\\'\n> and stime >= timestamp %s at time zone d.name\n> and stime < timestamp %s at time zone d.name\n> ...\n> \n> \n> so here's my questions:\n> \n> 1) the above app generated query pshows up like this in pg_stat_activity:\n> \n> and stime >= timestamp E'2010-07-17' at time zone 'UTC' \n> and stime < timestamp E'2010-07-21' at time zone 'UTC' \n> and stime >= timestamp E'2010-07-18' at time zone d.name \n> and stime < timestamp E'2010-07-19' at time zone d.name \n> \n> what's the E'date' from? and why does it show up this way?\n\nThat's E is an escape character. Python is likely putting that in.\n\nSee http://www.postgresql.org/docs/8.4/static/sql-syntax-lexical.html -\nsection 4.1.2.2\n\n> \n> 2) the above query creates a plan that does a sequential scan & filter on \n> every partition. Why won't it only hit the correct partitions? Is it due to \n> the way the date was specified? or maybe the \"at time zone\" syntax?\n\nDo you have constraint_exclusion turned on?\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Tue, 20 Jul 2010 11:55:30 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dates and partitioning" }, { "msg_contents": "Tuesday, July 20, 2010, 5:36:07 PM you wrote:\n\n\n> 2) the above query creates a plan that does a sequential scan & filter on \n> every partition. 
Why won't it only hit the correct partitions? Is it due to\n> the way the date was specified? or maybe the \"at time zone\" syntax?\n\nQuick guess: What is your 'constraint_exclusion' setting?\nWhich version of Postgres?\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Tue, 20 Jul 2010 17:55:48 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dates and partitioning" } ]
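For the pruning question in the thread above, a short sketch of what constraint exclusion can and cannot use; the child CHECK and the stime column come from the thread, while the literals and settings are illustrative.

SHOW constraint_exclusion;             -- 'off' disables pruning entirely
SET constraint_exclusion = partition;  -- the 8.4 default; 'on' also works

-- Usable for pruning: bounds that fold to constants at plan time.
SELECT *
FROM table_a
WHERE stime >= timestamp '2010-07-17' AT TIME ZONE 'UTC'
  AND stime <  timestamp '2010-07-21' AT TIME ZONE 'UTC';

-- Never usable for pruning: a bound that depends on another table's column,
-- because d.name is a per-row value, not a constant.
--   ... AND stime >= timestamp '2010-07-18' AT TIME ZONE d.name

-- Half-open child constraints avoid the one-second gap before midnight:
--   CHECK (stime >= '2010-05-30 00:00:00+00' AND stime < '2010-05-31 00:00:00+00')

So even with exclusion enabled, the two d.name conditions still have to be evaluated row by row in every partition that survives the constant UTC bounds.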
[ { "msg_contents": "Hi there.\n\nI think I found a potential performance gain if the query planner would be optimized. All Tests has been performed with 8.4.1 (and earlier versions) on CentOS 5.3 (x64)\n\nThe following query will run on my database (~250 GB) for ca. 1600 seconds and the sort will result in a disk merge deploying ca. 200 GB of data to the local disk (ca. 180.000 tmp-files)\n\nexplain SELECT DISTINCT t4.objid\nFROM fscsubfile t4, cooobject t6\n NOT EXISTS (\n WHERE t6.objid = t4.objid AND\n t4.fileresporgid = 573936067464397682 AND\n NOT EXISTS (\n SELECT 1\n FROM ataggval q1_1,\n atdateval t5\n WHERE q1_1.objid = t4.objid AND\n q1_1.attrid = 281479288456451 AND\n q1_1.aggrid = 0 AND\n t5.aggrid = q1_1.aggval AND\n t5.objid = t4.objid AND\n t5.attrid = 281479288456447 ) AND\n ((t6.objclassid IN (285774255832590,285774255764301))) AND\n ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n ORDER BY t4.objid;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nUnique (cost=2592137103.99..2592137104.00 rows=1 width=8)\n -> Sort (cost=2592137103.99..2592137104.00 rows=1 width=8)\n Sort Key: t4.objid\n -> Nested Loop (cost=1105592553.38..2592137103.98 rows=1 width=8)\n -> Hash Anti Join (cost=1105592553.38..2592137095.75 rows=1 width=8)\n Hash Cond: ((t4.objid = q1_1.objid) AND (t4.objid = t5.objid))\n -> Bitmap Heap Scan on fscsubfile t4 (cost=154.42...14136.40 rows=5486 width=8)\n Recheck Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Bitmap Index Scan on ind_fscsubfile_filerespons (cost=0.00..153.05 rows=5486 width=0)\n Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Hash (cost=11917516.57..11917516.57 rows=55006045159 width=16)\n -> Nested Loop (cost=0.00..11917516.57 rows=55006045159 width=16)\n -> Seq Scan on atdateval t5 (cost=0.00...294152.40 rows=1859934 width=12)\n Filter: (attrid = 281479288456447::bigint)\n -> Index Scan using ind_ataggval on ataggval q1_1 (cost=0.00..6.20 rows=4 width=12)\n Index Cond: ((q1_1.attrid = 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))\n Filter: (q1_1.aggrid = 0)\n -> Index Scan using cooobjectix on cooobject t6 (cost=0.00..8.22 rows=1 width=8)\n Index Cond: (t6.objid = t4.objid)\n Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n(20 rows)\n\n\nAs the disks pace is limited on my test system I can't provide the \"explain analyze\" output\nIf I change the query as follows the query takes only 12 seconds and only needs 2 tmp files for sorting.\n(Changed lines are marked with [!!!!!] 
as I don't know HTML-Mails will be delivered without conversion\n\nexplain SELECT DISTINCT t4.objid\nFROM fscsubfile t4, cooobject t6\nWHERE t6.objid = t4.objid AND\nt4.fileresporgid = 573936067464397682 AND\n NOT EXISTS (\n SELECT 1\n FROM ataggval q1_1,\n atdateval t5\n WHERE q1_1.objid = t4.objid AND\n q1_1.attrid = 281479288456451 AND\n q1_1.aggrid = 0 AND\n t5.aggrid = q1_1.aggval AND\n t5.objid = q1_1.objid AND [!!!!!]\n t5.attrid = 281479288456447 ) AND\n ((t6.objclassid IN (285774255832590,285774255764301))) AND\n ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n ORDER BY t4.objid;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\nUnique (cost=918320.29..971968.88 rows=1 width=8)\n -> Nested Loop (cost=918320.29..971968.88 rows=1 width=8)\n -> Merge Anti Join (cost=918320.29..971960.65 rows=1 width=8)\n Merge Cond: (t4.objid = q1_1.objid)\n -> Index Scan using ind_fscsubfile_filerespons on fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8)\n Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Materialize (cost=912418.42..956599.36 rows=22689 width=8)\n -> Merge Join (cost=912418.42..956372.47 rows=22689 width=8)\n Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n -> Sort (cost=402024.80..406674.63 rows=1859934 width=12)\n Sort Key: t5.objid, t5.aggrid\n -> Bitmap Heap Scan on atdateval t5 (cost=43749.07..176555.24 rows=1859934 width=12)\n Recheck Cond: (attrid = 281479288456447::bigint)\n -> Bitmap Index Scan on ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0)\n Index Cond: (attrid = 281479288456447::bigint)\n -> Materialize (cost=510392.25..531663.97 rows=1701738 width=12)\n -> Sort (cost=510392.25..514646.59 rows=1701738 width=12)\n Sort Key: q1_1.objid, q1_1.aggval\n -> Bitmap Heap Scan on ataggval q1_1 (cost=44666.00..305189.47 rows=1701738 width=12)\n Recheck Cond: (attrid = 281479288456451::bigint)\n Filter: (aggrid = 0)\n -> Bitmap Index Scan on ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0)\n Index Cond: (attrid = 281479288456451::bigint)\n -> Index Scan using cooobjectix on cooobject t6 (cost=0.00..8.22 rows=1 width=8)\n Index Cond: (t6.objid = t4.objid)\n Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n(26 rows)\n\nexplain analyze SELECT DISTINCT t4.objid\nFROM fscsubfile t4, cooobject t6\nWHERE t6.objid = t4.objid AND\nt4.fileresporgid = 573936067464397682 AND\n NOT EXISTS (\n SELECT 1\n FROM ataggval q1_1,\n atdateval t5\n WHERE q1_1.objid = t4.objid AND\n q1_1.attrid = 281479288456451 AND\n q1_1.aggrid = 0 AND\n t5.aggrid = q1_1.aggval AND\n t5.objid = q1_1.objid AND [!!!!!]\n t5.attrid = 281479288456447 ) AND\n((t6.objclassid IN (285774255832590,285774255764301))) AND\n((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\nORDER BY t4.objid;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nUnique (cost=918320.29..971968.88 rows=1 width=8) (actual time=12079.598..12083.048 rows=64 loops=1)\n -> Nested Loop (cost=918320.29..971968.88 rows=1 width=8) (actual time=12079.594..12083.010 rows=64 loops=1)\n -> Merge Anti Join (cost=918320.29..971960.65 rows=1 width=8) (actual time=12037.524..12081.989 rows=108 
loops=1)\n Merge Cond: (t4.objid = q1_1.objid)\n -> Index Scan using ind_fscsubfile_filerespons on fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8) (actual time=0.073..83.498 rows=63436 loops=1)\n Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Materialize (cost=912418.42..956599.36 rows=22689 width=8) (actual time=8866.253..11753.055 rows=1299685 loops=1)\n -> Merge Join (cost=912418.42..956372.47 rows=22689 width=8) (actual time=8866.246..11413.397 rows=1299685 loops=1)\n Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n -> Sort (cost=402024.80..406674.63 rows=1859934 width=12) (actual time=3133.362..3774.076 rows=1299685 loops=1)\n Sort Key: t5.objid, t5.aggrid\n Sort Method: external merge Disk: 47192kB\n -> Bitmap Heap Scan on atdateval t5 (cost=43749.07..176555.24 rows=1859934 width=12) (actual time=282.454..1079.038 rows=1857906 loops=1)\n Recheck Cond: (attrid = 281479288456447::bigint)\n -> Bitmap Index Scan on ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0) (actual time=258.749...258.749 rows=1857906 loops=1)\n Index Cond: (attrid = 281479288456447::bigint)\n -> Materialize (cost=510392.25..531663.97 rows=1701738 width=12) (actual time=5732.872..6683.784 rows=1299685 loops=1)\n -> Sort (cost=510392.25..514646.59 rows=1701738 width=12) (actual time=5732.866..6387.188 rows=1299685 loops=1)\n Sort Key: q1_1.objid, q1_1.aggval\n Sort Method: external merge Disk: 39920kB\n -> Bitmap Heap Scan on ataggval q1_1 (cost=44666.00..305189.47 rows=1701738 width=12) (actual time=1644.983..3634.044 rows=1857906 loops=1)\n Recheck Cond: (attrid = 281479288456451::bigint)\n Filter: (aggrid = 0)\n -> Bitmap Index Scan on ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0) (actual time=1606.325..1606.325 rows=1877336 loops=1)\n Index Cond: (attrid = 281479288456451::bigint)\n -> Index Scan using cooobjectix on cooobject t6 (cost=0.00..8.22 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=108)\n Index Cond: (t6.objid = t4.objid)\n Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\nTotal runtime: 12108.663 ms\n(29 rows)\n\n\nAnother way to optimize my query is to change it as follows:\n(Once again changes are marked with [!!!!!]\n\nexplain SELECT DISTINCT t4.objid\nFROM fscsubfile t4, cooobject t6\nWHERE t6.objid = t4.objid AND\nt4.fileresporgid = 573936067464397682 AND\n NOT EXISTS (\n SELECT 1\n FROM ataggval q1_1,\n atdateval t5\n WHERE q1_1.objid = t5.objid AND [!!!!!]\n q1_1.attrid = 281479288456451 AND\n q1_1.aggrid = 0 AND\n t5.aggrid = q1_1.aggval AND\n t5.objid = t4.objid AND\n t5.attrid = 281479288456447 ) AND\n ((t6.objclassid IN (285774255832590,285774255764301))) AND\n ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n ORDER BY t4.objid;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\nUnique (cost=916978.86..969139.72 rows=1 width=8)\n -> Nested Loop (cost=916978.86..969139.72 rows=1 width=8)\n -> Merge Anti Join (cost=916978.86..969131.49 rows=1 width=8)\n Merge Cond: (t4.objid = t5.objid)\n -> Index Scan using ind_fscsubfile_filerespons on fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8)\n Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Materialize 
(cost=912418.42..956599.36 rows=22689 width=8)\n -> Merge Join (cost=912418.42..956372.47 rows=22689 width=8)\n Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n -> Sort (cost=402024.80..406674.63 rows=1859934 width=12)\n Sort Key: t5.objid, t5.aggrid\n -> Bitmap Heap Scan on atdateval t5 (cost=43749.07..176555.24 rows=1859934 width=12)\n Recheck Cond: (attrid = 281479288456447::bigint)\n -> Bitmap Index Scan on ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0)\n Index Cond: (attrid = 281479288456447::bigint)\n -> Materialize (cost=510392.25..531663.97 rows=1701738 width=12)\n -> Sort (cost=510392.25..514646.59 rows=1701738 width=12)\n Sort Key: q1_1.objid, q1_1.aggval\n -> Bitmap Heap Scan on ataggval q1_1 (cost=44666.00..305189.47 rows=1701738 width=12)\n Recheck Cond: (attrid = 281479288456451::bigint)\n Filter: (aggrid = 0)\n -> Bitmap Index Scan on ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0)\n Index Cond: (attrid = 281479288456451::bigint)\n -> Index Scan using cooobjectix on cooobject t6 (cost=0.00..8.22 rows=1 width=8)\n Index Cond: (t6.objid = t4.objid)\n Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n(26 rows)\n\n\nexplain analyze SELECT DISTINCT t4.objid\nFROM fscsubfile t4, cooobject t6\nWHERE t6.objid = t4.objid AND\nt4.fileresporgid = 573936067464397682 AND\n NOT EXISTS (\n SELECT 1\n FROM ataggval q1_1,\n atdateval t5\n WHERE q1_1.objid = t5.objid AND [!!!!!]\n q1_1.attrid = 281479288456451 AND\n q1_1.aggrid = 0 AND\n t5.aggrid = q1_1.aggval AND\n t5.objid = t4.objid AND\n t5.attrid = 281479288456447 ) AND\n((t6.objclassid IN (285774255832590,285774255764301))) AND\n((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\nORDER BY t4.objid;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nUnique (cost=916978.86..969139.72 rows=1 width=8) (actual time=12102.964..12106.409 rows=64 loops=1)\n -> Nested Loop (cost=916978.86..969139.72 rows=1 width=8) (actual time=12102.959..12106.375 rows=64 loops=1)\n -> Merge Anti Join (cost=916978.86..969131.49 rows=1 width=8) (actual time=12060.916..12105.374 rows=108 loops=1)\n Merge Cond: (t4.objid = t5.objid)\n -> Index Scan using ind_fscsubfile_filerespons on fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8) (actual time=0.080..81.397 rows=63436 loops=1)\n Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Materialize (cost=912418.42..956599.36 rows=22689 width=8) (actual time=8874.492..11778.254 rows=1299685 loops=1)\n -> Merge Join (cost=912418.42..956372.47 rows=22689 width=8) (actual time=8874.484..11437.175 rows=1299685 loops=1)\n Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n -> Sort (cost=402024.80..406674.63 rows=1859934 width=12) (actual time=3117.555..3756.062 rows=1299685 loops=1)\n Sort Key: t5.objid, t5.aggrid\n Sort Method: external merge Disk: 39920kB\n -> Bitmap Heap Scan on atdateval t5 (cost=43749.07..176555.24 rows=1859934 width=12) (actual time=289.475..1079.624 rows=1857906 loops=1)\n Recheck Cond: (attrid = 281479288456447::bigint)\n -> Bitmap Index Scan on ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0) (actual time=265.720...265.720 rows=1857906 loops=1)\n Index Cond: (attrid = 281479288456447::bigint)\n -> Materialize (cost=510392.25..531663.97 rows=1701738 
width=12) (actual time=5756.915..6707.864 rows=1299685 loops=1)\n -> Sort (cost=510392.25..514646.59 rows=1701738 width=12) (actual time=5756.909..6409.819 rows=1299685 loops=1)\n Sort Key: q1_1.objid, q1_1.aggval\n Sort Method: external merge Disk: 39920kB\n -> Bitmap Heap Scan on ataggval q1_1 (cost=44666.00..305189.47 rows=1701738 width=12) (actual time=1646.955..3628.918 rows=1857906 loops=1)\n Recheck Cond: (attrid = 281479288456451::bigint)\n Filter: (aggrid = 0)\n -> Bitmap Index Scan on ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0) (actual time=1608.233..1608.233 rows=1877336 loops=1)\n Index Cond: (attrid = 281479288456451::bigint)\n -> Index Scan using cooobjectix on cooobject t6 (cost=0.00..8.22 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=108)\n Index Cond: (t6.objid = t4.objid)\n Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\nTotal runtime: 12129.613 ms\n(29 rows)\n\n\n\nAs the query performs in roughly 12 seconds in both (changed) cases you might advise to change my queries :-)\n(In fact we are working on this)\nAs the primary performance impact can also be reproduced in a small database (querytime > 1 minute) I checked this issue on MS-SQL server and Oracle. On MSSQL server there is no difference in the execution plan if you change the query an the performance is well. Oralce shows a slightly difference but the performance is also well.\nAs I mentioned we are looking forward to change our query but in my opinion there could be a general performance gain if this issue is addressed. (especially if you don't know you run into this issue on the query performance is sufficient enough)\n\ngreets\nArmin\n", "msg_date": "Tue, 20 Jul 2010 16:25:55 +0000", "msg_from": "\"Kneringer, Armin\" <[email protected]>", "msg_from_op": true, "msg_subject": "potential performance gain by query planner optimization" }, { "msg_contents": "Hello\n\n2010/7/20 Kneringer, Armin <[email protected]>:\n> Hi there.\n>\n> I think I found a potential performance gain if the query planner would be optimized. All Tests has been performed with 8.4.1 (and earlier versions) on CentOS 5.3 (x64)\n>\n> The following query will run on my database (~250 GB) for ca. 1600 seconds and the sort will result in a disk merge deploying ca. 200 GB of data to the local disk (ca. 
180.000 tmp-files)\n\ncan you try show check explain with set enable_hashjoin to off; ?\n\nRegards\n\nPavel Stehule\n\n>\n> explain SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n>  NOT EXISTS (\n>  WHERE t6.objid = t4.objid AND\n>  t4.fileresporgid = 573936067464397682 AND\n>   NOT EXISTS (\n>   SELECT 1\n>   FROM ataggval q1_1,\n>   atdateval t5\n>   WHERE q1_1.objid = t4.objid AND\n>   q1_1.attrid = 281479288456451 AND\n>   q1_1.aggrid = 0 AND\n>   t5.aggrid = q1_1.aggval AND\n>   t5.objid = t4.objid AND\n>   t5.attrid = 281479288456447 ) AND\n>  ((t6.objclassid IN (285774255832590,285774255764301))) AND\n>  ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n>  ORDER BY t4.objid;\n>\n>                                                                                  QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique  (cost=2592137103.99..2592137104.00 rows=1 width=8)\n>   ->  Sort  (cost=2592137103.99..2592137104.00 rows=1 width=8)\n>         Sort Key: t4.objid\n>         ->  Nested Loop  (cost=1105592553.38..2592137103.98 rows=1 width=8)\n>               ->  Hash Anti Join  (cost=1105592553.38..2592137095.75 rows=1 width=8)\n>                     Hash Cond: ((t4.objid = q1_1.objid) AND (t4.objid = t5.objid))\n>                     ->  Bitmap Heap Scan on fscsubfile t4  (cost=154.42...14136.40 rows=5486 width=8)\n>                           Recheck Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>                           ->  Bitmap Index Scan on ind_fscsubfile_filerespons  (cost=0.00..153.05 rows=5486 width=0)\n>                                 Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>                     ->  Hash  (cost=11917516.57..11917516.57 rows=55006045159 width=16)\n>                           ->  Nested Loop  (cost=0.00..11917516.57 rows=55006045159 width=16)\n>                                 ->  Seq Scan on atdateval t5  (cost=0.00...294152.40 rows=1859934 width=12)\n>                                       Filter: (attrid = 281479288456447::bigint)\n>                                 ->  Index Scan using ind_ataggval on ataggval q1_1  (cost=0.00..6.20 rows=4 width=12)\n>                                       Index Cond: ((q1_1.attrid = 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))\n>                                       Filter: (q1_1.aggrid = 0)\n>               ->  Index Scan using cooobjectix on cooobject t6  (cost=0.00..8.22 rows=1 width=8)\n>                     Index Cond: (t6.objid = t4.objid)\n>                     Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n> (20 rows)\n>\n>\n> As the disks pace is limited on my test system I can't provide the \"explain analyze\" output\n> If I change the query as follows the query takes only 12 seconds and only needs 2 tmp files for sorting.\n> (Changed lines are marked with [!!!!!] 
as I don't know HTML-Mails will be delivered without conversion\n>\n> explain SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n>   NOT EXISTS (\n>   SELECT 1\n>   FROM ataggval q1_1,\n>   atdateval t5\n>   WHERE q1_1.objid = t4.objid AND\n>   q1_1.attrid = 281479288456451 AND\n>   q1_1.aggrid = 0 AND\n>   t5.aggrid = q1_1.aggval AND\n>   t5.objid = q1_1.objid AND                 [!!!!!]\n>   t5.attrid = 281479288456447 ) AND\n>   ((t6.objclassid IN (285774255832590,285774255764301))) AND\n>   ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n>  ORDER BY t4.objid;\n>                                                                            QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique  (cost=918320.29..971968.88 rows=1 width=8)\n>   ->  Nested Loop  (cost=918320.29..971968.88 rows=1 width=8)\n>         ->  Merge Anti Join  (cost=918320.29..971960.65 rows=1 width=8)\n>               Merge Cond: (t4.objid = q1_1.objid)\n>               ->  Index Scan using ind_fscsubfile_filerespons on fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8)\n>                     Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>               ->  Materialize  (cost=912418.42..956599.36 rows=22689 width=8)\n>                     ->  Merge Join  (cost=912418.42..956372.47 rows=22689 width=8)\n>                           Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n>                           ->  Sort  (cost=402024.80..406674.63 rows=1859934 width=12)\n>                                 Sort Key: t5.objid, t5.aggrid\n>                                 ->  Bitmap Heap Scan on atdateval t5  (cost=43749.07..176555.24 rows=1859934 width=12)\n>                                       Recheck Cond: (attrid = 281479288456447::bigint)\n>                                       ->  Bitmap Index Scan on ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0)\n>                                             Index Cond: (attrid = 281479288456447::bigint)\n>                           ->  Materialize  (cost=510392.25..531663.97 rows=1701738 width=12)\n>                                 ->  Sort  (cost=510392.25..514646.59 rows=1701738 width=12)\n>                                       Sort Key: q1_1.objid, q1_1.aggval\n>                                       ->  Bitmap Heap Scan on ataggval q1_1  (cost=44666.00..305189.47 rows=1701738 width=12)\n>                                             Recheck Cond: (attrid = 281479288456451::bigint)\n>                                             Filter: (aggrid = 0)\n>                                             ->  Bitmap Index Scan on ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0)\n>                                                   Index Cond: (attrid = 281479288456451::bigint)\n>         ->  Index Scan using cooobjectix on cooobject t6  (cost=0.00..8.22 rows=1 width=8)\n>               Index Cond: (t6.objid = t4.objid)\n>               Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n> (26 rows)\n>\n> explain analyze SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n>  NOT EXISTS (\n>  SELECT 1\n>  
FROM ataggval q1_1,\n>  atdateval t5\n>  WHERE q1_1.objid = t4.objid AND\n>  q1_1.attrid = 281479288456451 AND\n>  q1_1.aggrid = 0 AND\n>  t5.aggrid = q1_1.aggval AND\n>  t5.objid = q1_1.objid AND                 [!!!!!]\n>  t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND\n> ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n> ORDER BY t4.objid;\n>                                                                                     QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique  (cost=918320.29..971968.88 rows=1 width=8) (actual time=12079.598..12083.048 rows=64 loops=1)\n>   ->  Nested Loop  (cost=918320.29..971968.88 rows=1 width=8) (actual time=12079.594..12083.010 rows=64 loops=1)\n>         ->  Merge Anti Join  (cost=918320.29..971960.65 rows=1 width=8) (actual time=12037.524..12081.989 rows=108 loops=1)\n>               Merge Cond: (t4.objid = q1_1.objid)\n>               ->  Index Scan using ind_fscsubfile_filerespons on fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8) (actual time=0.073..83.498 rows=63436 loops=1)\n>                     Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>               ->  Materialize  (cost=912418.42..956599.36 rows=22689 width=8) (actual time=8866.253..11753.055 rows=1299685 loops=1)\n>                     ->  Merge Join  (cost=912418.42..956372.47 rows=22689 width=8) (actual time=8866.246..11413.397 rows=1299685 loops=1)\n>                           Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n>                           ->  Sort  (cost=402024.80..406674.63 rows=1859934 width=12) (actual time=3133.362..3774.076 rows=1299685 loops=1)\n>                                 Sort Key: t5.objid, t5.aggrid\n>                                 Sort Method:  external merge  Disk: 47192kB\n>                                 ->  Bitmap Heap Scan on atdateval t5  (cost=43749.07..176555.24 rows=1859934 width=12) (actual time=282.454..1079.038 rows=1857906 loops=1)\n>                                       Recheck Cond: (attrid = 281479288456447::bigint)\n>                                       ->  Bitmap Index Scan on ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0) (actual time=258.749...258.749 rows=1857906 loops=1)\n>                                             Index Cond: (attrid = 281479288456447::bigint)\n>                           ->  Materialize  (cost=510392.25..531663.97 rows=1701738 width=12) (actual time=5732.872..6683.784 rows=1299685 loops=1)\n>                                 ->  Sort  (cost=510392.25..514646.59 rows=1701738 width=12) (actual time=5732.866..6387.188 rows=1299685 loops=1)\n>                                      Sort Key: q1_1.objid, q1_1.aggval\n>                                       Sort Method:  external merge  Disk: 39920kB\n>                                       ->  Bitmap Heap Scan on ataggval q1_1  (cost=44666.00..305189.47 rows=1701738 width=12) (actual time=1644.983..3634.044 rows=1857906 loops=1)\n>                                             Recheck Cond: (attrid = 281479288456451::bigint)\n>                                             Filter: (aggrid = 0)\n>                                             ->  Bitmap Index Scan on ind_ataggval  (cost=0.00..44240.56 rows=1860698 
width=0) (actual time=1606.325..1606.325 rows=1877336 loops=1)\n>                                                   Index Cond: (attrid = 281479288456451::bigint)\n>         ->  Index Scan using cooobjectix on cooobject t6  (cost=0.00..8.22 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=108)\n>               Index Cond: (t6.objid = t4.objid)\n>               Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n> Total runtime: 12108.663 ms\n> (29 rows)\n>\n>\n> Another way to optimize my query is to change it as follows:\n> (Once again changes are marked with [!!!!!]\n>\n> explain SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n>   NOT EXISTS (\n>   SELECT 1\n>   FROM ataggval q1_1,\n>   atdateval t5\n>   WHERE q1_1.objid = t5.objid AND                 [!!!!!]\n>   q1_1.attrid = 281479288456451 AND\n>   q1_1.aggrid = 0 AND\n>   t5.aggrid = q1_1.aggval AND\n>   t5.objid = t4.objid AND\n>   t5.attrid = 281479288456447 ) AND\n>  ((t6.objclassid IN (285774255832590,285774255764301))) AND\n>  ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n>  ORDER BY t4.objid;\n>                                                                            QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique  (cost=916978.86..969139.72 rows=1 width=8)\n>   ->  Nested Loop  (cost=916978.86..969139.72 rows=1 width=8)\n>         ->  Merge Anti Join  (cost=916978.86..969131.49 rows=1 width=8)\n>               Merge Cond: (t4.objid = t5.objid)\n>               ->  Index Scan using ind_fscsubfile_filerespons on fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8)\n>                     Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>               ->  Materialize  (cost=912418.42..956599.36 rows=22689 width=8)\n>                     ->  Merge Join  (cost=912418.42..956372.47 rows=22689 width=8)\n>                           Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n>                           ->  Sort  (cost=402024.80..406674.63 rows=1859934 width=12)\n>                                 Sort Key: t5.objid, t5.aggrid\n>                                 ->  Bitmap Heap Scan on atdateval t5  (cost=43749.07..176555.24 rows=1859934 width=12)\n>                                       Recheck Cond: (attrid = 281479288456447::bigint)\n>                                       ->  Bitmap Index Scan on ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0)\n>                                             Index Cond: (attrid = 281479288456447::bigint)\n>                           ->  Materialize  (cost=510392.25..531663.97 rows=1701738 width=12)\n>                                 ->  Sort  (cost=510392.25..514646.59 rows=1701738 width=12)\n>                                       Sort Key: q1_1.objid, q1_1.aggval\n>                                       ->  Bitmap Heap Scan on ataggval q1_1  (cost=44666.00..305189.47 rows=1701738 width=12)\n>                                             Recheck Cond: (attrid = 281479288456451::bigint)\n>                                             Filter: (aggrid = 0)\n>                                             ->  Bitmap Index Scan on ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0)\n>          
                                         Index Cond: (attrid = 281479288456451::bigint)\n>         ->  Index Scan using cooobjectix on cooobject t6  (cost=0.00..8.22 rows=1 width=8)\n>               Index Cond: (t6.objid = t4.objid)\n>               Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n> (26 rows)\n>\n>\n> explain analyze SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n>  NOT EXISTS (\n>  SELECT 1\n>  FROM ataggval q1_1,\n>  atdateval t5\n>  WHERE q1_1.objid = t5.objid AND                 [!!!!!]\n>  q1_1.attrid = 281479288456451 AND\n>  q1_1.aggrid = 0 AND\n>  t5.aggrid = q1_1.aggval AND\n>  t5.objid = t4.objid AND\n>  t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND\n> ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n> ORDER BY t4.objid;\n>                                                                                     QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Unique  (cost=916978.86..969139.72 rows=1 width=8) (actual time=12102.964..12106.409 rows=64 loops=1)\n>   ->  Nested Loop  (cost=916978.86..969139.72 rows=1 width=8) (actual time=12102.959..12106.375 rows=64 loops=1)\n>         ->  Merge Anti Join  (cost=916978.86..969131.49 rows=1 width=8) (actual time=12060.916..12105.374 rows=108 loops=1)\n>               Merge Cond: (t4.objid = t5.objid)\n>               ->  Index Scan using ind_fscsubfile_filerespons on fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8) (actual time=0.080..81.397 rows=63436 loops=1)\n>                     Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>               ->  Materialize  (cost=912418.42..956599.36 rows=22689 width=8) (actual time=8874.492..11778.254 rows=1299685 loops=1)\n>                     ->  Merge Join  (cost=912418.42..956372.47 rows=22689 width=8) (actual time=8874.484..11437.175 rows=1299685 loops=1)\n>                           Merge Cond: ((t5.objid = q1_1.objid) AND (t5.aggrid = q1_1.aggval))\n>                           ->  Sort  (cost=402024.80..406674.63 rows=1859934 width=12) (actual time=3117.555..3756.062 rows=1299685 loops=1)\n>                                 Sort Key: t5.objid, t5.aggrid\n>                                 Sort Method:  external merge  Disk: 39920kB\n>                                 ->  Bitmap Heap Scan on atdateval t5  (cost=43749.07..176555.24 rows=1859934 width=12) (actual time=289.475..1079.624 rows=1857906 loops=1)\n>                                       Recheck Cond: (attrid = 281479288456447::bigint)\n>                                       ->  Bitmap Index Scan on ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0) (actual time=265.720...265.720 rows=1857906 loops=1)\n>                                             Index Cond: (attrid = 281479288456447::bigint)\n>                           ->  Materialize  (cost=510392.25..531663.97 rows=1701738 width=12) (actual time=5756.915..6707.864 rows=1299685 loops=1)\n>                                 ->  Sort  (cost=510392.25..514646.59 rows=1701738 width=12) (actual time=5756.909..6409.819 rows=1299685 loops=1)\n>                                       Sort Key: q1_1.objid, q1_1.aggval\n>                  
                     Sort Method:  external merge  Disk: 39920kB\n>                                       ->  Bitmap Heap Scan on ataggval q1_1  (cost=44666.00..305189.47 rows=1701738 width=12) (actual time=1646.955..3628.918 rows=1857906 loops=1)\n>                                             Recheck Cond: (attrid = 281479288456451::bigint)\n>                                             Filter: (aggrid = 0)\n>                                             ->  Bitmap Index Scan on ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0) (actual time=1608.233..1608.233 rows=1877336 loops=1)\n>                                                   Index Cond: (attrid = 281479288456451::bigint)\n>         ->  Index Scan using cooobjectix on cooobject t6  (cost=0.00..8.22 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=108)\n>               Index Cond: (t6.objid = t4.objid)\n>               Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n> Total runtime: 12129.613 ms\n> (29 rows)\n>\n>\n>\n> As the query performs in roughly 12 seconds in both (changed) cases you might advise to change my queries :-)\n> (In fact we are working on this)\n> As the primary performance impact can also be reproduced in a small database (querytime > 1 minute) I checked this issue on MS-SQL server and Oracle. On MSSQL server there is no difference in the execution plan if you change the query an the performance is well. Oralce shows a slightly difference but the performance is also well.\n> As I mentioned we are looking forward to change our query but in my opinion there could be a general performance gain if this issue is addressed. (especially if you don't know you run into this issue on the query performance is sufficient enough)\n>\n> greets\n> Armin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 20 Jul 2010 21:39:04 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: potential performance gain by query planner\n\toptimization" }, { "msg_contents": "Hi Pavel,\n\nTurning hashjoin off also does the trick. (the explain output is below)\nMy basic intention was to check if the query planner could be optmized to automatically improve the query processing.\nIn this case all users (espacially those which are not be aware of ineffective query processing e.g. due their own queries) might profit by faster query execution.\nThis is just a thought (or suggestion) for further enhancement. Evt. 
it will be added to the project backlog.\n\nkind regards\nArmin\n\nFor reasons of completeness the eplain output with \"hashjoin off\":\n\n\n# explain analyze SELECT DISTINCT t4.objid FROM fscsubfile t4, cooobject t6 WHERE t6.objid = t4.objid AND t4.fileresporgid = 573936067464397682 AND NOT EXISTS (SELECT 1 FROM ataggval q1_1, atdateval t5 WHERE q1_1.objid = t4.objid AND q1_1.attrid = 281479288456451 AND q1_1.aggrid = 0 AND t5.aggrid = q1_1.aggval AND t5.objid = t4.objid AND t5.attrid = 281479288456447 ) AND ( (t6.objclassid IN (285774255832590,285774255764301))) AND ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952)) ORDER BY t4.objid;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=13639921632.59..14512729357.51 rows=1 width=8) (actual time=14172.154..14468.435 rows=64 loops=1)\n -> Nested Loop (cost=13639921632.59..14512729357.51 rows=1 width=8) (actual time=14172.148..14468.364 rows=64 loops=1)\n -> Merge Anti Join (cost=13639921632.59..14512729349.28 rows=1 width=8) (actual time=14092.764..14108.850 rows=108 loops=1)\n Merge Cond: ((t4.objid = q1_1.objid) AND (t4.objid = t5.objid))\n -> Sort (cost=14477.12..14490.83 rows=5486 width=8) (actual time=100.070..109.200 rows=63436 loops=1)\n Sort Key: t4.objid\n Sort Method: quicksort Memory: 4510kB\n -> Bitmap Heap Scan on fscsubfile t4 (cost=154.42...14136.40 rows=5486 width=8) (actual time=14.645..54.176 rows=63436 loops=1)\n Recheck Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Bitmap Index Scan on ind_fscsubfile_filerespons (cost=0.00..153.05 rows=5486 width=0) (actual time=11.438..11.438 rows=63436 loops=1)\n Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n -> Materialize (cost=13584864369.09..14272439933.58 rows=55006045159 width=16) (actual time=12914.166..13699.719 rows=1299867 loops=1)\n -> Sort (cost=13584864369.09..13722379481.99 rows=55006045159 width=16) (actual time=12914.153..13411.554 rows=1299867 loops=1)\n Sort Key: q1_1.objid, t5.objid\n Sort Method: external merge Disk: 47192kB\n -> Nested Loop (cost=0.00..11917516.57 rows=55006045159 width=16) (actual time=0.621..10505.130 rows=1858326 loops=1)\n -> Seq Scan on atdateval t5 (cost=0.00...294152.40 rows=1859934 width=12) (actual time=0.593..1870.182 rows=1857906 loops=1)\n Filter: (attrid = 281479288456447::bigint)\n -> Index Scan using ind_ataggval on ataggval q1_1 (cost=0.00..6.20 rows=4 width=12) (actual time=0.004..0.004 rows=1 loops=1857906)\n Index Cond: ((q1_1.attrid = 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))\n Filter: (q1_1.aggrid = 0)\n -> Index Scan using cooobjectix on cooobject t6 (cost=0.00..8.22 rows=1 width=8) (actual time=3.327..3.328 rows=1 loops=108)\n Index Cond: (t6.objid = t4.objid)\n Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n Total runtime: 14487.434 ms\n\n\n\n\n\n\n-----Original Message-----\nFrom: Pavel Stehule [mailto:[email protected]]\nSent: Dienstag, 20. 
Juli 2010 21:39\nTo: Kneringer, Armin\nCc: [email protected]\nSubject: Re: [PERFORM] potential performance gain by query planner optimization\n\nHello\n\n2010/7/20 Kneringer, Armin <[email protected]>:\n> Hi there.\n>\n> I think I found a potential performance gain if the query planner\n> would be optimized. All Tests has been performed with 8.4.1 (and\n> earlier versions) on CentOS 5.3 (x64)\n>\n> The following query will run on my database (~250 GB) for ca. 1600\n> seconds and the sort will result in a disk merge deploying ca. 200 GB\n> of data to the local disk (ca. 180.000 tmp-files)\n\ncan you try show check explain with set enable_hashjoin to off; ?\n\nRegards\n\nPavel Stehule\n\n>\n> explain SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> NOT EXISTS (\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n> NOT EXISTS (\n> SELECT 1\n> FROM ataggval q1_1,\n> atdateval t5\n> WHERE q1_1.objid = t4.objid AND\n> q1_1.attrid = 281479288456451 AND\n> q1_1.aggrid = 0 AND\n> t5.aggrid = q1_1.aggval AND\n> t5.objid = t4.objid AND\n> t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND\n> ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n> ORDER BY t4.objid;\n>\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> ----------------------------------\n> Unique (cost=2592137103.99..2592137104.00 rows=1 width=8)\n> -> Sort (cost=2592137103.99..2592137104.00 rows=1 width=8)\n> Sort Key: t4.objid\n> -> Nested Loop (cost=1105592553.38..2592137103.98 rows=1\n> width=8)\n> -> Hash Anti Join (cost=1105592553.38..2592137095.75\n> rows=1 width=8)\n> Hash Cond: ((t4.objid = q1_1.objid) AND (t4.objid\n> = t5.objid))\n> -> Bitmap Heap Scan on fscsubfile t4\n> (cost=154.42...14136.40 rows=5486 width=8)\n> Recheck Cond: ((fileresporgid =\n> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n> AND (objid < 573936101807357952::bigint))\n> -> Bitmap Index Scan on\n> ind_fscsubfile_filerespons (cost=0.00..153.05 rows=5486 width=0)\n> Index Cond: ((fileresporgid =\n> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n> AND (objid < 573936101807357952::bigint))\n> -> Hash (cost=11917516.57..11917516.57\n> rows=55006045159 width=16)\n> -> Nested Loop (cost=0.00..11917516.57\n> rows=55006045159 width=16)\n> -> Seq Scan on atdateval t5\n> (cost=0.00...294152.40 rows=1859934 width=12)\n> Filter: (attrid =\n> 281479288456447::bigint)\n> -> Index Scan using ind_ataggval on\n> ataggval q1_1 (cost=0.00..6.20 rows=4 width=12)\n> Index Cond: ((q1_1.attrid =\n> 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))\n> Filter: (q1_1.aggrid = 0)\n> -> Index Scan using cooobjectix on cooobject t6\n> (cost=0.00..8.22 rows=1 width=8)\n> Index Cond: (t6.objid = t4.objid)\n> Filter: (t6.objclassid = ANY\n> ('{285774255832590,285774255764301}'::bigint[]))\n> (20 rows)\n>\n>\n> As the disks pace is limited on my test system I can't provide the\n> \"explain analyze\" output If I change the query as follows the query takes only 12 seconds and only needs 2 tmp files for sorting.\n> (Changed lines are marked with [!!!!!] 
as I don't know HTML-Mails will\n> be delivered without conversion\n>\n> explain SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n> NOT EXISTS (\n> SELECT 1\n> FROM ataggval q1_1,\n> atdateval t5\n> WHERE q1_1.objid = t4.objid AND\n> q1_1.attrid = 281479288456451 AND\n> q1_1.aggrid = 0 AND\n> t5.aggrid = q1_1.aggval AND\n> t5.objid = q1_1.objid AND [!!!!!]\n> t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND\n> ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n> ORDER BY t4.objid;\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> ---------------------- Unique (cost=918320.29..971968.88 rows=1\n> width=8)\n> -> Nested Loop (cost=918320.29..971968.88 rows=1 width=8)\n> -> Merge Anti Join (cost=918320.29..971960.65 rows=1\n> width=8)\n> Merge Cond: (t4.objid = q1_1.objid)\n> -> Index Scan using ind_fscsubfile_filerespons on\n> fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8)\n> Index Cond: ((fileresporgid =\n> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n> AND (objid < 573936101807357952::bigint))\n> -> Materialize (cost=912418.42..956599.36 rows=22689\n> width=8)\n> -> Merge Join (cost=912418.42..956372.47\n> rows=22689 width=8)\n> Merge Cond: ((t5.objid = q1_1.objid) AND\n> (t5.aggrid = q1_1.aggval))\n> -> Sort (cost=402024.80..406674.63\n> rows=1859934 width=12)\n> Sort Key: t5.objid, t5.aggrid\n> -> Bitmap Heap Scan on atdateval t5\n> (cost=43749.07..176555.24 rows=1859934 width=12)\n> Recheck Cond: (attrid =\n> 281479288456447::bigint)\n> -> Bitmap Index Scan on\n> ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0)\n> Index Cond: (attrid =\n> 281479288456447::bigint)\n> -> Materialize (cost=510392.25..531663.97\n> rows=1701738 width=12)\n> -> Sort (cost=510392.25..514646.59\n> rows=1701738 width=12)\n> Sort Key: q1_1.objid,\n> q1_1.aggval\n> -> Bitmap Heap Scan on ataggval\n> q1_1 (cost=44666.00..305189.47 rows=1701738 width=12)\n> Recheck Cond: (attrid =\n> 281479288456451::bigint)\n> Filter: (aggrid = 0)\n> -> Bitmap Index Scan on\n> ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0)\n> Index Cond: (attrid\n> = 281479288456451::bigint)\n> -> Index Scan using cooobjectix on cooobject t6\n> (cost=0.00..8.22 rows=1 width=8)\n> Index Cond: (t6.objid = t4.objid)\n> Filter: (t6.objclassid = ANY\n> ('{285774255832590,285774255764301}'::bigint[]))\n> (26 rows)\n>\n> explain analyze SELECT DISTINCT t4.objid FROM fscsubfile t4, cooobject\n> t6 WHERE t6.objid = t4.objid AND t4.fileresporgid = 573936067464397682\n> AND\n> NOT EXISTS (\n> SELECT 1\n> FROM ataggval q1_1,\n> atdateval t5\n> WHERE q1_1.objid = t4.objid AND\n> q1_1.attrid = 281479288456451 AND\n> q1_1.aggrid = 0 AND\n> t5.aggrid = q1_1.aggval AND\n> t5.objid = q1_1.objid AND [!!!!!]\n> t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND ((t4.objid\n> > 573936097512390656 and t4.objid < 573936101807357952)) ORDER BY\n> t4.objid;\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> -----------------------------------\n> Unique (cost=918320.29..971968.88 rows=1 width=8) (actual\n> time=12079.598..12083.048 rows=64 loops=1)\n> -> Nested Loop (cost=918320.29..971968.88 rows=1 
width=8) (actual\n> time=12079.594..12083.010 rows=64 loops=1)\n> -> Merge Anti Join (cost=918320.29..971960.65 rows=1\n> width=8) (actual time=12037.524..12081.989 rows=108 loops=1)\n> Merge Cond: (t4.objid = q1_1.objid)\n> -> Index Scan using ind_fscsubfile_filerespons on\n> fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8) (actual\n> time=0.073..83.498 rows=63436 loops=1)\n> Index Cond: ((fileresporgid =\n> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n> AND (objid < 573936101807357952::bigint))\n> -> Materialize (cost=912418.42..956599.36 rows=22689\n> width=8) (actual time=8866.253..11753.055 rows=1299685 loops=1)\n> -> Merge Join (cost=912418.42..956372.47\n> rows=22689 width=8) (actual time=8866.246..11413.397 rows=1299685\n> loops=1)\n> Merge Cond: ((t5.objid = q1_1.objid) AND\n> (t5.aggrid = q1_1.aggval))\n> -> Sort (cost=402024.80..406674.63\n> rows=1859934 width=12) (actual time=3133.362..3774.076 rows=1299685\n> loops=1)\n> Sort Key: t5.objid, t5.aggrid\n> Sort Method: external merge Disk:\n> 47192kB\n> -> Bitmap Heap Scan on atdateval t5\n> (cost=43749.07..176555.24 rows=1859934 width=12) (actual\n> time=282.454..1079.038 rows=1857906 loops=1)\n> Recheck Cond: (attrid =\n> 281479288456447::bigint)\n> -> Bitmap Index Scan on\n> ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0) (actual\n> time=258.749...258.749 rows=1857906 loops=1)\n> Index Cond: (attrid =\n> 281479288456447::bigint)\n> -> Materialize (cost=510392.25..531663.97\n> rows=1701738 width=12) (actual time=5732.872..6683.784 rows=1299685\n> loops=1)\n> -> Sort (cost=510392.25..514646.59\n> rows=1701738 width=12) (actual time=5732.866..6387.188 rows=1299685\n> loops=1)\n> Sort Key: q1_1.objid, q1_1.aggval\n> Sort Method: external merge\n> Disk: 39920kB\n> -> Bitmap Heap Scan on ataggval\n> q1_1 (cost=44666.00..305189.47 rows=1701738 width=12) (actual\n> time=1644.983..3634.044 rows=1857906 loops=1)\n> Recheck Cond: (attrid =\n> 281479288456451::bigint)\n> Filter: (aggrid = 0)\n> -> Bitmap Index Scan on\n> ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0) (actual\n> time=1606.325..1606.325 rows=1877336 loops=1)\n> Index Cond: (attrid\n> = 281479288456451::bigint)\n> -> Index Scan using cooobjectix on cooobject t6\n> (cost=0.00..8.22 rows=1 width=8) (actual time=0.009..0.009 rows=1\n> loops=108)\n> Index Cond: (t6.objid = t4.objid)\n> Filter: (t6.objclassid = ANY\n> ('{285774255832590,285774255764301}'::bigint[]))\n> Total runtime: 12108.663 ms\n> (29 rows)\n>\n>\n> Another way to optimize my query is to change it as follows:\n> (Once again changes are marked with [!!!!!]\n>\n> explain SELECT DISTINCT t4.objid\n> FROM fscsubfile t4, cooobject t6\n> WHERE t6.objid = t4.objid AND\n> t4.fileresporgid = 573936067464397682 AND\n> NOT EXISTS (\n> SELECT 1\n> FROM ataggval q1_1,\n> atdateval t5\n> WHERE q1_1.objid = t5.objid AND [!!!!!]\n> q1_1.attrid = 281479288456451 AND\n> q1_1.aggrid = 0 AND\n> t5.aggrid = q1_1.aggval AND\n> t5.objid = t4.objid AND\n> t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND\n> ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n> ORDER BY t4.objid;\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> ---------------------- Unique (cost=916978.86..969139.72 rows=1\n> width=8)\n> -> Nested Loop (cost=916978.86..969139.72 rows=1 width=8)\n> -> Merge Anti Join (cost=916978.86..969131.49 
rows=1\n> width=8)\n> Merge Cond: (t4.objid = t5.objid)\n> -> Index Scan using ind_fscsubfile_filerespons on\n> fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8)\n> Index Cond: ((fileresporgid =\n> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n> AND (objid < 573936101807357952::bigint))\n> -> Materialize (cost=912418.42..956599.36 rows=22689\n> width=8)\n> -> Merge Join (cost=912418.42..956372.47\n> rows=22689 width=8)\n> Merge Cond: ((t5.objid = q1_1.objid) AND\n> (t5.aggrid = q1_1.aggval))\n> -> Sort (cost=402024.80..406674.63\n> rows=1859934 width=12)\n> Sort Key: t5.objid, t5.aggrid\n> -> Bitmap Heap Scan on atdateval t5\n> (cost=43749.07..176555.24 rows=1859934 width=12)\n> Recheck Cond: (attrid =\n> 281479288456447::bigint)\n> -> Bitmap Index Scan on\n> ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0)\n> Index Cond: (attrid =\n> 281479288456447::bigint)\n> -> Materialize (cost=510392.25..531663.97\n> rows=1701738 width=12)\n> -> Sort (cost=510392.25..514646.59\n> rows=1701738 width=12)\n> Sort Key: q1_1.objid,\n> q1_1.aggval\n> -> Bitmap Heap Scan on ataggval\n> q1_1 (cost=44666.00..305189.47 rows=1701738 width=12)\n> Recheck Cond: (attrid =\n> 281479288456451::bigint)\n> Filter: (aggrid = 0)\n> -> Bitmap Index Scan on\n> ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0)\n> Index Cond: (attrid\n> = 281479288456451::bigint)\n> -> Index Scan using cooobjectix on cooobject t6\n> (cost=0.00..8.22 rows=1 width=8)\n> Index Cond: (t6.objid = t4.objid)\n> Filter: (t6.objclassid = ANY\n> ('{285774255832590,285774255764301}'::bigint[]))\n> (26 rows)\n>\n>\n> explain analyze SELECT DISTINCT t4.objid FROM fscsubfile t4, cooobject\n> t6 WHERE t6.objid = t4.objid AND t4.fileresporgid = 573936067464397682\n> AND\n> NOT EXISTS (\n> SELECT 1\n> FROM ataggval q1_1,\n> atdateval t5\n> WHERE q1_1.objid = t5.objid AND [!!!!!]\n> q1_1.attrid = 281479288456451 AND\n> q1_1.aggrid = 0 AND\n> t5.aggrid = q1_1.aggval AND\n> t5.objid = t4.objid AND\n> t5.attrid = 281479288456447 ) AND\n> ((t6.objclassid IN (285774255832590,285774255764301))) AND ((t4.objid\n> > 573936097512390656 and t4.objid < 573936101807357952)) ORDER BY\n> t4.objid;\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> -----------------------------------\n>\n> Unique (cost=916978.86..969139.72 rows=1 width=8) (actual\n> time=12102.964..12106.409 rows=64 loops=1)\n> -> Nested Loop (cost=916978.86..969139.72 rows=1 width=8) (actual\n> time=12102.959..12106.375 rows=64 loops=1)\n> -> Merge Anti Join (cost=916978.86..969131.49 rows=1\n> width=8) (actual time=12060.916..12105.374 rows=108 loops=1)\n> Merge Cond: (t4.objid = t5.objid)\n> -> Index Scan using ind_fscsubfile_filerespons on\n> fscsubfile t4 (cost=0.00..19016.05 rows=5486 width=8) (actual\n> time=0.080..81.397 rows=63436 loops=1)\n> Index Cond: ((fileresporgid =\n> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n> AND (objid < 573936101807357952::bigint))\n> -> Materialize (cost=912418.42..956599.36 rows=22689\n> width=8) (actual time=8874.492..11778.254 rows=1299685 loops=1)\n> -> Merge Join (cost=912418.42..956372.47\n> rows=22689 width=8) (actual time=8874.484..11437.175 rows=1299685\n> loops=1)\n> Merge Cond: ((t5.objid = q1_1.objid) AND\n> (t5.aggrid = q1_1.aggval))\n> -> Sort (cost=402024.80..406674.63\n> rows=1859934 width=12) (actual time=3117.555..3756.062 rows=1299685\n> loops=1)\n> Sort Key: 
t5.objid, t5.aggrid\n> Sort Method: external merge Disk:\n> 39920kB\n> -> Bitmap Heap Scan on atdateval t5\n> (cost=43749.07..176555.24 rows=1859934 width=12) (actual\n> time=289.475..1079.624 rows=1857906 loops=1)\n> Recheck Cond: (attrid =\n> 281479288456447::bigint)\n> -> Bitmap Index Scan on\n> ind_atdateval (cost=0.00..43284.08 rows=1859934 width=0) (actual\n> time=265.720...265.720 rows=1857906 loops=1)\n> Index Cond: (attrid =\n> 281479288456447::bigint)\n> -> Materialize (cost=510392.25..531663.97\n> rows=1701738 width=12) (actual time=5756.915..6707.864 rows=1299685\n> loops=1)\n> -> Sort (cost=510392.25..514646.59\n> rows=1701738 width=12) (actual time=5756.909..6409.819 rows=1299685\n> loops=1)\n> Sort Key: q1_1.objid,\n> q1_1.aggval\n> Sort Method: external merge\n> Disk: 39920kB\n> -> Bitmap Heap Scan on ataggval\n> q1_1 (cost=44666.00..305189.47 rows=1701738 width=12) (actual\n> time=1646.955..3628.918 rows=1857906 loops=1)\n> Recheck Cond: (attrid =\n> 281479288456451::bigint)\n> Filter: (aggrid = 0)\n> -> Bitmap Index Scan on\n> ind_ataggval (cost=0.00..44240.56 rows=1860698 width=0) (actual\n> time=1608.233..1608.233 rows=1877336 loops=1)\n> Index Cond: (attrid\n> = 281479288456451::bigint)\n> -> Index Scan using cooobjectix on cooobject t6\n> (cost=0.00..8.22 rows=1 width=8) (actual time=0.008..0.009 rows=1\n> loops=108)\n> Index Cond: (t6.objid = t4.objid)\n> Filter: (t6.objclassid = ANY\n> ('{285774255832590,285774255764301}'::bigint[]))\n> Total runtime: 12129.613 ms\n> (29 rows)\n>\n>\n>\n> As the query performs in roughly 12 seconds in both (changed) cases\n> you might advise to change my queries :-) (In fact we are working on\n> this) As the primary performance impact can also be reproduced in a small database (querytime > 1 minute) I checked this issue on MS-SQL server and Oracle. On MSSQL server there is no difference in the execution plan if you change the query an the performance is well. Oralce shows a slightly difference but the performance is also well.\n> As I mentioned we are looking forward to change our query but in my\n> opinion there could be a general performance gain if this issue is\n> addressed. (especially if you don't know you run into this issue on\n> the query performance is sufficient enough)\n>\n> greets\n> Armin\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 21 Jul 2010 16:47:54 +0000", "msg_from": "\"Kneringer, Armin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: potential performance gain by query planner\n optimization" }, { "msg_contents": "2010/7/21 Kneringer, Armin <[email protected]>:\n> Hi Pavel,\n>\n> Turning hashjoin off also does the trick. (the explain output is below)\n> My basic intention was to check if the query planner could be optmized to automatically improve the query processing.\n> In this case all users (espacially those which are not be aware of ineffective query processing e.g. due their own queries) might profit by faster query execution.\n> This is just a thought (or suggestion) for further enhancement. Evt. it will be added to the project backlog.\n>\n\nYou have a problem with inadequate statistics. 
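(A rough sketch of what addressing the inadequate statistics could look like; the statistics target of 1000 and the choice of the attrid columns on ataggval/atdateval below are only illustrative assumptions taken from the column names visible in the plans, not a confirmed recommendation from this thread:

ALTER TABLE ataggval  ALTER COLUMN attrid SET STATISTICS 1000;
ALTER TABLE atdateval ALTER COLUMN attrid SET STATISTICS 1000;
ANALYZE ataggval;
ANALYZE atdateval;

After re-analyzing, re-running EXPLAIN ANALYZE would show whether the planner's row estimates for the anti-join move closer to the actual row counts.)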
Somewhere optimalizer\nprefer hash join (available for sets less than work_mem), but try to\nstore to much data to hash tables and system will to use a swap :(.\n\nRegards\nPavel Stehule\n\n> kind regards\n> Armin\n>\n> For reasons of completeness the eplain output with \"hashjoin off\":\n>\n>\n> # explain analyze SELECT DISTINCT t4.objid FROM fscsubfile t4, cooobject t6 WHERE t6.objid = t4.objid AND t4.fileresporgid = 573936067464397682 AND NOT EXISTS (SELECT 1 FROM ataggval q1_1, atdateval t5 WHERE q1_1.objid = t4.objid AND q1_1.attrid = 281479288456451 AND q1_1.aggrid = 0 AND t5.aggrid = q1_1.aggval AND t5.objid = t4.objid AND t5.attrid = 281479288456447 ) AND ( (t6.objclassid IN (285774255832590,285774255764301))) AND ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952)) ORDER BY t4.objid;\n>                                                                                  QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Unique  (cost=13639921632.59..14512729357.51 rows=1 width=8) (actual time=14172.154..14468.435 rows=64 loops=1)\n>   ->  Nested Loop  (cost=13639921632.59..14512729357.51 rows=1 width=8) (actual time=14172.148..14468.364 rows=64 loops=1)\n>         ->  Merge Anti Join  (cost=13639921632.59..14512729349.28 rows=1 width=8) (actual time=14092.764..14108.850 rows=108 loops=1)\n>               Merge Cond: ((t4.objid = q1_1.objid) AND (t4.objid = t5.objid))\n>               ->  Sort  (cost=14477.12..14490.83 rows=5486 width=8) (actual time=100.070..109.200 rows=63436 loops=1)\n>                     Sort Key: t4.objid\n>                     Sort Method:  quicksort  Memory: 4510kB\n>                     ->  Bitmap Heap Scan on fscsubfile t4  (cost=154.42...14136.40 rows=5486 width=8) (actual time=14.645..54.176 rows=63436 loops=1)\n>                           Recheck Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>                           ->  Bitmap Index Scan on ind_fscsubfile_filerespons  (cost=0.00..153.05 rows=5486 width=0) (actual time=11.438..11.438 rows=63436 loops=1)\n>                                 Index Cond: ((fileresporgid = 573936067464397682::bigint) AND (objid > 573936097512390656::bigint) AND (objid < 573936101807357952::bigint))\n>               ->  Materialize  (cost=13584864369.09..14272439933.58 rows=55006045159 width=16) (actual time=12914.166..13699.719 rows=1299867 loops=1)\n>                     ->  Sort  (cost=13584864369.09..13722379481.99 rows=55006045159 width=16) (actual time=12914.153..13411.554 rows=1299867 loops=1)\n>                           Sort Key: q1_1.objid, t5.objid\n>                           Sort Method:  external merge  Disk: 47192kB\n>                           ->  Nested Loop  (cost=0.00..11917516.57 rows=55006045159 width=16) (actual time=0.621..10505.130 rows=1858326 loops=1)\n>                                 ->  Seq Scan on atdateval t5  (cost=0.00...294152.40 rows=1859934 width=12) (actual time=0.593..1870.182 rows=1857906 loops=1)\n>                                       Filter: (attrid = 281479288456447::bigint)\n>                                 ->  Index Scan using ind_ataggval on ataggval q1_1  (cost=0.00..6.20 rows=4 width=12) (actual time=0.004..0.004 rows=1 loops=1857906)\n>                                       Index Cond: ((q1_1.attrid = 281479288456451::bigint) AND 
(q1_1.aggval = t5.aggrid))\n>                                       Filter: (q1_1.aggrid = 0)\n>         ->  Index Scan using cooobjectix on cooobject t6  (cost=0.00..8.22 rows=1 width=8) (actual time=3.327..3.328 rows=1 loops=108)\n>               Index Cond: (t6.objid = t4.objid)\n>               Filter: (t6.objclassid = ANY ('{285774255832590,285774255764301}'::bigint[]))\n>  Total runtime: 14487.434 ms\n>\n>\n>\n>\n>\n>\n> -----Original Message-----\n> From: Pavel Stehule [mailto:[email protected]]\n> Sent: Dienstag, 20. Juli 2010 21:39\n> To: Kneringer, Armin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] potential performance gain by query planner optimization\n>\n> Hello\n>\n> 2010/7/20 Kneringer, Armin <[email protected]>:\n>> Hi there.\n>>\n>> I think I found a potential performance gain if the query planner\n>> would be optimized. All Tests has been performed with 8.4.1 (and\n>> earlier versions) on CentOS 5.3 (x64)\n>>\n>> The following query will run on my database (~250 GB) for ca. 1600\n>> seconds and the sort will result in a disk merge deploying ca. 200 GB\n>> of data to the local disk (ca. 180.000 tmp-files)\n>\n> can you try show check explain with set enable_hashjoin to off;   ?\n>\n> Regards\n>\n> Pavel Stehule\n>\n>>\n>> explain SELECT DISTINCT t4.objid\n>> FROM fscsubfile t4, cooobject t6\n>>  NOT EXISTS (\n>>  WHERE t6.objid = t4.objid AND\n>>  t4.fileresporgid = 573936067464397682 AND\n>>   NOT EXISTS (\n>>   SELECT 1\n>>   FROM ataggval q1_1,\n>>   atdateval t5\n>>   WHERE q1_1.objid = t4.objid AND\n>>   q1_1.attrid = 281479288456451 AND\n>>   q1_1.aggrid = 0 AND\n>>   t5.aggrid = q1_1.aggval AND\n>>   t5.objid = t4.objid AND\n>>   t5.attrid = 281479288456447 ) AND\n>>  ((t6.objclassid IN (285774255832590,285774255764301))) AND\n>>  ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n>>  ORDER BY t4.objid;\n>>\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------\n>> ----------------------------------------------------------------------\n>> ----------------------------------\n>> Unique  (cost=2592137103.99..2592137104.00 rows=1 width=8)\n>>   ->  Sort  (cost=2592137103.99..2592137104.00 rows=1 width=8)\n>>         Sort Key: t4.objid\n>>         ->  Nested Loop  (cost=1105592553.38..2592137103.98 rows=1\n>> width=8)\n>>               ->  Hash Anti Join  (cost=1105592553.38..2592137095.75\n>> rows=1 width=8)\n>>                     Hash Cond: ((t4.objid = q1_1.objid) AND (t4.objid\n>> = t5.objid))\n>>                     ->  Bitmap Heap Scan on fscsubfile t4\n>> (cost=154.42...14136.40 rows=5486 width=8)\n>>                           Recheck Cond: ((fileresporgid =\n>> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n>> AND (objid < 573936101807357952::bigint))\n>>                           ->  Bitmap Index Scan on\n>> ind_fscsubfile_filerespons  (cost=0.00..153.05 rows=5486 width=0)\n>>                                 Index Cond: ((fileresporgid =\n>> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n>> AND (objid < 573936101807357952::bigint))\n>>                     ->  Hash  (cost=11917516.57..11917516.57\n>> rows=55006045159 width=16)\n>>                           ->  Nested Loop  (cost=0.00..11917516.57\n>> rows=55006045159 width=16)\n>>                                 ->  Seq Scan on atdateval t5\n>> (cost=0.00...294152.40 rows=1859934 width=12)\n>>                                       Filter: (attrid =\n>> 281479288456447::bigint)\n>>                         
        ->  Index Scan using ind_ataggval on\n>> ataggval q1_1  (cost=0.00..6.20 rows=4 width=12)\n>>                                       Index Cond: ((q1_1.attrid =\n>> 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))\n>>                                       Filter: (q1_1.aggrid = 0)\n>>               ->  Index Scan using cooobjectix on cooobject t6\n>> (cost=0.00..8.22 rows=1 width=8)\n>>                     Index Cond: (t6.objid = t4.objid)\n>>                     Filter: (t6.objclassid = ANY\n>> ('{285774255832590,285774255764301}'::bigint[]))\n>> (20 rows)\n>>\n>>\n>> As the disks pace is limited on my test system I can't provide the\n>> \"explain analyze\" output If I change the query as follows the query takes only 12 seconds and only needs 2 tmp files for sorting.\n>> (Changed lines are marked with [!!!!!] as I don't know HTML-Mails will\n>> be delivered without conversion\n>>\n>> explain SELECT DISTINCT t4.objid\n>> FROM fscsubfile t4, cooobject t6\n>> WHERE t6.objid = t4.objid AND\n>> t4.fileresporgid = 573936067464397682 AND\n>>   NOT EXISTS (\n>>   SELECT 1\n>>   FROM ataggval q1_1,\n>>   atdateval t5\n>>   WHERE q1_1.objid = t4.objid AND\n>>   q1_1.attrid = 281479288456451 AND\n>>   q1_1.aggrid = 0 AND\n>>   t5.aggrid = q1_1.aggval AND\n>>   t5.objid = q1_1.objid AND                 [!!!!!]\n>>   t5.attrid = 281479288456447 ) AND\n>>   ((t6.objclassid IN (285774255832590,285774255764301))) AND\n>>   ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n>>  ORDER BY t4.objid;\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------\n>> ----------------------------------------------------------------------\n>> ---------------------- Unique  (cost=918320.29..971968.88 rows=1\n>> width=8)\n>>   ->  Nested Loop  (cost=918320.29..971968.88 rows=1 width=8)\n>>         ->  Merge Anti Join  (cost=918320.29..971960.65 rows=1\n>> width=8)\n>>               Merge Cond: (t4.objid = q1_1.objid)\n>>               ->  Index Scan using ind_fscsubfile_filerespons on\n>> fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8)\n>>                     Index Cond: ((fileresporgid =\n>> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n>> AND (objid < 573936101807357952::bigint))\n>>               ->  Materialize  (cost=912418.42..956599.36 rows=22689\n>> width=8)\n>>                     ->  Merge Join  (cost=912418.42..956372.47\n>> rows=22689 width=8)\n>>                           Merge Cond: ((t5.objid = q1_1.objid) AND\n>> (t5.aggrid = q1_1.aggval))\n>>                           ->  Sort  (cost=402024.80..406674.63\n>> rows=1859934 width=12)\n>>                                 Sort Key: t5.objid, t5.aggrid\n>>                                 ->  Bitmap Heap Scan on atdateval t5\n>> (cost=43749.07..176555.24 rows=1859934 width=12)\n>>                                       Recheck Cond: (attrid =\n>> 281479288456447::bigint)\n>>                                       ->  Bitmap Index Scan on\n>> ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0)\n>>                                             Index Cond: (attrid =\n>> 281479288456447::bigint)\n>>                           ->  Materialize  (cost=510392.25..531663.97\n>> rows=1701738 width=12)\n>>                                 ->  Sort  (cost=510392.25..514646.59\n>> rows=1701738 width=12)\n>>                                       Sort Key: q1_1.objid,\n>> q1_1.aggval\n>>                                       ->  Bitmap Heap Scan on ataggval\n>> q1_1  
(cost=44666.00..305189.47 rows=1701738 width=12)\n>>                                             Recheck Cond: (attrid =\n>> 281479288456451::bigint)\n>>                                             Filter: (aggrid = 0)\n>>                                             ->  Bitmap Index Scan on\n>> ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0)\n>>                                                   Index Cond: (attrid\n>> = 281479288456451::bigint)\n>>         ->  Index Scan using cooobjectix on cooobject t6\n>> (cost=0.00..8.22 rows=1 width=8)\n>>               Index Cond: (t6.objid = t4.objid)\n>>               Filter: (t6.objclassid = ANY\n>> ('{285774255832590,285774255764301}'::bigint[]))\n>> (26 rows)\n>>\n>> explain analyze SELECT DISTINCT t4.objid FROM fscsubfile t4, cooobject\n>> t6 WHERE t6.objid = t4.objid AND t4.fileresporgid = 573936067464397682\n>> AND\n>>  NOT EXISTS (\n>>  SELECT 1\n>>  FROM ataggval q1_1,\n>>  atdateval t5\n>>  WHERE q1_1.objid = t4.objid AND\n>>  q1_1.attrid = 281479288456451 AND\n>>  q1_1.aggrid = 0 AND\n>>  t5.aggrid = q1_1.aggval AND\n>>  t5.objid = q1_1.objid AND                 [!!!!!]\n>>  t5.attrid = 281479288456447 ) AND\n>> ((t6.objclassid IN (285774255832590,285774255764301))) AND ((t4.objid\n>> > 573936097512390656 and t4.objid < 573936101807357952)) ORDER BY\n>> t4.objid;\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------\n>> ----------------------------------------------------------------------\n>> -----------------------------------\n>> Unique  (cost=918320.29..971968.88 rows=1 width=8) (actual\n>> time=12079.598..12083.048 rows=64 loops=1)\n>>   ->  Nested Loop  (cost=918320.29..971968.88 rows=1 width=8) (actual\n>> time=12079.594..12083.010 rows=64 loops=1)\n>>         ->  Merge Anti Join  (cost=918320.29..971960.65 rows=1\n>> width=8) (actual time=12037.524..12081.989 rows=108 loops=1)\n>>               Merge Cond: (t4.objid = q1_1.objid)\n>>               ->  Index Scan using ind_fscsubfile_filerespons on\n>> fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8) (actual\n>> time=0.073..83.498 rows=63436 loops=1)\n>>                     Index Cond: ((fileresporgid =\n>> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n>> AND (objid < 573936101807357952::bigint))\n>>               ->  Materialize  (cost=912418.42..956599.36 rows=22689\n>> width=8) (actual time=8866.253..11753.055 rows=1299685 loops=1)\n>>                     ->  Merge Join  (cost=912418.42..956372.47\n>> rows=22689 width=8) (actual time=8866.246..11413.397 rows=1299685\n>> loops=1)\n>>                           Merge Cond: ((t5.objid = q1_1.objid) AND\n>> (t5.aggrid = q1_1.aggval))\n>>                           ->  Sort  (cost=402024.80..406674.63\n>> rows=1859934 width=12) (actual time=3133.362..3774.076 rows=1299685\n>> loops=1)\n>>                                 Sort Key: t5.objid, t5.aggrid\n>>                                 Sort Method:  external merge  Disk:\n>> 47192kB\n>>                                 ->  Bitmap Heap Scan on atdateval t5\n>> (cost=43749.07..176555.24 rows=1859934 width=12) (actual\n>> time=282.454..1079.038 rows=1857906 loops=1)\n>>                                       Recheck Cond: (attrid =\n>> 281479288456447::bigint)\n>>                                       ->  Bitmap Index Scan on\n>> ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0) (actual\n>> time=258.749...258.749 rows=1857906 loops=1)\n>>                                             Index Cond: (attrid 
=\n>> 281479288456447::bigint)\n>>                           ->  Materialize  (cost=510392.25..531663.97\n>> rows=1701738 width=12) (actual time=5732.872..6683.784 rows=1299685\n>> loops=1)\n>>                                 ->  Sort  (cost=510392.25..514646.59\n>> rows=1701738 width=12) (actual time=5732.866..6387.188 rows=1299685\n>> loops=1)\n>>                                      Sort Key: q1_1.objid, q1_1.aggval\n>>                                       Sort Method:  external merge\n>> Disk: 39920kB\n>>                                       ->  Bitmap Heap Scan on ataggval\n>> q1_1  (cost=44666.00..305189.47 rows=1701738 width=12) (actual\n>> time=1644.983..3634.044 rows=1857906 loops=1)\n>>                                             Recheck Cond: (attrid =\n>> 281479288456451::bigint)\n>>                                             Filter: (aggrid = 0)\n>>                                             ->  Bitmap Index Scan on\n>> ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0) (actual\n>> time=1606.325..1606.325 rows=1877336 loops=1)\n>>                                                   Index Cond: (attrid\n>> = 281479288456451::bigint)\n>>         ->  Index Scan using cooobjectix on cooobject t6\n>> (cost=0.00..8.22 rows=1 width=8) (actual time=0.009..0.009 rows=1\n>> loops=108)\n>>               Index Cond: (t6.objid = t4.objid)\n>>               Filter: (t6.objclassid = ANY\n>> ('{285774255832590,285774255764301}'::bigint[]))\n>> Total runtime: 12108.663 ms\n>> (29 rows)\n>>\n>>\n>> Another way to optimize my query is to change it as follows:\n>> (Once again changes are marked with [!!!!!]\n>>\n>> explain SELECT DISTINCT t4.objid\n>> FROM fscsubfile t4, cooobject t6\n>> WHERE t6.objid = t4.objid AND\n>> t4.fileresporgid = 573936067464397682 AND\n>>   NOT EXISTS (\n>>   SELECT 1\n>>   FROM ataggval q1_1,\n>>   atdateval t5\n>>   WHERE q1_1.objid = t5.objid AND                 [!!!!!]\n>>   q1_1.attrid = 281479288456451 AND\n>>   q1_1.aggrid = 0 AND\n>>   t5.aggrid = q1_1.aggval AND\n>>   t5.objid = t4.objid AND\n>>   t5.attrid = 281479288456447 ) AND\n>>  ((t6.objclassid IN (285774255832590,285774255764301))) AND\n>>  ((t4.objid > 573936097512390656 and t4.objid < 573936101807357952))\n>>  ORDER BY t4.objid;\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------\n>> ----------------------------------------------------------------------\n>> ---------------------- Unique  (cost=916978.86..969139.72 rows=1\n>> width=8)\n>>   ->  Nested Loop  (cost=916978.86..969139.72 rows=1 width=8)\n>>         ->  Merge Anti Join  (cost=916978.86..969131.49 rows=1\n>> width=8)\n>>               Merge Cond: (t4.objid = t5.objid)\n>>               ->  Index Scan using ind_fscsubfile_filerespons on\n>> fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8)\n>>                     Index Cond: ((fileresporgid =\n>> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n>> AND (objid < 573936101807357952::bigint))\n>>               ->  Materialize  (cost=912418.42..956599.36 rows=22689\n>> width=8)\n>>                     ->  Merge Join  (cost=912418.42..956372.47\n>> rows=22689 width=8)\n>>                           Merge Cond: ((t5.objid = q1_1.objid) AND\n>> (t5.aggrid = q1_1.aggval))\n>>                           ->  Sort  (cost=402024.80..406674.63\n>> rows=1859934 width=12)\n>>                                 Sort Key: t5.objid, t5.aggrid\n>>                                 ->  Bitmap Heap Scan on atdateval t5\n>> 
(cost=43749.07..176555.24 rows=1859934 width=12)\n>>                                       Recheck Cond: (attrid =\n>> 281479288456447::bigint)\n>>                                       ->  Bitmap Index Scan on\n>> ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0)\n>>                                             Index Cond: (attrid =\n>> 281479288456447::bigint)\n>>                           ->  Materialize  (cost=510392.25..531663.97\n>> rows=1701738 width=12)\n>>                                 ->  Sort  (cost=510392.25..514646.59\n>> rows=1701738 width=12)\n>>                                       Sort Key: q1_1.objid,\n>> q1_1.aggval\n>>                                       ->  Bitmap Heap Scan on ataggval\n>> q1_1  (cost=44666.00..305189.47 rows=1701738 width=12)\n>>                                             Recheck Cond: (attrid =\n>> 281479288456451::bigint)\n>>                                             Filter: (aggrid = 0)\n>>                                             ->  Bitmap Index Scan on\n>> ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0)\n>>                                                   Index Cond: (attrid\n>> = 281479288456451::bigint)\n>>         ->  Index Scan using cooobjectix on cooobject t6\n>> (cost=0.00..8.22 rows=1 width=8)\n>>               Index Cond: (t6.objid = t4.objid)\n>>               Filter: (t6.objclassid = ANY\n>> ('{285774255832590,285774255764301}'::bigint[]))\n>> (26 rows)\n>>\n>>\n>> explain analyze SELECT DISTINCT t4.objid FROM fscsubfile t4, cooobject\n>> t6 WHERE t6.objid = t4.objid AND t4.fileresporgid = 573936067464397682\n>> AND\n>>  NOT EXISTS (\n>>  SELECT 1\n>>  FROM ataggval q1_1,\n>>  atdateval t5\n>>  WHERE q1_1.objid = t5.objid AND                 [!!!!!]\n>>  q1_1.attrid = 281479288456451 AND\n>>  q1_1.aggrid = 0 AND\n>>  t5.aggrid = q1_1.aggval AND\n>>  t5.objid = t4.objid AND\n>>  t5.attrid = 281479288456447 ) AND\n>> ((t6.objclassid IN (285774255832590,285774255764301))) AND ((t4.objid\n>> > 573936097512390656 and t4.objid < 573936101807357952)) ORDER BY\n>> t4.objid;\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------\n>> ----------------------------------------------------------------------\n>> -----------------------------------\n>>\n>> Unique  (cost=916978.86..969139.72 rows=1 width=8) (actual\n>> time=12102.964..12106.409 rows=64 loops=1)\n>>   ->  Nested Loop  (cost=916978.86..969139.72 rows=1 width=8) (actual\n>> time=12102.959..12106.375 rows=64 loops=1)\n>>         ->  Merge Anti Join  (cost=916978.86..969131.49 rows=1\n>> width=8) (actual time=12060.916..12105.374 rows=108 loops=1)\n>>               Merge Cond: (t4.objid = t5.objid)\n>>               ->  Index Scan using ind_fscsubfile_filerespons on\n>> fscsubfile t4  (cost=0.00..19016.05 rows=5486 width=8) (actual\n>> time=0.080..81.397 rows=63436 loops=1)\n>>                     Index Cond: ((fileresporgid =\n>> 573936067464397682::bigint) AND (objid > 573936097512390656::bigint)\n>> AND (objid < 573936101807357952::bigint))\n>>               ->  Materialize  (cost=912418.42..956599.36 rows=22689\n>> width=8) (actual time=8874.492..11778.254 rows=1299685 loops=1)\n>>                     ->  Merge Join  (cost=912418.42..956372.47\n>> rows=22689 width=8) (actual time=8874.484..11437.175 rows=1299685\n>> loops=1)\n>>                           Merge Cond: ((t5.objid = q1_1.objid) AND\n>> (t5.aggrid = q1_1.aggval))\n>>                           ->  Sort  (cost=402024.80..406674.63\n>> rows=1859934 width=12) 
(actual time=3117.555..3756.062 rows=1299685\n>> loops=1)\n>>                                 Sort Key: t5.objid, t5.aggrid\n>>                                 Sort Method:  external merge  Disk:\n>> 39920kB\n>>                                 ->  Bitmap Heap Scan on atdateval t5\n>> (cost=43749.07..176555.24 rows=1859934 width=12) (actual\n>> time=289.475..1079.624 rows=1857906 loops=1)\n>>                                       Recheck Cond: (attrid =\n>> 281479288456447::bigint)\n>>                                       ->  Bitmap Index Scan on\n>> ind_atdateval  (cost=0.00..43284.08 rows=1859934 width=0) (actual\n>> time=265.720...265.720 rows=1857906 loops=1)\n>>                                             Index Cond: (attrid =\n>> 281479288456447::bigint)\n>>                           ->  Materialize  (cost=510392.25..531663.97\n>> rows=1701738 width=12) (actual time=5756.915..6707.864 rows=1299685\n>> loops=1)\n>>                                 ->  Sort  (cost=510392.25..514646.59\n>> rows=1701738 width=12) (actual time=5756.909..6409.819 rows=1299685\n>> loops=1)\n>>                                       Sort Key: q1_1.objid,\n>> q1_1.aggval\n>>                                       Sort Method:  external merge\n>> Disk: 39920kB\n>>                                       ->  Bitmap Heap Scan on ataggval\n>> q1_1  (cost=44666.00..305189.47 rows=1701738 width=12) (actual\n>> time=1646.955..3628.918 rows=1857906 loops=1)\n>>                                             Recheck Cond: (attrid =\n>> 281479288456451::bigint)\n>>                                             Filter: (aggrid = 0)\n>>                                             ->  Bitmap Index Scan on\n>> ind_ataggval  (cost=0.00..44240.56 rows=1860698 width=0) (actual\n>> time=1608.233..1608.233 rows=1877336 loops=1)\n>>                                                   Index Cond: (attrid\n>> = 281479288456451::bigint)\n>>         ->  Index Scan using cooobjectix on cooobject t6\n>> (cost=0.00..8.22 rows=1 width=8) (actual time=0.008..0.009 rows=1\n>> loops=108)\n>>               Index Cond: (t6.objid = t4.objid)\n>>               Filter: (t6.objclassid = ANY\n>> ('{285774255832590,285774255764301}'::bigint[]))\n>> Total runtime: 12129.613 ms\n>> (29 rows)\n>>\n>>\n>>\n>> As the query performs in roughly 12 seconds in both (changed) cases\n>> you might advise to change my queries :-) (In fact we are working on\n>> this) As the primary performance impact can also be reproduced in a small database (querytime > 1 minute) I checked this issue on MS-SQL server and Oracle. On MSSQL server there is no difference in the execution plan if you change the query an the performance is well. Oralce shows a slightly difference but the performance is also well.\n>> As I mentioned we are looking forward to change our query but in my\n>> opinion there could be a general performance gain if this issue is\n>> addressed. 
(especially if you don't know you run into this issue on\n>> the query performance is sufficient enough)\n>>\n>> greets\n>> Armin\n>>\n>> --\n>> Sent via pgsql-performance mailing list\n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n", "msg_date": "Wed, 21 Jul 2010 19:21:55 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: potential performance gain by query planner\n\toptimization" }, { "msg_contents": "\"Kneringer, Armin\" <[email protected]> writes:\n> I think I found a potential performance gain if the query planner would be optimized. All Tests has been performed with 8.4.1 (and earlier versions) on CentOS 5.3 (x64)\n\n> The following query will run on my database (~250 GB) for ca. 1600 seconds and the sort will result in a disk merge deploying ca. 200 GB of data to the local disk (ca. 180.000 tmp-files)\n\nWhat have you got work_mem set to? It looks like you must be using an\nunreasonably large value, else the planner wouldn't have tried to use a\nhash join here:\n\n> -> Hash (cost=11917516.57..11917516.57 rows=55006045159 width=16)\n> -> Nested Loop (cost=0.00..11917516.57 rows=55006045159 width=16)\n> -> Seq Scan on atdateval t5 (cost=0.00...294152.40 rows=1859934 width=12)\n> Filter: (attrid = 281479288456447::bigint)\n> -> Index Scan using ind_ataggval on ataggval q1_1 (cost=0.00..6.20 rows=4 width=12)\n> Index Cond: ((q1_1.attrid = 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))\n> Filter: (q1_1.aggrid = 0)\n\nAlso, please try something newer than 8.4.1 --- this might be some\nalready-fixed bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 2010 14:25:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: potential performance gain by query planner optimization " } ]
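A minimal sketch of how to act on the work_mem question raised above, assuming nothing more than a psql session on the same 8.4 server; the 64MB figure is an illustrative guess, not a value recommended in the thread:

SHOW work_mem;
SELECT name, setting, unit, source
  FROM pg_settings
 WHERE name IN ('work_mem', 'enable_hashjoin');

BEGIN;
SET LOCAL work_mem = '64MB';   -- assumed trial value; only visible inside this transaction
-- re-run the full EXPLAIN (ANALYZE) of the slow query here and check whether
-- the planner still chooses to build the huge Hash node
ROLLBACK;

Using SET LOCAL inside a transaction keeps the experiment from leaking into other sessions or into later statements in the same session.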
[ { "msg_contents": "Hi,\nI have a question concerning the uses of indexes in Postgresql.\nI red that in PG a query can not use more than one index per table: \"a \nquery or data manipulation command can use at most one index per table\".\nActually I found this a little weird and unfortunately I could not find \nfurther explanation with my Google searches. But the tests I made proved \nthat this is true:\n\nIf we have table :\n\n*create table test_index(col_1 integer, col_2 integer, col_3 integer, \ncol_4 integer)*\n\n\nand we have 2 indexes created on this table:\n\n*create index idx_col_1 on test_index (col_1)*\n\n*create index idx_col_2 on test_index (col_2)*\n\nA query like : *select * from test_index where col_1 = 15 and col_2 = \n30* would never use both the indexes. The query plan is:\n\n*\"Index Scan using idx_col_2 on test_index (cost=0.00..8.27 rows=1 \nwidth=16) (actual time=0.092..0.092 rows=0 loops=1)\"*\n\n*\" Index Cond: (col_2 = 30)\"*\n\n*\" Filter: (col_1 = 15)\"*\n\n*\"Total runtime: 0.127 ms\"*\n\nThe query will use *idx_col_2 *only and apply the other condition \nignoring the other index(*idx_col_1*).\n\n\nSo please can you give some more details about this point. Is the above \ncitation true or I misunderstood it?\n\nA next step is what if a query made a join on two tables table1 and \ntable2 (for ex: where table1.id = table2.id and table2.col_2 = 3 and \ntable2.col_3 = 4)?\nWill it use, for table2, the index of the join column (table2.id) only \nand neglect the indexes of the other two columns(col_2 and col_3) \nalthough they are present in the where clause.\n\nThanks for your response,\n\nElias\n\n\n\n\n\n\n\n\n\n\n\nHi,\nI have a question concerning the\nuses of indexes in Postgresql.\nI red that in PG a query can not\nuse more than one index per table: \"a query or data manipulation\ncommand can use at most one index per table\".\nActually I\nfound this a little weird and unfortunately I could not find further\nexplanation with my Google searches. But the tests I made proved that\nthis is true:\nIf we have table :\ncreate table test_index(col_1\ninteger, col_2 integer, col_3 integer, col_4 integer)\n\nand we have 2 indexes created on this\ntable:\ncreate index idx_col_1 on test_index\n(col_1)\ncreate index idx_col_2 on test_index\n(col_2)\nA query like : select * from\ntest_index where col_1 = 15 and col_2 = 30 would never use both\nthe indexes. The query plan is:\n\"Index Scan using idx_col_2 on\ntest_index (cost=0.00..8.27 rows=1 width=16) (actual\ntime=0.092..0.092 rows=0 loops=1)\"\n\" Index Cond: (col_2 = 30)\"\n\" Filter: (col_1 = 15)\"\n\"Total runtime: 0.127 ms\"\nThe query will use idx_col_2\nonly and apply the other condition ignoring the\nother index(idx_col_1).\n\nSo please can you\ngive some more details about this point. 
Is the above citation true\nor I misunderstood it?\n\nA next step is\nwhat if a query made a join on two tables table1 and table2 (for ex:\nwhere table1.id = table2.id and table2.col_2 = 3 and table2.col_3 =\n4)?\nWill it use, for\ntable2, the index of the join column (table2.id) only and neglect the\nindexes of the other two columns(col_2 and col_3) although they are\npresent in the where clause.\nThanks for your\nresponse,\nElias", "msg_date": "Wed, 21 Jul 2010 10:31:07 +0300", "msg_from": "Elias Ghanem <[email protected]>", "msg_from_op": true, "msg_subject": "Using more tha one index per table" }, { "msg_contents": "In response to Elias Ghanem :\n> Hi,\n> I have a question concerning the uses of indexes in Postgresql.\n> I red that in PG a query can not use more than one index per table: \"a query or\n> data manipulation command can use at most one index per table\".\n\nThat's not true, but it's true for MySQL, afaik.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Wed, 21 Jul 2010 09:53:53 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On Wed, Jul 21, 2010 at 12:53 AM, A. Kretschmer <\[email protected]> wrote:\n\n> In response to Elias Ghanem :\n> > Hi,\n> > I have a question concerning the uses of indexes in Postgresql.\n> > I red that in PG a query can not use more than one index per table: \"a\n> query or\n> > data manipulation command can use at most one index per table\".\n>\n> That's not true, but it's true for MySQL, afaik.\n>\n>\n> Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThat is not true either, though MySQL is less good at using bitmap'ed\nindexes. 5.0 can use \"merge indexes\",\n\n-- \nRob Wultsch\[email protected]\n\nOn Wed, Jul 21, 2010 at 12:53 AM, A. Kretschmer <[email protected]> wrote:\n\nIn response to Elias Ghanem :\n> Hi,\n> I have a question concerning the uses of indexes in Postgresql.\n> I red that in PG a query can not use more than one index per table: \"a query or\n> data manipulation command can use at most one index per table\".\n\nThat's not true, but it's true for MySQL, afaik.\n\n\nAndreas\n--\nAndreas Kretschmer\nKontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nThat is not true either, though MySQL is less good at using bitmap'ed indexes. 
5.0 can use \"merge indexes\",-- Rob [email protected]", "msg_date": "Wed, 21 Jul 2010 00:58:16 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On Wed, Jul 21, 2010 at 1:31 AM, Elias Ghanem <[email protected]> wrote:\n> Hi,\n> I have a question concerning the uses of indexes in Postgresql.\n> I red that in PG a query can not use more than one index per table: \"a query\n> or data manipulation command can use at most one index per table\".\n> Actually I found this a little weird and unfortunately I could not find\n> further explanation with my Google searches. But the tests I made proved\n> that this is true:\n>\n> If we have table :\n>\n> create table test_index(col_1 integer, col_2 integer, col_3 integer, col_4\n> integer)\n>\n> and we have 2 indexes created on this table:\n>\n> create index idx_col_1 on test_index (col_1)\n>\n> create index idx_col_2 on test_index (col_2)\n>\n> A query like : select * from test_index where col_1 = 15 and col_2 = 30\n> would never use both the indexes. The query plan is:\n>\n> \"Index Scan using idx_col_2 on test_index (cost=0.00..8.27 rows=1 width=16)\n> (actual time=0.092..0.092 rows=0 loops=1)\"\n>\n> \" Index Cond: (col_2 = 30)\"\n>\n> \" Filter: (col_1 = 15)\"\n>\n> \"Total runtime: 0.127 ms\"\n>\n> The query will use idx_col_2 only and apply the other condition ignoring the\n> other index(idx_col_1).\n>\n> So please can you give some more details about this point. Is the above\n> citation true or I misunderstood it?\n\nWell, it's not really a citation without a source, which you didn't\nprovide. But it's definitely no longer true, and hasn't been for some\nyears now. I think it was 8.0 or 8.1 that introduced bitmap index\nscans. Here's a sample query that uses on from one of my dbs at work.\n\nexplain select * from test_table where id1 between 3047964 and 1261382\nand id2 between 443365 and 452479;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test_table (cost=2497.05..3535.69 rows=870 width=326)\n Recheck Cond: ((id1 >= 3047964) AND (id1 <= 1261382) AND (id2 >=\n443365) AND (id2 <= 452479))\n -> BitmapAnd (cost=2497.05..2497.05 rows=870 width=0)\n -> Bitmap Index Scan on test_table_pkey (cost=0.00..181.67\nrows=13573 width=0)\n Index Cond: ((id1 >= 3047964) AND (id1 <= 1261382))\n -> Bitmap Index Scan on test_table_id2_idx\n(cost=0.00..2314.70 rows=174076 width=0)\n Index Cond: ((id2 >= 443365) AND (id2 <= 452479))\n\n\n> A next step is what if a query made a join on two tables table1 and table2\n> (for ex: where table1.id = table2.id and table2.col_2 = 3 and table2.col_3 =\n> 4)?\n> Will it use, for table2, the index of the join column (table2.id) only and\n> neglect the indexes of the other two columns(col_2 and col_3) although they\n> are present in the where clause.\n\nNone of the behavior of the query planner is written in stone. You'll\nnotice that up above the query planner has estimated costs for each\noperation. Pgsql's query planner will look at various options and\nchoose the cheapest, which may or my not be to use a bitmap index scan\non your query.\n\nExplain select ... will show you the plan.\n\nExplain analyze select ... will show you plan and the actual execution\nof it, so you can compare what the query planner expected and what\nreally happened. 
Note that explain analyze actually runs the query,\nso explain analyze delete will actually delete things. You can get\naround this with a transaction:\n\nbegin;\nexplain analyze delete ... ;\nrollback;\n", "msg_date": "Wed, 21 Jul 2010 02:01:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On Wed, Jul 21, 2010 at 1:58 AM, Rob Wultsch <[email protected]> wrote:\n> On Wed, Jul 21, 2010 at 12:53 AM, A. Kretschmer\n> <[email protected]> wrote:\n>>\n>> In response to Elias Ghanem :\n>> > Hi,\n>> > I have a question concerning the uses of indexes in Postgresql.\n>> > I red that in PG a query can not use more than one index per table: \"a\n>> > query or\n>> > data manipulation command can use at most one index per table\".\n>>\n>> That's not true, but it's true for MySQL, afaik.\n>>\n>\n> That is not true either, though MySQL is less good at using bitmap'ed\n> indexes. 5.0 can use \"merge indexes\",\n\nYeah, the biggest problem MySQL has is that it's got a pretty\nsimplistic query planner so it often makes poor choices.\n\nNote that PostgreSQL on the other hand, has a much smarter query\nplanner. So it usually makes better choices. But when it makes a\nwrong one, it can be a doozie. Luckily, reported strange behavior in\nthe query planner is usually fixed pretty quickly.\n", "msg_date": "Wed, 21 Jul 2010 02:03:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On 7/21/2010 2:31 AM, Elias Ghanem wrote:\n> Hi,\n> I have a question concerning the uses of indexes in Postgresql.\n> I red that in PG a query can not use more than one index per table: \"a\n> query or data manipulation command can use at most one index per table\".\n> Actually I found this a little weird and unfortunately I could not find\n> further explanation with my Google searches. But the tests I made proved\n> that this is true:\n>\n> If we have table :\n>\n> *create table test_index(col_1 integer, col_2 integer, col_3 integer,\n> col_4 integer)*\n>\n>\n> and we have 2 indexes created on this table:\n>\n> *create index idx_col_1 on test_index (col_1)*\n>\n> *create index idx_col_2 on test_index (col_2)*\n>\n> A query like : *select * from test_index where col_1 = 15 and col_2 =\n> 30* would never use both the indexes. The query plan is:\n>\n> *\"Index Scan using idx_col_2 on test_index (cost=0.00..8.27 rows=1\n> width=16) (actual time=0.092..0.092 rows=0 loops=1)\"*\n>\n> *\" Index Cond: (col_2 = 30)\"*\n>\n> *\" Filter: (col_1 = 15)\"*\n>\n> *\"Total runtime: 0.127 ms\"*\n>\n> The query will use *idx_col_2 *only and apply the other condition\n> ignoring the other index(*idx_col_1*).\n>\n>\n> So please can you give some more details about this point. Is the above\n> citation true or I misunderstood it?\n>\n> A next step is what if a query made a join on two tables table1 and\n> table2 (for ex: where table1.id = table2.id and table2.col_2 = 3 and\n> table2.col_3 = 4)?\n> Will it use, for table2, the index of the join column (table2.id) only\n> and neglect the indexes of the other two columns(col_2 and col_3)\n> although they are present in the where clause.\n>\n> Thanks for your response,\n>\n> Elias\n>\n\nAs others have said, it will use more than one index. The question you \nmay have though, is why didnt it?\n\nIts because an index isn't always faster. 
The answer to both your \nquestions (does it get used, and how about in a join) comes down to \nselectivity. If an index can drastically cut down the number of rows \nthen it'll be used, otherwise its faster to scan for the ones you need.\n\nIn your first example:\nselect * from test_index where col_1 = 15 and col_2 = 30\n\nthe planner will use whatever index has the test selectivity. If 100's \nof rows have col_1 = 15, but only 5 rows have col_2 = 30, then its much \nfaster to pull out the 5 rows with col_2 = 30 and just scan them for \ncol_1 = 15.\n\nLets say both are highly selective (there are only a few rows each). \nAgain its not going to be faster to use both indexes:\n\nread the col_1 index for 15\nfetch 5 rows from db.\nread the col2_ index for 30\nfetch different 5 rows from db\nscan/bitmap the 10 rows for the both col_1 and col_2 conditions.\n\nvs:\nread col_1 index for 15\nfetch 5 rows from db.\nscan 5 rows for col_2 condition\n\nThe join case is exactly the same. If the index can be used to reduce \nthe resultset, or make individual row lookups faster, then it'll be used.\n\nI have some description tables, like: (id integer, descr text)\nwith maybe 100 rows in it. PG never uses the unique index on id, it \nalways table scans it... because its faster.\n\n-Andy\n", "msg_date": "Wed, 21 Jul 2010 09:14:11 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "Elias Ghanem wrote:\n>\n> I red that in PG a query can not use more than one index per table: \"a \n> query or data manipulation command can use at most one index per table\".\n>\n\nYou'll find that at the very end of \nhttp://www.postgresql.org/docs/7.4/static/indexes.html and \nhttp://www.postgresql.org/docs/8.0/static/indexes.html ; try \nhttp://www.postgresql.org/docs/8.1/static/indexes.html instead and \nyou'll discover that text has been removed because it was no longer true \nas of this version. If you find yourself at a PostgreSQL documentation \npage, often the search engines link to an older version with outdated \ninformation just because those have had more time accumulate links to \nthem. A useful trick to know is that if you replace the version number \nwith \"current\", you'll get to the latest version most of the time \n(sometimes the name of the page is changed between versions, too, but \nthis isn't that frequent).\n\nSo for this example, \nhttp://www.postgresql.org/docs/current/static/indexes.html will take you \nto the documentation for 8.4, which is the latest released version.\n\nAs for your example, you can't test optimizer behavior with trivial \ntables. The overhead of using the index isn't zero, and it will often \nbe deemed excessive for a small example. So for this:\n\n*\"Index Scan using idx_col_2 on test_index (cost=0.00..8.27 rows=1 \nwidth=16) (actual time=0.092..0.092 rows=0 loops=1)\"*\n\n*\" Index Cond: (col_2 = 30)\"*\n\n*\" Filter: (col_1 = 15)\"*\n\n\n\nOnce it uses the one index, it only expects one row to be returned, at \nwhich point it has no need to use a second index. Faster to just look \nat that row and use some CPU time to determine if it matches. Using the \nsecond index for that instead would require some disk access to look up \nthings in it, which will take longer than running the filter. 
That's \nwhy the second one isn't used.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 21 Jul 2010 10:59:53 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On 21/07/10 22:59, Greg Smith wrote:\n\n> A useful trick to know is that if you replace the version number\n> with \"current\", you'll get to the latest version most of the time\n> (sometimes the name of the page is changed between versions, too, but\n> this isn't that frequent).\n\nThe docs pages could perhaps benefit from an auto-generated note saying:\n\n\"The current version of Pg is 8.4. This documentation is for version\n8.2. Click [here] for documentation on the current version.\"\n\n... or something to that effect. It'd be a nice (and more user-friendly)\nalternative to url twiddling when searches reveal docs for an old\nversion, and might help push the /current/ pages up in search rank too.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 22 Jul 2010 08:47:57 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On 7/21/10 5:47 PM, Craig Ringer wrote:\n> On 21/07/10 22:59, Greg Smith wrote:\n>\n>> A useful trick to know is that if you replace the version number\n>> with \"current\", you'll get to the latest version most of the time\n>> (sometimes the name of the page is changed between versions, too, but\n>> this isn't that frequent).\n>\n> The docs pages could perhaps benefit from an auto-generated note saying:\n>\n> \"The current version of Pg is 8.4. This documentation is for version\n> 8.2. Click [here] for documentation on the current version.\"\n>\n> ... or something to that effect. It'd be a nice (and more user-friendly)\n> alternative to url twiddling when searches reveal docs for an old\n> version, and might help push the /current/ pages up in search rank too.\n\nIn addition, why not use symlinks so that the current version is simply called \"current\", as in\n\n http://www.postgresql.org/docs/current/static/sql-insert.html\n\nIf you google for \"postgres insert\", you get this:\n\n http://www.postgresql.org/docs/8.1/static/sql-insert.html\n\nThe problem is that Google ranks pages based on inbound links, so older versions of Postgres *always* come up before the latest version in page ranking. By using \"current\" and encouraging people to link to that, we could quickly change the Google pagerank so that a search for Postgres would turn up the most-recent version of documentation.\n\nCraig\n", "msg_date": "Wed, 21 Jul 2010 18:03:17 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "Craig James wrote:\n> By using \"current\" and encouraging people to link to that, we could \n> quickly change the Google pagerank so that a search for Postgres would \n> turn up the most-recent version of documentation.\n\nHow do you propose to encourage people to do that? If I had a good \nanswer to that question, I'd already be executing on it. I've made a \nhabit of doing that when writing articles on the wiki, which hopefully \nthemselves become popular and then elevate those links (all of the ones \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for \nexample point to current). 
I don't know how to target \"people who link \nto the PostgreSQL manual\" beyond raising awareness of the issue \nperiodically on these lists, like I did on this thread.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 21 Jul 2010 21:47:45 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "\nOn Jul 21, 2010, at 6:47 PM, Greg Smith wrote:\n\n> Craig James wrote:\n>> By using \"current\" and encouraging people to link to that, we could quickly change the Google pagerank so that a search for Postgres would turn up the most-recent version of documentation.\n> \n> How do you propose to encourage people to do that? If I had a good answer to that question, I'd already be executing on it. \n\nWhen people link to a page, they link to the URL they copy and paste out of the browser address bar.\n\nIf http://postgresql.org/docs/9.0/* were to 302 redirect to http://postgresql.org/docs/current/* while 9.0 is the current release (and similarly for 9.1 and so on) I suspect we'd find many more links to current and fewer links to specific versions after a year or two.\n\n> I've made a habit of doing that when writing articles on the wiki, which hopefully themselves become popular and then elevate those links (all of the ones http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for example point to current). I don't know how to target \"people who link to the PostgreSQL manual\" beyond raising awareness of the issue periodically on these lists, like I did on this thread.\n\nCheers,\n Steve\n\n", "msg_date": "Wed, 21 Jul 2010 19:00:39 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "Steve Atkins wrote:\n> If http://postgresql.org/docs/9.0/* were to 302 redirect to http://postgresql.org/docs/current/* while 9.0 is the current release (and similarly for 9.1 and so on) I suspect we'd find many more links to current and fewer links to specific versions after a year or two.\n> \n\nTrue, but this would leave people with no way to bookmark a permanent \nlink to whatever is the current version, which will represent a \nregression for how some people want the site to work. Also, this and \nthe idea to add a \"this is an old version\" note to each old page will \nend up increasing work for the already overloaded web team managing the \nsite. 
Neither are unreasonable ideas, there's just some subtle bits to \nmaking either happen that would need to be worked out, and I don't know \nwho would have time to work through everything involved.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 21 Jul 2010 22:27:47 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "\nOn Jul 21, 2010, at 7:27 PM, Greg Smith wrote:\n\n> Steve Atkins wrote:\n>> If http://postgresql.org/docs/9.0/* were to 302 redirect to http://postgresql.org/docs/current/* while 9.0 is the current release (and similarly for 9.1 and so on) I suspect we'd find many more links to current and fewer links to specific versions after a year or two.\n>> \n> \n> True, but this would leave people with no way to bookmark a permanent link to whatever is the current version, which will represent a regression for how some people want the site to work.\n\nWell, they'd still be able to link to the specific version with ../9.0/.. and have that link to the version 9.0 docs forever, just not as easily as a copy/paste. That's the whole point, though, to make the wanted behaviour easier than the unwanted.\n\n> Also, this and the idea to add a \"this is an old version\" note to each old page will end up increasing work for the already overloaded web team managing the site. Neither are unreasonable ideas, there's just some subtle bits to making either happen that would need to be worked out, and I don't know who would have time to work through everything involved.\n\nYup. I'm not convinced it's a great idea either - but it's about the only thing that'd get people to link to ../current/.. by default.\n\nCheers,\n Steve\n\n", "msg_date": "Wed, 21 Jul 2010 19:55:42 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On 22/07/10 03:27, Greg Smith wrote:\n> Steve Atkins wrote:\n>> If http://postgresql.org/docs/9.0/* were to 302 redirect to\n>> http://postgresql.org/docs/current/* while 9.0 is the current release\n>> (and similarly for 9.1 and so on) I suspect we'd find many more links\n>> to current and fewer links to specific versions after a year or two.\n>\n> True, but this would leave people with no way to bookmark a permanent\n> link to whatever is the current version, which will represent a\n> regression for how some people want the site to work.\n\nHaving a quick look at the website, a simple change might be to have a \nlarge \"CURRENT MANUALS\" link above all the versioned links. 
That should \nhelp substantially.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 22 Jul 2010 09:35:44 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Using more tha one index per table" }, { "msg_contents": "On Thu, Jul 22, 2010 at 1:35 AM, Richard Huxton <[email protected]> wrote:\n\n> On 22/07/10 03:27, Greg Smith wrote:\n>\n>> Steve Atkins wrote:\n>>\n>>> If http://postgresql.org/docs/9.0/* were to 302 redirect to\n>>> http://postgresql.org/docs/current/* while 9.0 is the current release\n>>> (and similarly for 9.1 and so on) I suspect we'd find many more links\n>>> to current and fewer links to specific versions after a year or two.\n>>>\n>>\n>> True, but this would leave people with no way to bookmark a permanent\n>> link to whatever is the current version, which will represent a\n>> regression for how some people want the site to work.\n>>\n>\n> Having a quick look at the website, a simple change might be to have a\n> large \"CURRENT MANUALS\" link above all the versioned links. That should help\n> substantially.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI suggested a few weeks ago adding a drop down menu for other version of the\nmanual for a page. I have not had time to write a patch, but I think it is\nsomething that MySQL does better that pg.\n\nAs an example take a look at the page on select for MySQL:\nhttp://dev.mysql.com/doc/refman/5.1/en/select.html .\n\nIf you want a earlier or later version they are easily accessible via a link\non the left.\n\n\n-- \nRob Wultsch\[email protected]\n\nOn Thu, Jul 22, 2010 at 1:35 AM, Richard Huxton <[email protected]> wrote:\nOn 22/07/10 03:27, Greg Smith wrote:\n\nSteve Atkins wrote:\n\nIf http://postgresql.org/docs/9.0/* were to 302 redirect to\nhttp://postgresql.org/docs/current/* while 9.0 is the current release\n(and similarly for 9.1 and so on) I suspect we'd find many more links\nto current and fewer links to specific versions after a year or two.\n\n\nTrue, but this would leave people with no way to bookmark a permanent\nlink to whatever is the current version, which will represent a\nregression for how some people want the site to work.\n\n\nHaving a quick look at the website, a simple change might be to have a large \"CURRENT MANUALS\" link above all the versioned links. That should help substantially.\n\n-- \n  Richard Huxton\n  Archonet Ltd\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nI suggested a few weeks ago adding a drop down menu for other version of the manual for a page. 
I have not had time to write a patch, but I think it is something that MySQL does better that pg.\nAs an example take a look at the page on select for MySQL: http://dev.mysql.com/doc/refman/5.1/en/select.html .If you want a earlier or later version they are easily accessible via a link on the left.\n-- Rob [email protected]", "msg_date": "Thu, 22 Jul 2010 02:09:21 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Using more tha one index per table" }, { "msg_contents": "On 7/21/10 6:47 PM, Greg Smith wrote:\n> Craig James wrote:\n>> By using \"current\" and encouraging people to link to that, we could\n>> quickly change the Google pagerank so that a search for Postgres would\n>> turn up the most-recent version of documentation.\n>\n> How do you propose to encourage people to do that? If I had a good\n> answer to that question, I'd already be executing on it. I've made a\n> habit of doing that when writing articles on the wiki, which hopefully\n> themselves become popular and then elevate those links (all of the ones\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for\n> example point to current). I don't know how to target \"people who link\n> to the PostgreSQL manual\" beyond raising awareness of the issue\n> periodically on these lists, like I did on this thread.\n\nYou don't have to get everyone to do it. Just get more people to link to \"current\" than to other versions and you win in the Google ranking.\n\nStart by sending an announcement to every PG mailing list. You'd probably get a couple thousand right away, which by itself might do the trick.\n\nOnce \"current\" reaches the top of the Google ranking, it will cascade: People searching for Postgres documentation will find \"current\" first, and will post links to it, which will further reinforce its popularity.\n\nThere will always be people who link to older versions, but since the versions change frequently and \"current\" lasts forever, its ranking will constantly build until it ultimately wins.\n\nThere's no downside to it. It's easy to do. The other ideas (like putting \"out of date\" disclaimers and such into older versions) might also be useful, but might be a lot of work for just a little more gain. Creating a \"current\" link is simple and in the long run will be very effective. The sooner it starts, the sooner it will gain traction.\n\nCraig\n", "msg_date": "Thu, 22 Jul 2010 09:57:06 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "Craig James schrieb:\n\n>>> A useful trick to know is that if you replace the version number\n>>> with \"current\", you'll get to the latest version most of the time\n>>> (sometimes the name of the page is changed between versions, too, but\n>>> this isn't that frequent).\n>>\n>> The docs pages could perhaps benefit from an auto-generated note saying:\n>>\n>> \"The current version of Pg is 8.4. This documentation is for version\n>> 8.2. Click [here] for documentation on the current version.\"\n>>\n>> ... or something to that effect. 
It'd be a nice (and more user-friendly)\n>> alternative to url twiddling when searches reveal docs for an old\n>> version, and might help push the /current/ pages up in search rank too.\n> \n> In addition, why not use symlinks so that the current version is simply \n> called \"current\", as in\n> \n> http://www.postgresql.org/docs/current/static/sql-insert.html\n> \n> If you google for \"postgres insert\", you get this:\n> \n> http://www.postgresql.org/docs/8.1/static/sql-insert.html\n> \n> The problem is that Google ranks pages based on inbound links, so older \n> versions of Postgres *always* come up before the latest version in page \n> ranking. \n\nSince 2009 you can deal with this by defining the canonical-version. \n(http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html)\n\nGreetings from Germany,\nTorsten\n\n-- \nhttp://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8 \nverschiedenen Datenbanksystemen abstrahiert,\nQueries von Applikationen trennt und automatisch die Query-Ergebnisse \nauswerten kann.\n", "msg_date": "Fri, 23 Jul 2010 11:22:18 +0200", "msg_from": "=?ISO-8859-1?Q?Torsten_Z=FChlsdorff?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On 7/23/10 2:22 AM, Torsten Z�hlsdorff wrote:\n> Craig James schrieb:\n>\n>>>> A useful trick to know is that if you replace the version number\n>>>> with \"current\", you'll get to the latest version most of the time\n>>>> (sometimes the name of the page is changed between versions, too, but\n>>>> this isn't that frequent).\n>>>\n>>> The docs pages could perhaps benefit from an auto-generated note saying:\n>>>\n>>> \"The current version of Pg is 8.4. This documentation is for version\n>>> 8.2. Click [here] for documentation on the current version.\"\n>>>\n>>> ... or something to that effect. It'd be a nice (and more user-friendly)\n>>> alternative to url twiddling when searches reveal docs for an old\n>>> version, and might help push the /current/ pages up in search rank too.\n>>\n>> In addition, why not use symlinks so that the current version is\n>> simply called \"current\", as in\n>>\n>> http://www.postgresql.org/docs/current/static/sql-insert.html\n>>\n>> If you google for \"postgres insert\", you get this:\n>>\n>> http://www.postgresql.org/docs/8.1/static/sql-insert.html\n>>\n>> The problem is that Google ranks pages based on inbound links, so\n>> older versions of Postgres *always* come up before the latest version\n>> in page ranking.\n>\n> Since 2009 you can deal with this by defining the canonical-version.\n> (http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html)\n\nThis is a really cool feature, but it's not what we need. The \"canonical\" refers to the URL, not the web page. 
It's only supposed to be used if you have multiple URLs that are actually the *same* page; the \"canonical\" URL tells Google \"use only this URL for this page.\"\n\nBut in our case, the Postgres manuals for each release have different URLs *and* different content, so the \"canonical URL\" isn't the right solution.\n\nCraig\n\n> Greetings from Germany,\n> Torsten\n>\n\n", "msg_date": "Fri, 23 Jul 2010 14:47:23 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "Craig James schrieb:\n\n>>> The problem is that Google ranks pages based on inbound links, so\n>>> older versions of Postgres *always* come up before the latest version\n>>> in page ranking.\n>>\n>> Since 2009 you can deal with this by defining the canonical-version.\n>> (http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html) \n>>\n> \n> This is a really cool feature, but it's not what we need. The \n> \"canonical\" refers to the URL, not the web page. It's only supposed to \n> be used if you have multiple URLs that are actually the *same* page; the \n> \"canonical\" URL tells Google \"use only this URL for this page.\"\n> \n> But in our case, the Postgres manuals for each release have different \n> URLs *and* different content, so the \"canonical URL\" isn't the right \n> solution.\n\nThis is true, but the content is allowed to change \"a little\". Of course \ntheir is no percentage of allowed changes. But it can be quite much. \nI've used this feature for some clients, which push their content into \nvery different websites and it does work.\nMost of the content of the documentation doesn't change much between the \nreleases. In most cases the canonical will work the way i suggest.\n\nIn case of big changes even the recommandation of using a \"current\" \nversion won't work. Its true that Google ranks pages based on inbound \nlinks. But there are more than 200 other factores, which influence the \nrankings. Most people do not know, that changing most of a sites content \nmakes the inbound links for a long time useless. After big changes in \nthe documentation the \"current\" entry will be droped for some monthes \nand the old entries will appear. But note, that every single site of the \ndocumentation is ranked for itself. From my experience i would expect \nthe canonical-version with better results, than the current-version.\n\nBut the canonical is not the best solution in my opinion. I often edit \nthe urls of some documentations, because i need it for a special \npostgresql version. The documentation clearly misses a version-switch. 
\nCombined with an big note, that the current displayed documentation is \nnot the one of the current postgresql-version, this will be the best \ncompromiss in my opinion.\n\nGreetings from Germany,\nTorsten\n", "msg_date": "Sat, 24 Jul 2010 14:57:38 +0200", "msg_from": "=?ISO-8859-1?Q?Torsten_Z=FChlsdorff?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "On 7/24/10 5:57 AM, Torsten Z�hlsdorff wrote:\n> Craig James schrieb:\n>\n>>>> The problem is that Google ranks pages based on inbound links, so\n>>>> older versions of Postgres *always* come up before the latest version\n>>>> in page ranking.\n>>>\n>>> Since 2009 you can deal with this by defining the canonical-version.\n>>> (http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html)\n>>>\n>>\n>> This is a really cool feature, but it's not what we need. The\n>> \"canonical\" refers to the URL, not the web page. It's only supposed to\n>> be used if you have multiple URLs that are actually the *same* page;\n>> the \"canonical\" URL tells Google \"use only this URL for this page.\"\n>>\n>> But in our case, the Postgres manuals for each release have different\n>> URLs *and* different content, so the \"canonical URL\" isn't the right\n>> solution.\n>\n> This is true, but the content is allowed to change \"a little\". Of course\n> their is no percentage of allowed changes. But it can be quite much.\n> I've used this feature for some clients, which push their content into\n> very different websites and it does work.\n> Most of the content of the documentation doesn't change much between the\n> releases. In most cases the canonical will work the way i suggest.\n>\n> In case of big changes even the recommandation of using a \"current\"\n> version won't work. Its true that Google ranks pages based on inbound\n> links. But there are more than 200 other factores, which influence the\n> rankings. Most people do not know, that changing most of a sites content\n> makes the inbound links for a long time useless. After big changes in\n> the documentation the \"current\" entry will be droped for some monthes\n> and the old entries will appear. But note, that every single site of the\n> documentation is ranked for itself. From my experience i would expect\n> the canonical-version with better results, than the current-version.\n>\n> But the canonical is not the best solution in my opinion. I often edit\n> the urls of some documentations, because i need it for a special\n> postgresql version. The documentation clearly misses a version-switch.\n> Combined with an big note, that the current displayed documentation is\n> not the one of the current postgresql-version, this will be the best\n> compromiss in my opinion.\n\nHere's an idea: Use a \"current\" URL, plus a JavaScript embedded in every page that compares its own URL to the \"current\" URL and, if it doesn't match, does a \"document.write()\" indicating how to find the most-current version.\n\nThat would solve three problems:\n\n 1. There would be a \"current\" version that people could link to.\n 2. If someone found an old version, they would know it and could\n instantly be directed to the current version.\n 3. 
It wouldn't be any burden on the web site maintainers, because\n the JavaScript wouldn't have to be changed.\n\nCraig\n", "msg_date": "Sat, 24 Jul 2010 07:59:36 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> Craig James wrote:\n>> By using \"current\" and encouraging people to link to that, we could\n>> quickly change the Google pagerank so that a search for Postgres would\n>> turn up the most-recent version of documentation.\n>\n> How do you propose to encourage people to do that? \n\nWhat about adding version information in huge letters in the top blue\nbar, with all versions available in smaller letters than the one you're\nlooking at, and with the current version nicely highlighted (color,\nunderlining, subtitle, whatever, we'd have to find a visual hint).\n\nIn other words, make it so big that you don't have to read the page\ncontent to realise what version it is you're looking at. Maybe we would\nneed to have this information stay visible on the page at the same place\nwhen you scroll, too.\n\nRegards,\n-- \ndim\n", "msg_date": "Sun, 25 Jul 2010 10:29:26 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using more tha one index per table" } ]
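To see the multi-index behaviour discussed in this thread on something larger than a trivial table, a self-contained sketch along these lines (it reuses the column and index names from the original post, so run it in a scratch database) will usually produce a BitmapAnd of both single-column indexes:

CREATE TABLE test_index (col_1 integer, col_2 integer, col_3 integer, col_4 integer);
INSERT INTO test_index
  SELECT g % 1000, g % 997, g, g
    FROM generate_series(1, 1000000) AS g;
CREATE INDEX idx_col_1 ON test_index (col_1);
CREATE INDEX idx_col_2 ON test_index (col_2);
ANALYZE test_index;

-- Each condition alone matches about a thousand rows, but the pair together
-- matches only one or two, so combining both indexes is worth the extra index scan.
EXPLAIN ANALYZE
SELECT * FROM test_index WHERE col_1 = 15 AND col_2 = 30;

On a tiny table the planner is still right to ignore the second index, for exactly the selectivity reasons explained earlier in the thread.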
[ { "msg_contents": "Hi\nI need to configure my postgres 8.4 clusters for high performance and sensible memory usage.\n\nDescription:\nThe database is partitioned in regions. Every region has a master and a slave db. The application write:read ratio is 1:50.\nAt the moment I have deployed the app on 3 servers as follows:\nNode1: db master2, db slave1, pgpool2, slony-1 replication of master2 to slave2.\nNode2: db master3, db slave2, pgpool2, slony-1 replication of master3 to slave3.\nNode3: db master1, db slave3, pgpool2, slony-1 replication of master1 to slave1.\n\nThere is an apache server on every Node that serves php pages. Read queries are sent only to its slave\n (which is on the same server, to prevent network traffic).\nNow I must configure the memory parameters of postgres for the master and for the slave, because the slave must use a lot of memory and write the replicated data fast, and the master must be fast at writing. \nAlso, how should I configure max_connections? Now I set it to 250, but I am thinking of decreasing it to 200 because I have pgpool in front of every db server.\nThe database is not so big right now, but it will grow fast. I expect to have 10^5 simultaneous users in the end, but when this happens the hardware will be changed, and maybe other things as well.\nNow the servers are dual core CPU 2.6 with 1.7 GB RAM and 1 standard disk. (the small instance from AWS) \n\nThe slave (read-only) has\n shared_buffers = 128MB\n effective_cache_size = 1GB \n max_connections = 250 \neverything else is default\nThe master (write-only) has default settings\n\n-----------------------------------------------------------------\nDon't be indifferent - fill in the breast cancer survey\nhttp://utrezavseki.org/campaigns.html#anketa_utre_za_vseki\n", "msg_date": "Wed, 21 Jul 2010 15:42:04 +0300 (EEST)", "msg_from": "stanimir petrov <[email protected]>", "msg_from_op": true, "msg_subject": "tune memory usage for master/slave nodes in cluster" }, { "msg_contents": "stanimir petrov wrote:\n> Now the servers are dual core CPU 2.6 with 1.7 GB RAM and 1 standard \n> disk. (the small instance from AWS)\n\nYou're never going to be able to tune for writing data fast on an AWS \nenvironment; there just isn't enough disk throughput available. If this \napplication really does take off the way you expect it to, don't be \nsurprised to find you have to move it to real hardware to keep up. \nDedicated database servers tend to have tens of disks in them to keep up \nwith the sort of load you're expecting, and you just can't get that in a \ncloud environment. You can do some work to improve I/O using multiple \nstorage instances; \nhttp://blog.endpoint.com/2010/02/postgresql-ec2-ebs-raid0-snapshot.html \nis a good introduction to that.\n\nThe basic tuning advice you're looking for is available at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nIf you are trying to get faster writes out of AWS hardware, you may have \nto turn off synchronous_commit to accomplish that. That has some \npotential lost transaction downsides, but simple disks just can't write \ndata that fast so it may be the only way to make this work well.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 21 Jul 2010 11:08:36 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tune memory usage for master/slave nodes in cluster" } ]
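For readers who want a concrete starting point, a postgresql.conf sketch for a 1.7 GB small instance with pgpool in front might look roughly like this; every value is an assumption to check against the wiki page linked above, not a setting endorsed in the thread:

max_connections = 200            # pgpool pools the clients, so fewer backends are needed
shared_buffers = 256MB           # a modest slice of the 1.7 GB of RAM
effective_cache_size = 1GB       # estimate of the OS cache available to the read-mostly slave
work_mem = 4MB                   # per sort/hash node; keep it small with 200 connections
maintenance_work_mem = 64MB      # for VACUUM and index builds
checkpoint_segments = 16         # fewer, larger checkpoints under write load (8.4 parameter)
wal_buffers = 8MB
synchronous_commit = off         # the trade-off described above: faster commits on slow
                                 # disks, at the risk of losing the last few transactions
                                 # after a crash

Whether synchronous_commit = off is acceptable depends on how much recently committed data the application can afford to lose if a node crashes.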
[ { "msg_contents": "\nHi all,\n\nI'm using Postgresql 8.4.4 on Debian.\nIn postgresql.conf, constraint_exclusion is set to \"on\"\n\nI have partitioned tables with check constraints.\nMy primary table :\n CREATE TABLE documents\n (\n id serial NOT NULL,\n id_source smallint,\n nod integer,\n num text,\n id_fourniture integer,\n dav date NOT NULL,\n maj timestamp without time zone NOT NULL DEFAULT now(),\n id_location \"char\",\n id_partition smallint,\n mark text\n );\n\nThere is no row in \"only\" documents :\nSQL> select count(*) from only documents;\n -> 0\nSQL> select count(*) from documents;\n -> 160155756\n\nI have one thousand inherited tables like this one (with a different \ncheck constraint on each) :\n CREATE TABLE documents_mond\n (\n CONSTRAINT documents_mond_id_source_check CHECK (id_source = 113)\n )\n INHERITS (documents);\n CREATE INDEX idx_documents_mond_id\n ON documents_mond\n USING btree\n (id);\n\n CREATE INDEX idx_documents_mond_id_partition\n ON documents_mond\n USING btree\n (id_partition);\n\n CREATE INDEX idx_documents_mond_id_source_dav\n ON documents_mond\n USING btree\n (id_source, dav);\n ALTER TABLE documents_mond CLUSTER ON idx_documents_mond_id_source_dav;\n\n CREATE INDEX idx_documents_mond_id_source_nod\n ON documents_mond\n USING btree\n (id_source, nod);\n\n CREATE INDEX idx_documents_mond_id_source_num\n ON documents_mond\n USING btree\n (id_source, num);\n\n CREATE INDEX idx_documents_mond_maj\n ON documents_mond\n USING btree\n (maj);\n\nSQL> select count(*) from documents_mond;\n -> 1053929\n\nWhen i perform this query on the primary table :\nEXPLAIN ANALYZE\n select\n documents.id,\n documents.num,\n sources.name,\n l.name\n from\n documents,\n locations l,\n sources\n where\n documents.id_source = 113 and\n documents.id_location=l.id and\n documents.id_source=sources.id\n order by\n documents.id desc\n limit 5;\n\"Limit (cost=36209.55..36209.57 rows=5 width=24) (actual \ntime=2307.181..2307.185 rows=5 loops=1)\"\n\" -> Sort (cost=36209.55..36512.56 rows=121202 width=24) (actual \ntime=2307.180..2307.180 rows=5 loops=1)\"\n\" Sort Key: public.documents.id\"\n\" Sort Method: top-N heapsort Memory: 17kB\"\n\" -> Nested Loop (cost=1.52..34196.43 rows=121202 width=24) \n(actual time=0.076..1878.189 rows=1053929 loops=1)\"\n\" -> Index Scan using pk_sources on sources \n(cost=0.00..8.27 rows=1 width=8) (actual time=0.013..0.015 rows=1 loops=1)\"\n\" Index Cond: (id = 113)\"\n\" -> Hash Join (cost=1.52..32976.15 rows=121202 width=22) \n(actual time=0.059..1468.982 rows=1053929 loops=1)\"\n\" Hash Cond: (public.documents.id_location = l.id)\"\n\" -> Append (cost=0.00..27810.36 rows=1053932 \nwidth=14) (actual time=0.031..836.280 rows=1053929 loops=1)\"\n\" -> Seq Scan on documents (cost=0.00..18.25 \nrows=3 width=39) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" Filter: (id_source = 113)\"\n\" -> Seq Scan on documents_mond documents \n(cost=0.00..27792.11 rows=1053929 width=14) (actual time=0.030..503.815 \nrows=1053929 loops=1)\"\n\" Filter: (id_source = 113)\"\n\" -> Hash (cost=1.23..1.23 rows=23 width=10) \n(actual time=0.019..0.019 rows=23 loops=1)\"\n\" -> Seq Scan on locations l (cost=0.00..1.23 \nrows=23 width=10) (actual time=0.001..0.007 rows=23 loops=1)\"\n\"Total runtime: 2307.498 ms\"\n\nAnd when i perform the same query directly on the inherited table (CHECK \nid_source=113) :\nEXPLAIN ANALYZE\n select\n documents.id,\n documents.num,\n sources.name,\n l.name\n from\n documents_mond documents,\n locations l,\n sources\n where\n 
documents.id_source = 113 and\n documents.id_location=l.id and\n documents.id_source=sources.id\n order by\n documents.id desc\n limit 5;\n\"Limit (cost=0.00..43.13 rows=5 width=24) (actual time=0.024..0.050 \nrows=5 loops=1)\"\n\" -> Nested Loop (cost=0.00..9091234.75 rows=1053929 width=24) \n(actual time=0.023..0.049 rows=5 loops=1)\"\n\" -> Nested Loop (cost=0.00..8796038.31 rows=1053929 width=16) \n(actual time=0.020..0.035 rows=5 loops=1)\"\n\" -> Index Scan Backward using idx_documents_mond_id on \ndocuments_mond documents (cost=0.00..71930.23 rows=1053929 width=14) \n(actual time=0.012..0.015 rows=5 loops=1)\"\n\" Filter: (id_source = 113)\"\n\" -> Index Scan using pk_sources on sources \n(cost=0.00..8.27 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=5)\"\n\" Index Cond: (sources.id = 113)\"\n\" -> Index Scan using locations_pkey on locations l \n(cost=0.00..0.27 rows=1 width=10) (actual time=0.001..0.002 rows=1 loops=5)\"\n\" Index Cond: (l.id = documents.id_location)\"\n\"Total runtime: 0.086 ms\"\n\nOR\n\nEXPLAIN ANALYZE\n select\n documents.id,\n documents.num,\n sources.name,\n l.name\n from\n documents_mond documents,\n locations l,\n sources\n where\n /* documents.id_source = 113 and */\n documents.id_location=l.id and\n documents.id_source=sources.id\n order by\n documents.id desc\n limit 5;\n\"Limit (cost=0.00..3.13 rows=5 width=24) (actual time=0.025..0.052 \nrows=5 loops=1)\"\n\" -> Nested Loop (cost=0.00..659850.75 rows=1053929 width=24) (actual \ntime=0.024..0.051 rows=5 loops=1)\"\n\" -> Nested Loop (cost=0.00..364654.31 rows=1053929 width=16) \n(actual time=0.021..0.037 rows=5 loops=1)\"\n\" -> Index Scan Backward using idx_documents_mond_id on \ndocuments_mond documents (cost=0.00..69295.41 rows=1053929 width=14) \n(actual time=0.011..0.013 rows=5 loops=1)\"\n\" -> Index Scan using pk_sources on sources \n(cost=0.00..0.27 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=5)\"\n\" Index Cond: (sources.id = documents.id_source)\"\n\" -> Index Scan using locations_pkey on locations l \n(cost=0.00..0.27 rows=1 width=10) (actual time=0.002..0.002 rows=1 loops=5)\"\n\" Index Cond: (l.id = documents.id_location)\"\n\"Total runtime: 0.091 ms\"\n\nIs it a normal behavior ?\nI need to rewrite all my Perl scripts to have query pointing only on \ninherited tables (when possible) ?\nI was thinking that query pointing on primary table were correctly \ndispatched on inherited tables ... I missing something ?\n\nRegards\n\nPhilippe\n\n\nPs : I'm french, so my english is approximate ... hoping it's understandable\n\n", "msg_date": "Thu, 22 Jul 2010 09:52:40 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Strange explain on partitioned tables" }, { "msg_contents": "Oups! 
searching on the mailing list show me that it's a known problem ...\n\nhttp://archives.postgresql.org/pgsql-performance/2010-07/msg00063.php\n\nsorry !\n\n\n\nOn 22/07/2010 09:52, Philippe Rimbault wrote:\n>\n> Hi all,\n>\n> I'm using Postgresql 8.4.4 on Debian.\n> In postgresql.conf, constraint_exclusion is set to \"on\"\n>\n> I have partitioned tables with check constraints.\n> My primary table :\n> CREATE TABLE documents\n> (\n> id serial NOT NULL,\n> id_source smallint,\n> nod integer,\n> num text,\n> id_fourniture integer,\n> dav date NOT NULL,\n> maj timestamp without time zone NOT NULL DEFAULT now(),\n> id_location \"char\",\n> id_partition smallint,\n> mark text\n> );\n>\n> There is no row in \"only\" documents :\n> SQL> select count(*) from only documents;\n> -> 0\n> SQL> select count(*) from documents;\n> -> 160155756\n>\n> I have one thousand inherited tables like this one (with a different \n> check constraint on each) :\n> CREATE TABLE documents_mond\n> (\n> CONSTRAINT documents_mond_id_source_check CHECK (id_source = 113)\n> )\n> INHERITS (documents);\n> CREATE INDEX idx_documents_mond_id\n> ON documents_mond\n> USING btree\n> (id);\n>\n> CREATE INDEX idx_documents_mond_id_partition\n> ON documents_mond\n> USING btree\n> (id_partition);\n>\n> CREATE INDEX idx_documents_mond_id_source_dav\n> ON documents_mond\n> USING btree\n> (id_source, dav);\n> ALTER TABLE documents_mond CLUSTER ON \n> idx_documents_mond_id_source_dav;\n>\n> CREATE INDEX idx_documents_mond_id_source_nod\n> ON documents_mond\n> USING btree\n> (id_source, nod);\n>\n> CREATE INDEX idx_documents_mond_id_source_num\n> ON documents_mond\n> USING btree\n> (id_source, num);\n>\n> CREATE INDEX idx_documents_mond_maj\n> ON documents_mond\n> USING btree\n> (maj);\n>\n> SQL> select count(*) from documents_mond;\n> -> 1053929\n>\n> When i perform this query on the primary table :\n> EXPLAIN ANALYZE\n> select\n> documents.id,\n> documents.num,\n> sources.name,\n> l.name\n> from\n> documents,\n> locations l,\n> sources\n> where\n> documents.id_source = 113 and\n> documents.id_location=l.id and\n> documents.id_source=sources.id\n> order by\n> documents.id desc\n> limit 5;\n> \"Limit (cost=36209.55..36209.57 rows=5 width=24) (actual \n> time=2307.181..2307.185 rows=5 loops=1)\"\n> \" -> Sort (cost=36209.55..36512.56 rows=121202 width=24) (actual \n> time=2307.180..2307.180 rows=5 loops=1)\"\n> \" Sort Key: public.documents.id\"\n> \" Sort Method: top-N heapsort Memory: 17kB\"\n> \" -> Nested Loop (cost=1.52..34196.43 rows=121202 width=24) \n> (actual time=0.076..1878.189 rows=1053929 loops=1)\"\n> \" -> Index Scan using pk_sources on sources \n> (cost=0.00..8.27 rows=1 width=8) (actual time=0.013..0.015 rows=1 \n> loops=1)\"\n> \" Index Cond: (id = 113)\"\n> \" -> Hash Join (cost=1.52..32976.15 rows=121202 \n> width=22) (actual time=0.059..1468.982 rows=1053929 loops=1)\"\n> \" Hash Cond: (public.documents.id_location = l.id)\"\n> \" -> Append (cost=0.00..27810.36 rows=1053932 \n> width=14) (actual time=0.031..836.280 rows=1053929 loops=1)\"\n> \" -> Seq Scan on documents \n> (cost=0.00..18.25 rows=3 width=39) (actual time=0.001..0.001 rows=0 \n> loops=1)\"\n> \" Filter: (id_source = 113)\"\n> \" -> Seq Scan on documents_mond documents \n> (cost=0.00..27792.11 rows=1053929 width=14) (actual \n> time=0.030..503.815 rows=1053929 loops=1)\"\n> \" Filter: (id_source = 113)\"\n> \" -> Hash (cost=1.23..1.23 rows=23 width=10) \n> (actual time=0.019..0.019 rows=23 loops=1)\"\n> \" -> Seq Scan on locations l \n> (cost=0.00..1.23 
rows=23 width=10) (actual time=0.001..0.007 rows=23 \n> loops=1)\"\n> \"Total runtime: 2307.498 ms\"\n>\n> And when i perform the same query directly on the inherited table \n> (CHECK id_source=113) :\n> EXPLAIN ANALYZE\n> select\n> documents.id,\n> documents.num,\n> sources.name,\n> l.name\n> from\n> documents_mond documents,\n> locations l,\n> sources\n> where\n> documents.id_source = 113 and\n> documents.id_location=l.id and\n> documents.id_source=sources.id\n> order by\n> documents.id desc\n> limit 5;\n> \"Limit (cost=0.00..43.13 rows=5 width=24) (actual time=0.024..0.050 \n> rows=5 loops=1)\"\n> \" -> Nested Loop (cost=0.00..9091234.75 rows=1053929 width=24) \n> (actual time=0.023..0.049 rows=5 loops=1)\"\n> \" -> Nested Loop (cost=0.00..8796038.31 rows=1053929 \n> width=16) (actual time=0.020..0.035 rows=5 loops=1)\"\n> \" -> Index Scan Backward using idx_documents_mond_id on \n> documents_mond documents (cost=0.00..71930.23 rows=1053929 width=14) \n> (actual time=0.012..0.015 rows=5 loops=1)\"\n> \" Filter: (id_source = 113)\"\n> \" -> Index Scan using pk_sources on sources \n> (cost=0.00..8.27 rows=1 width=8) (actual time=0.003..0.003 rows=1 \n> loops=5)\"\n> \" Index Cond: (sources.id = 113)\"\n> \" -> Index Scan using locations_pkey on locations l \n> (cost=0.00..0.27 rows=1 width=10) (actual time=0.001..0.002 rows=1 \n> loops=5)\"\n> \" Index Cond: (l.id = documents.id_location)\"\n> \"Total runtime: 0.086 ms\"\n>\n> OR\n>\n> EXPLAIN ANALYZE\n> select\n> documents.id,\n> documents.num,\n> sources.name,\n> l.name\n> from\n> documents_mond documents,\n> locations l,\n> sources\n> where\n> /* documents.id_source = 113 and */\n> documents.id_location=l.id and\n> documents.id_source=sources.id\n> order by\n> documents.id desc\n> limit 5;\n> \"Limit (cost=0.00..3.13 rows=5 width=24) (actual time=0.025..0.052 \n> rows=5 loops=1)\"\n> \" -> Nested Loop (cost=0.00..659850.75 rows=1053929 width=24) \n> (actual time=0.024..0.051 rows=5 loops=1)\"\n> \" -> Nested Loop (cost=0.00..364654.31 rows=1053929 width=16) \n> (actual time=0.021..0.037 rows=5 loops=1)\"\n> \" -> Index Scan Backward using idx_documents_mond_id on \n> documents_mond documents (cost=0.00..69295.41 rows=1053929 width=14) \n> (actual time=0.011..0.013 rows=5 loops=1)\"\n> \" -> Index Scan using pk_sources on sources \n> (cost=0.00..0.27 rows=1 width=8) (actual time=0.003..0.003 rows=1 \n> loops=5)\"\n> \" Index Cond: (sources.id = documents.id_source)\"\n> \" -> Index Scan using locations_pkey on locations l \n> (cost=0.00..0.27 rows=1 width=10) (actual time=0.002..0.002 rows=1 \n> loops=5)\"\n> \" Index Cond: (l.id = documents.id_location)\"\n> \"Total runtime: 0.091 ms\"\n>\n> Is it a normal behavior ?\n> I need to rewrite all my Perl scripts to have query pointing only on \n> inherited tables (when possible) ?\n> I was thinking that query pointing on primary table were correctly \n> dispatched on inherited tables ... I missing something ?\n>\n> Regards\n>\n> Philippe\n>\n>\n> Ps : I'm french, so my english is approximate ... 
hoping it's \n> understandable\n>\n>\n\n", "msg_date": "Thu, 22 Jul 2010 10:57:58 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "FYI\n\nI've just installed Postgresql 9 beta 3 (9.0beta3 on i686-pc-linux-gnu, \ncompiled by GCC gcc (Debian 4.4.4-6) 4.4.4, 32-bit)\n\nAfter a pg_upgrade + vacuum analyze, i've got the following results :\n\nQuery on primary table :\nselect\n documents.id,\n documents.num,\n sources.name,\n l.name\n from\n documents,\n locations l,\n sources\n where\n documents.id_source = 113 and\n documents.id_location=l.id and\n documents.id_source=sources.id\n order by\n documents.id desc\n limit 5;\n\"Limit (cost=70356.46..70356.48 rows=5 width=23) (actual \ntime=2362.268..2362.271 rows=5 loops=1)\"\n\" -> Sort (cost=70356.46..72991.29 rows=1053932 width=23) (actual \ntime=2362.267..2362.269 rows=5 loops=1)\"\n\" Sort Key: public.documents.id\"\n\" Sort Method: top-N heapsort Memory: 17kB\"\n\" -> Nested Loop (cost=1.52..52851.03 rows=1053932 width=23) \n(actual time=0.062..1912.826 rows=1053929 loops=1)\"\n\" -> Index Scan using pk_sources on sources \n(cost=0.00..8.27 rows=1 width=8) (actual time=0.006..0.009 rows=1 loops=1)\"\n\" Index Cond: (id = 113)\"\n\" -> Hash Join (cost=1.52..42303.44 rows=1053932 \nwidth=21) (actual time=0.052..1490.353 rows=1053929 loops=1)\"\n\" Hash Cond: (public.documents.id_location = l.id)\"\n\" -> Append (cost=0.00..27810.36 rows=1053932 \nwidth=13) (actual time=0.027..842.627 rows=1053929 loops=1)\"\n\" -> Seq Scan on documents (cost=0.00..18.25 \nrows=3 width=39) (actual time=0.000..0.000 rows=0 loops=1)\"\n\" Filter: (id_source = 113)\"\n\" -> Seq Scan on documents_mond documents \n(cost=0.00..27792.11 rows=1053929 width=13) (actual time=0.025..497.517 \nrows=1053929 loops=1)\"\n\" Filter: (id_source = 113)\"\n\" -> Hash (cost=1.23..1.23 rows=23 width=10) \n(actual time=0.018..0.018 rows=23 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" -> Seq Scan on locations l (cost=0.00..1.23 \nrows=23 width=10) (actual time=0.001..0.010 rows=23 loops=1)\"\n\"Total runtime: 2362.369 ms\"\n\n\nOn inherted table :\n select\n documents.id,\n documents.num,\n sources.name,\n l.name\n from\n documents_mond documents,\n locations l,\n sources\n where\n documents.id_source = 113 and\n documents.id_location=l.id and\n documents.id_source=sources.id\n order by\n documents.id desc\n limit 5;\n\n\"Limit (cost=0.00..1.81 rows=5 width=23) (actual time=0.033..0.056 \nrows=5 loops=1)\"\n\" -> Nested Loop (cost=0.00..381351.92 rows=1053929 width=23) (actual \ntime=0.032..0.052 rows=5 loops=1)\"\n\" -> Nested Loop (cost=0.00..368169.54 rows=1053929 width=21) \n(actual time=0.023..0.037 rows=5 loops=1)\"\n\" -> Index Scan Backward using idx_documents_mond_id on \ndocuments_mond documents (cost=0.00..72973.11 rows=1053929 width=13) \n(actual time=0.014..0.017 rows=5 loops=1)\"\n\" Filter: (id_source = 113)\"\n\" -> Index Scan using locations_pkey on locations l \n(cost=0.00..0.27 rows=1 width=10) (actual time=0.002..0.003 rows=1 loops=5)\"\n\" Index Cond: (l.id = documents.id_location)\"\n\" -> Materialize (cost=0.00..8.27 rows=1 width=8) (actual \ntime=0.002..0.002 rows=1 loops=5)\"\n\" -> Index Scan using pk_sources on sources \n(cost=0.00..8.27 rows=1 width=8) (actual time=0.005..0.006 rows=1 loops=1)\"\n\" Index Cond: (id = 113)\"\n\"Total runtime: 0.095 ms\"\n\n\nResults are better than 8.4 if query is on inherted table but worth if 
\nquery is on primary table.\n\nSo waiting for 9.0 will not help me so much ! :)\n\n\n\nOn 22/07/2010 10:57, Philippe Rimbault wrote:\n> Oups! searching on the mailing list show me that it's a known problem ...\n>\n> http://archives.postgresql.org/pgsql-performance/2010-07/msg00063.php\n>\n> sorry !\n>\n>\n>\n> On 22/07/2010 09:52, Philippe Rimbault wrote:\n>>\n>> Hi all,\n>>\n>> I'm using Postgresql 8.4.4 on Debian.\n>> In postgresql.conf, constraint_exclusion is set to \"on\"\n>>\n>> I have partitioned tables with check constraints.\n>> My primary table :\n>> CREATE TABLE documents\n>> (\n>> id serial NOT NULL,\n>> id_source smallint,\n>> nod integer,\n>> num text,\n>> id_fourniture integer,\n>> dav date NOT NULL,\n>> maj timestamp without time zone NOT NULL DEFAULT now(),\n>> id_location \"char\",\n>> id_partition smallint,\n>> mark text\n>> );\n>>\n>> There is no row in \"only\" documents :\n>> SQL> select count(*) from only documents;\n>> -> 0\n>> SQL> select count(*) from documents;\n>> -> 160155756\n>>\n>> I have one thousand inherited tables like this one (with a different \n>> check constraint on each) :\n>> CREATE TABLE documents_mond\n>> (\n>> CONSTRAINT documents_mond_id_source_check CHECK (id_source = \n>> 113)\n>> )\n>> INHERITS (documents);\n>> CREATE INDEX idx_documents_mond_id\n>> ON documents_mond\n>> USING btree\n>> (id);\n>>\n>> CREATE INDEX idx_documents_mond_id_partition\n>> ON documents_mond\n>> USING btree\n>> (id_partition);\n>>\n>> CREATE INDEX idx_documents_mond_id_source_dav\n>> ON documents_mond\n>> USING btree\n>> (id_source, dav);\n>> ALTER TABLE documents_mond CLUSTER ON \n>> idx_documents_mond_id_source_dav;\n>>\n>> CREATE INDEX idx_documents_mond_id_source_nod\n>> ON documents_mond\n>> USING btree\n>> (id_source, nod);\n>>\n>> CREATE INDEX idx_documents_mond_id_source_num\n>> ON documents_mond\n>> USING btree\n>> (id_source, num);\n>>\n>> CREATE INDEX idx_documents_mond_maj\n>> ON documents_mond\n>> USING btree\n>> (maj);\n>>\n>> SQL> select count(*) from documents_mond;\n>> -> 1053929\n>>\n>> When i perform this query on the primary table :\n>> EXPLAIN ANALYZE\n>> select\n>> documents.id,\n>> documents.num,\n>> sources.name,\n>> l.name\n>> from\n>> documents,\n>> locations l,\n>> sources\n>> where\n>> documents.id_source = 113 and\n>> documents.id_location=l.id and\n>> documents.id_source=sources.id\n>> order by\n>> documents.id desc\n>> limit 5;\n>> \"Limit (cost=36209.55..36209.57 rows=5 width=24) (actual \n>> time=2307.181..2307.185 rows=5 loops=1)\"\n>> \" -> Sort (cost=36209.55..36512.56 rows=121202 width=24) (actual \n>> time=2307.180..2307.180 rows=5 loops=1)\"\n>> \" Sort Key: public.documents.id\"\n>> \" Sort Method: top-N heapsort Memory: 17kB\"\n>> \" -> Nested Loop (cost=1.52..34196.43 rows=121202 width=24) \n>> (actual time=0.076..1878.189 rows=1053929 loops=1)\"\n>> \" -> Index Scan using pk_sources on sources \n>> (cost=0.00..8.27 rows=1 width=8) (actual time=0.013..0.015 rows=1 \n>> loops=1)\"\n>> \" Index Cond: (id = 113)\"\n>> \" -> Hash Join (cost=1.52..32976.15 rows=121202 \n>> width=22) (actual time=0.059..1468.982 rows=1053929 loops=1)\"\n>> \" Hash Cond: (public.documents.id_location = l.id)\"\n>> \" -> Append (cost=0.00..27810.36 rows=1053932 \n>> width=14) (actual time=0.031..836.280 rows=1053929 loops=1)\"\n>> \" -> Seq Scan on documents \n>> (cost=0.00..18.25 rows=3 width=39) (actual time=0.001..0.001 rows=0 \n>> loops=1)\"\n>> \" Filter: (id_source = 113)\"\n>> \" -> Seq Scan on documents_mond documents \n>> 
(cost=0.00..27792.11 rows=1053929 width=14) (actual \n>> time=0.030..503.815 rows=1053929 loops=1)\"\n>> \" Filter: (id_source = 113)\"\n>> \" -> Hash (cost=1.23..1.23 rows=23 width=10) \n>> (actual time=0.019..0.019 rows=23 loops=1)\"\n>> \" -> Seq Scan on locations l \n>> (cost=0.00..1.23 rows=23 width=10) (actual time=0.001..0.007 rows=23 \n>> loops=1)\"\n>> \"Total runtime: 2307.498 ms\"\n>>\n>> And when i perform the same query directly on the inherited table \n>> (CHECK id_source=113) :\n>> EXPLAIN ANALYZE\n>> select\n>> documents.id,\n>> documents.num,\n>> sources.name,\n>> l.name\n>> from\n>> documents_mond documents,\n>> locations l,\n>> sources\n>> where\n>> documents.id_source = 113 and\n>> documents.id_location=l.id and\n>> documents.id_source=sources.id\n>> order by\n>> documents.id desc\n>> limit 5;\n>> \"Limit (cost=0.00..43.13 rows=5 width=24) (actual time=0.024..0.050 \n>> rows=5 loops=1)\"\n>> \" -> Nested Loop (cost=0.00..9091234.75 rows=1053929 width=24) \n>> (actual time=0.023..0.049 rows=5 loops=1)\"\n>> \" -> Nested Loop (cost=0.00..8796038.31 rows=1053929 \n>> width=16) (actual time=0.020..0.035 rows=5 loops=1)\"\n>> \" -> Index Scan Backward using idx_documents_mond_id on \n>> documents_mond documents (cost=0.00..71930.23 rows=1053929 width=14) \n>> (actual time=0.012..0.015 rows=5 loops=1)\"\n>> \" Filter: (id_source = 113)\"\n>> \" -> Index Scan using pk_sources on sources \n>> (cost=0.00..8.27 rows=1 width=8) (actual time=0.003..0.003 rows=1 \n>> loops=5)\"\n>> \" Index Cond: (sources.id = 113)\"\n>> \" -> Index Scan using locations_pkey on locations l \n>> (cost=0.00..0.27 rows=1 width=10) (actual time=0.001..0.002 rows=1 \n>> loops=5)\"\n>> \" Index Cond: (l.id = documents.id_location)\"\n>> \"Total runtime: 0.086 ms\"\n>>\n>> OR\n>>\n>> EXPLAIN ANALYZE\n>> select\n>> documents.id,\n>> documents.num,\n>> sources.name,\n>> l.name\n>> from\n>> documents_mond documents,\n>> locations l,\n>> sources\n>> where\n>> /* documents.id_source = 113 and */\n>> documents.id_location=l.id and\n>> documents.id_source=sources.id\n>> order by\n>> documents.id desc\n>> limit 5;\n>> \"Limit (cost=0.00..3.13 rows=5 width=24) (actual time=0.025..0.052 \n>> rows=5 loops=1)\"\n>> \" -> Nested Loop (cost=0.00..659850.75 rows=1053929 width=24) \n>> (actual time=0.024..0.051 rows=5 loops=1)\"\n>> \" -> Nested Loop (cost=0.00..364654.31 rows=1053929 \n>> width=16) (actual time=0.021..0.037 rows=5 loops=1)\"\n>> \" -> Index Scan Backward using idx_documents_mond_id on \n>> documents_mond documents (cost=0.00..69295.41 rows=1053929 width=14) \n>> (actual time=0.011..0.013 rows=5 loops=1)\"\n>> \" -> Index Scan using pk_sources on sources \n>> (cost=0.00..0.27 rows=1 width=8) (actual time=0.003..0.003 rows=1 \n>> loops=5)\"\n>> \" Index Cond: (sources.id = documents.id_source)\"\n>> \" -> Index Scan using locations_pkey on locations l \n>> (cost=0.00..0.27 rows=1 width=10) (actual time=0.002..0.002 rows=1 \n>> loops=5)\"\n>> \" Index Cond: (l.id = documents.id_location)\"\n>> \"Total runtime: 0.091 ms\"\n>>\n>> Is it a normal behavior ?\n>> I need to rewrite all my Perl scripts to have query pointing only on \n>> inherited tables (when possible) ?\n>> I was thinking that query pointing on primary table were correctly \n>> dispatched on inherited tables ... I missing something ?\n>>\n>> Regards\n>>\n>> Philippe\n>>\n>>\n>> Ps : I'm french, so my english is approximate ... 
hoping it's \n>> understandable\n>>\n>>\n>\n>\n\n", "msg_date": "Thu, 22 Jul 2010 12:03:12 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "Philippe Rimbault wrote:\n> I have one thousand inherited tables like this one (with a different \n> check constraint on each) :\n\nThe PostgreSQL partitioning system is aimed to support perhaps a hundred \ninherited tables. You can expect to get poor performance on queries if \nyou create 1000 of them. That's not the cause of your current problem, \njust pointing out there's a larger design problem here you'll probably \nhave to fix one day.\n\n> EXPLAIN ANALYZE\n> select\n> documents.id,\n> documents.num,\n> sources.name,\n> l.name\n> from\n> documents,\n> locations l,\n> sources\n> where\n> documents.id_source = 113 and\n> documents.id_location=l.id and\n> documents.id_source=sources.id\n> order by\n> documents.id desc\n> limit 5;\n\nPlease don't put your EXPLAIN plans surrounded in \" marks; it makes it \nharder to copy out of your message to analyze them with tools. I put \nthis bad one into http://explain.depesz.com/s/XD and it notes that the \n\"public.documents.id_location = l.id\" search is underestimating the \nnumber of rows by a factor of 8.7. You might get a better plan if you \ncan get better table statistics on that column. Did you run ANALYZE \nsince the partitioning was done? If not, that could be making this \nworse. You might increase the amount of table statistics on this \nspecific column too, not sure what would help without knowing exactly \nwhat's in there.\n\nAnother thing you can try is suggest the optimizer not use a hash join \nhere and see if it does the right thing instead; be a useful bit of \nfeedback to see what that plan turns out to be. Just put \"set \nenable_hashjoin=off;\" before the rest of the query, it will only impact \nthat session.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 22 Jul 2010 09:32:36 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "Greg,\n\nFirst : thank you for you help.\n\n\n\nOn 22/07/2010 15:32, Greg Smith wrote:\n> Philippe Rimbault wrote:\n>> I have one thousand inherited tables like this one (with a different \n>> check constraint on each) :\n>\n> The PostgreSQL partitioning system is aimed to support perhaps a \n> hundred inherited tables. You can expect to get poor performance on \n> queries if you create 1000 of them. That's not the cause of your \n> current problem, just pointing out there's a larger design problem \n> here you'll probably have to fix one day.\n\nRight now, there is only 6 inherited tables, but for performance issue, \nwhere are testing solutions on more partionned systeme (all work fine \nexcept for query with order by).\n\n>\n>> EXPLAIN ANALYZE\n>> select\n>> documents.id,\n>> documents.num,\n>> sources.name,\n>> l.name\n>> from\n>> documents,\n>> locations l,\n>> sources\n>> where\n>> documents.id_source = 113 and\n>> documents.id_location=l.id and\n>> documents.id_source=sources.id\n>> order by\n>> documents.id desc\n>> limit 5;\n>\n> Please don't put your EXPLAIN plans surrounded in \" marks; it makes it \n> harder to copy out of your message to analyze them with tools. 
I put \n> this bad one into http://explain.depesz.com/s/XD and it notes that the \n> \"public.documents.id_location = l.id\" search is underestimating the \n> number of rows by a factor of 8.7. You might get a better plan if you \n> can get better table statistics on that column. Did you run ANALYZE \n> since the partitioning was done? If not, that could be making this \n> worse. You might increase the amount of table statistics on this \n> specific column too, not sure what would help without knowing exactly \n> what's in there.\n>\n> Another thing you can try is suggest the optimizer not use a hash join \n> here and see if it does the right thing instead; be a useful bit of \n> feedback to see what that plan turns out to be. Just put \"set \n> enable_hashjoin=off;\" before the rest of the query, it will only \n> impact that session.\n>\n\nSorry for the output of the EXPLAIN ...\n\nVACUUM ANALYZE have been done just before test of query.\n\nI think that the optimizer overestimates \"public.documents.id_location = \nl.id\" because it plan on the primary table and not the child ...\nI've change statistics to 1000 for documents.id_location and result is \nthe same.\n\nI've tested \"set enable_hashjoin=off;\" and the result is worst (sorry \ni'm still using 9.0b3) :\n\nLimit (cost=197755.49..197755.50 rows=5 width=23) (actual \ntime=4187.148..4187.150 rows=5 loops=1)\n -> Sort (cost=197755.49..200390.32 rows=1053932 width=23) (actual \ntime=4187.146..4187.147 rows=5 loops=1)\n Sort Key: public.documents.id\n Sort Method: top-N heapsort Memory: 17kB\n -> Nested Loop (cost=151258.55..180250.06 rows=1053932 \nwidth=23) (actual time=1862.214..3769.611 rows=1053929 loops=1)\n -> Index Scan using pk_sources on sources \n(cost=0.00..8.27 rows=1 width=8) (actual time=0.007..0.013 rows=1 loops=1)\n Index Cond: (id = 113)\n -> Merge Join (cost=151258.55..169702.47 rows=1053932 \nwidth=21) (actual time=1862.204..3360.555 rows=1053929 loops=1)\n Merge Cond: (l.id = public.documents.id_location)\n -> Sort (cost=1.75..1.81 rows=23 width=10) \n(actual time=0.028..0.036 rows=21 loops=1)\n Sort Key: l.id\n Sort Method: quicksort Memory: 17kB\n -> Seq Scan on locations l (cost=0.00..1.23 \nrows=23 width=10) (actual time=0.002..0.009 rows=23 loops=1)\n -> Materialize (cost=151256.80..156526.46 \nrows=1053932 width=13) (actual time=1862.162..2841.302 rows=1053929 loops=1)\n -> Sort (cost=151256.80..153891.63 \nrows=1053932 width=13) (actual time=1862.154..2290.881 rows=1053929 loops=1)\n Sort Key: public.documents.id_location\n Sort Method: external merge Disk: 24496kB\n -> Append (cost=0.00..27810.36 \nrows=1053932 width=13) (actual time=0.003..838.644 rows=1053929 loops=1)\n -> Seq Scan on documents \n(cost=0.00..18.25 rows=3 width=39) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (id_source = 113)\n -> Seq Scan on documents_mond \ndocuments (cost=0.00..27792.11 rows=1053929 width=13) (actual \ntime=0.002..502.345 rows=1053929 loops=1)\n Filter: (id_source = 113)\nTotal runtime: 4198.703 ms\n\n\n", "msg_date": "Thu, 22 Jul 2010 16:10:51 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "\n> The PostgreSQL partitioning system is aimed to support perhaps a \n> hundred inherited tables. You can expect to get poor performance on \n> queries if you create 1000 of them. \n\nHi,\n\nWhy is that you would expect poor performance for say 1000 or more? 
I \nhave a ~1000 inherited tables and I don't see any significant slowdowns. \nI only ever access a single inherited table at a time though in this \nsituation. I suppose I am using inheritance only for organization in \nthis case...\n\nThanks,\nGerald\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2010 15:03:55 -0700", "msg_from": "Gerald Fontenay <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "On Fri, 2010-07-23 at 15:03 -0700, Gerald Fontenay wrote:\n> > The PostgreSQL partitioning system is aimed to support perhaps a \n> > hundred inherited tables. You can expect to get poor performance on \n> > queries if you create 1000 of them. \n> \n> Hi,\n> \n> Why is that you would expect poor performance for say 1000 or more? I \n> have a ~1000 inherited tables and I don't see any significant slowdowns. \n> I only ever access a single inherited table at a time though in this \n> situation. I suppose I am using inheritance only for organization in \n> this case...\n\nIt is variable based on workload and as I recall has to do with the\nplanning time. As the number of children increases, so does the planning\ntime.\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Fri, 23 Jul 2010 15:16:10 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "Gerald Fontenay wrote:\n>\n>> The PostgreSQL partitioning system is aimed to support perhaps a \n>> hundred inherited tables. You can expect to get poor performance on \n>> queries if you create 1000 of them. \n>\n> Why is that you would expect poor performance for say 1000 or more?\n\nWhen the query planner executes, it has to scan through every child \ntable to run the constraint exclusion algorithm for determining whether \nthat table needs to be included in the query results or not. The time \nthat takes is proportional to the number of partitions. If your queries \ntake a long time to execute relative to how long they take to plan, you \nmay not have noticed this. But for shorter queries, and ones where \nthere are lots of joins that require many plans be evaluated, the \nplanning overhead increase can be significant. The threshold for where \nit becomes painful is obviously workload dependent, but the thing to be \ncareful of is that supporting very large numbers of partitions is not \nsomething that the database query planner has been optimized for yet.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 24 Jul 2010 01:12:28 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "Joshua D. Drake wrote:\n> On Fri, 2010-07-23 at 15:03 -0700, Gerald Fontenay wrote:\n> \n>>> The PostgreSQL partitioning system is aimed to support perhaps a \n>>> hundred inherited tables. You can expect to get poor performance on \n>>> queries if you create 1000 of them. \n>>> \n>> Hi,\n>>\n>> Why is that you would expect poor performance for say 1000 or more? I \n>> have a ~1000 inherited tables and I don't see any significant slowdowns. 
\n>> I only ever access a single inherited table at a time though in this \n>> situation. I suppose I am using inheritance only for organization in \n>> this case...\n>> \n>\n> It is variable based on workload and as I recall has to do with the\n> planning time. As the number of children increases, so does the planning\n> time.\n>\n> \n \n\n I do not execute queries against the parent and children tables.\nIt became obvious that this can be quite slow.\nI am asking specifically if I can expect problems in *that* situation, again, where I directly query a single child table only and where I may have thousands. Just like a any other single table query, no?\n\nThank you,\nGerald\n\n\n\n", "msg_date": "Fri, 23 Jul 2010 23:29:37 -0700", "msg_from": "Gerald Fontenay <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "Joshua D. Drake wrote:\n> On Fri, 2010-07-23 at 15:03 -0700, Gerald Fontenay wrote:\n> \n>>> The PostgreSQL partitioning system is aimed to support perhaps a \n>>> hundred inherited tables. You can expect to get poor performance on \n>>> queries if you create 1000 of them. \n>>> \n>> Hi,\n>>\n>> Why is that you would expect poor performance for say 1000 or more? I \n>> have a ~1000 inherited tables and I don't see any significant slowdowns. \n>> I only ever access a single inherited table at a time though in this \n>> situation. I suppose I am using inheritance only for organization in \n>> this case...\n>> \n>\n> It is variable based on workload and as I recall has to do with the\n> planning time. As the number of children increases, so does the planning\n> time.\n>\n> Joshua D. Drake\n>\n>\n> \nThank you for your response. So if I query only my target child table, \nthis should be \"just like\" any other single table wrt planning right? I \nhave thousands of these tables. (I suppose that I'm only using \ninheritance for the sake of organization in this situation...)\n\nThanks!\nGerald\n\n\n\n\n\n\nJoshua D. Drake wrote:\n\nOn Fri, 2010-07-23 at 15:03 -0700, Gerald Fontenay wrote:\n \n\n\nThe PostgreSQL partitioning system is aimed to support perhaps a \nhundred inherited tables. You can expect to get poor performance on \nqueries if you create 1000 of them. \n \n\nHi,\n\nWhy is that you would expect poor performance for say 1000 or more? I \nhave a ~1000 inherited tables and I don't see any significant slowdowns. \nI only ever access a single inherited table at a time though in this \nsituation. I suppose I am using inheritance only for organization in \nthis case...\n \n\n\nIt is variable based on workload and as I recall has to do with the\nplanning time. As the number of children increases, so does the planning\ntime.\n\nJoshua D. Drake\n\n\n \n\nThank you\nfor your response. So if I query only my target child table, this\nshould be \"just like\" any other single table wrt planning right? I have\nthousands of these tables. (I suppose that I'm only using inheritance\nfor the sake of organization in this situation...)\n\nThanks!\nGerald", "msg_date": "Mon, 26 Jul 2010 14:26:45 -0700", "msg_from": "Gerald Fontenay <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" }, { "msg_contents": "On Mon, Jul 26, 2010 at 5:26 PM, Gerald Fontenay <[email protected]> wrote:\n> Thank you for your response. So if I query only my target child table, this\n> should be \"just like\" any other single table wrt planning right? I have\n> thousands of these tables. 
(I suppose that I'm only using inheritance for\n> the sake of organization in this situation...)\n\nYeah, I wouldn't expect planning time to be affected by whether a\ntable has parents; only whether it has children.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 4 Aug 2010 09:16:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange explain on partitioned tables" } ]
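A short sketch of the two experiments suggested in the partitioning thread above, using the table and column names from its example schema; the statistics target of 1000 is only an illustrative value, and the enable_hashjoin toggle is meant for a throwaway diagnostic session, not for production.

    -- Give the planner better statistics on the misestimated join column, then re-analyze:
    ALTER TABLE documents_mond ALTER COLUMN id_location SET STATISTICS 1000;
    ANALYZE documents_mond;
    -- See which plan the optimizer picks once the hash join is ruled out:
    SET enable_hashjoin = off;
    EXPLAIN ANALYZE
      SELECT d.id, d.num, s.name, l.name
      FROM documents d, locations l, sources s
      WHERE d.id_source = 113 AND d.id_location = l.id AND d.id_source = s.id
      ORDER BY d.id DESC LIMIT 5;
    RESET enable_hashjoin;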
[ { "msg_contents": "I have a table.\n\n\\d email_track\nTable \"public.email_track\"\n Column | Type | Modifiers\n--------+---------+--------------------\n crmid | integer | not null default 0\n mailid | integer | not null default 0\n count | integer |\nIndexes:\n \"email_track_pkey\" PRIMARY KEY, btree (crmid, mailid) CLUSTER\n \"email_track_count_idx\" btree (count)\n\n\nexplain analyze select * from email_track where count > 10 ;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email_track (cost=12.79..518.05 rows=1941 width=12)\n(actual time=0.430..3.047 rows=1743 loops=1)\n Recheck Cond: (count > 10)\n -> Bitmap Index Scan on email_track_count_idx (cost=0.00..12.79\nrows=1941 width=0) (actual time=0.330..0.330 rows=1743 loops=1)\n Index Cond: (count > 10)\n Total runtime: 4.702 ms\n(5 rows)\n\nexplain analyze select * from email_track where count < 10000 ;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on email_track (cost=0.00..1591.65 rows=88851 width=12) (actual\ntime=0.011..118.499 rows=88852 loops=1)\n Filter: (count < 10000)\n Total runtime: 201.206 ms\n(3 rows)\n\nI don't know why index scan is not working for count < 10000 operation.\nAny idea please.\n\nI have a table.\\d email_trackTable \"public.email_track\" Column |  Type   |     Modifiers      --------+---------+-------------------- crmid  | integer | not null default 0 mailid | integer | not null default 0\n count  | integer | Indexes:    \"email_track_pkey\" PRIMARY KEY, btree (crmid, mailid) CLUSTER    \"email_track_count_idx\" btree (count)explain analyze select * from email_track where count > 10 ;\n                                                                     QUERY PLAN                                                                     ----------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email_track  (cost=12.79..518.05 rows=1941 width=12) (actual time=0.430..3.047 rows=1743 loops=1)   Recheck Cond: (count > 10)   ->  Bitmap Index Scan on email_track_count_idx  (cost=0.00..12.79 rows=1941 width=0) (actual time=0.330..0.330 rows=1743 loops=1)\n         Index Cond: (count > 10) Total runtime: 4.702 ms(5 rows)explain analyze select * from email_track where count < 10000 ;                                                            QUERY PLAN                                                            \n---------------------------------------------------------------------------------------------------------------------------------- Seq Scan on email_track  (cost=0.00..1591.65 rows=88851 width=12) (actual time=0.011..118.499 rows=88852 loops=1)\n   Filter: (count < 10000) Total runtime: 201.206 ms(3 rows)I don't know why index scan is not working for count < 10000 operation.Any idea please.", "msg_date": "Thu, 22 Jul 2010 14:05:22 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "why index is not working in < operation?" 
}, { "msg_contents": "2010/7/22 AI Rumman <[email protected]>\n\n> I have a table.\n>\n> \\d email_track\n> Table \"public.email_track\"\n> Column | Type | Modifiers\n> --------+---------+--------------------\n> crmid | integer | not null default 0\n> mailid | integer | not null default 0\n> count | integer |\n> Indexes:\n> \"email_track_pkey\" PRIMARY KEY, btree (crmid, mailid) CLUSTER\n> \"email_track_count_idx\" btree (count)\n>\n>\n> explain analyze select * from email_track where count > 10 ;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on email_track (cost=12.79..518.05 rows=1941 width=12)\n> (actual time=0.430..3.047 rows=1743 loops=1)\n> Recheck Cond: (count > 10)\n> -> Bitmap Index Scan on email_track_count_idx (cost=0.00..12.79\n> rows=1941 width=0) (actual time=0.330..0.330 rows=1743 loops=1)\n> Index Cond: (count > 10)\n> Total runtime: 4.702 ms\n> (5 rows)\n>\n> explain analyze select * from email_track where count < 10000 ;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on email_track (cost=0.00..1591.65 rows=88851 width=12) (actual\n> time=0.011..118.499 rows=88852 loops=1)\n> Filter: (count < 10000)\n> Total runtime: 201.206 ms\n> (3 rows)\n>\n> I don't know why index scan is not working for count < 10000 operation.\n> Any idea please.\n>\n\nDatabase knows, due to table statistics, that the query \">10\" would return\nsmall (1941) number of rows, while query \"<10000\" would return big\n(88851) number of rows. The \"small\" and \"big\" is quite relative, but the\nresult is that the database knows, that it would be faster not to use index,\nif the number of returning rows is big.\n\nregards\nSzymon Guz\n\n2010/7/22 AI Rumman <[email protected]>\nI have a table.\\d email_trackTable \"public.email_track\" Column |  Type   |     Modifiers      --------+---------+-------------------- crmid  | integer | not null default 0 mailid | integer | not null default 0\n\n count  | integer | Indexes:    \"email_track_pkey\" PRIMARY KEY, btree (crmid, mailid) CLUSTER    \"email_track_count_idx\" btree (count)explain analyze select * from email_track where count > 10 ;\n\n                                                                     QUERY PLAN                                                                     ----------------------------------------------------------------------------------------------------------------------------------------------------\n\n Bitmap Heap Scan on email_track  (cost=12.79..518.05 rows=1941 width=12) (actual time=0.430..3.047 rows=1743 loops=1)   Recheck Cond: (count > 10)   ->  Bitmap Index Scan on email_track_count_idx  (cost=0.00..12.79 rows=1941 width=0) (actual time=0.330..0.330 rows=1743 loops=1)\n\n         Index Cond: (count > 10) Total runtime: 4.702 ms(5 rows)explain analyze select * from email_track where count < 10000 ;                                                            QUERY PLAN                                                            \n\n---------------------------------------------------------------------------------------------------------------------------------- Seq Scan on email_track  (cost=0.00..1591.65 rows=88851 width=12) (actual time=0.011..118.499 rows=88852 loops=1)\n\n   Filter: (count < 10000) Total runtime: 201.206 ms(3 rows)I don't know why index scan 
is not working for count < 10000 operation.Any idea please.\nDatabase knows, due to table statistics, that the query \">10\" would return small (1941) number of rows, while query \"<10000\" would return big (88851) number of rows. The \"small\" and \"big\" is quite relative, but the result is that the database knows, that it would be faster not to use index, if the number of returning rows is big.\nregardsSzymon Guz", "msg_date": "Thu, 22 Jul 2010 10:11:34 +0200", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why index is not working in < operation?" },
{ "msg_contents": "In response to AI Rumman :\n> I have a table.\n> \n> \\d email_track\n> Table \"public.email_track\"\n>  Column |  Type   |     Modifiers\n> --------+---------+--------------------\n>  crmid  | integer | not null default 0\n>  mailid | integer | not null default 0\n>  count  | integer |\n> Indexes:\n>     \"email_track_pkey\" PRIMARY KEY, btree (crmid, mailid) CLUSTER\n>     \"email_track_count_idx\" btree (count)\n> \n> \n> explain analyze select * from email_track where count > 10 ;\n>                                                                      QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>  Bitmap Heap Scan on email_track  (cost=12.79..518.05 rows=1941 width=12)\n> (actual time=0.430..3.047 rows=1743 loops=1)\n>    Recheck Cond: (count > 10)\n>    ->  Bitmap Index Scan on email_track_count_idx  (cost=0.00..12.79 rows=1941\n> width=0) (actual time=0.330..0.330 rows=1743 loops=1)\n>          Index Cond: (count > 10)\n>  Total runtime: 4.702 ms\n> (5 rows)\n> \n> explain analyze select * from email_track where count < 10000 ;\n>                                                             QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n>  Seq Scan on email_track  (cost=0.00..1591.65 rows=88851 width=12) (actual time\n> =0.011..118.499 rows=88852 loops=1)\n>    Filter: (count < 10000)\n>  Total runtime: 201.206 ms\n> (3 rows)\n> \n> I don't know why index scan is not working for count < 10000 operation.\n> Any idea please.\n\nHow many rows contains the table? I think, with your where-condition\ncount < 10000 roughly the whole table in the result, right?\n\nIn this case, a seq-scan is cheaper than an index-scan.\n\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Thu, 22 Jul 2010 10:16:37 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why index is not working in < operation?" } ]
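To see the cost-based decision described in the replies above for yourself, here is a minimal sketch against the same table; the enable_seqscan toggle is a one-session diagnostic only, and the point is to compare the two plans' costs, not to force the index scan in production.

    -- How selective is each predicate, really?
    SELECT count(*) FROM email_track;
    SELECT count(*) FROM email_track WHERE count < 10000;
    -- Temporarily discourage the seq scan so the planner shows the index-scan alternative:
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM email_track WHERE count < 10000;
    RESET enable_seqscan;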
[ { "msg_contents": "Hi all..\nCan any one help me?\nI'd like to know how can we get the following information in\nPostgreSQL:\nExecution plan\nThe I/O physical reads and logical reads, CPU consumption, number of\nDB block used, and any other information relevant to performance.\nTaking into consideration that these information could be extracted\nfrom Oracle by AWR, TKPROF, ...etc.\nThanks.\n", "msg_date": "Thu, 22 Jul 2010 21:06:00 -0700 (PDT)", "msg_from": "std pik <[email protected]>", "msg_from_op": true, "msg_subject": "Execution Plan" }, { "msg_contents": "Hello\n\n2010/7/23 std pik <[email protected]>:\n> Hi all..\n> Can any one help me?\n> I'd like to know how can we get the following information in\n> PostgreSQL:\n> Execution plan\n> The I/O physical reads and logical reads, CPU consumption, number of\n> DB block used, and any other information relevant to performance.\n> Taking into consideration that these information could be extracted\n> from Oracle by AWR, TKPROF, ...etc.\n> Thanks.\n>\n\nIt is depend on version of PostgreSQL that you use. In 9.0 you can\n\nEXPLAIN explain (analyze true, buffers true, costs true) select * from\npg_tables;\n\nother useful info are in table pg_stat_user_tables, pg_stat_user_indexes\n\nBut for example CPU consumption you can see never - PostgreSQL uses\nlittle bit different methods.\n\nRegards\n\nPavel Stehule\n\nmaybe you searching some like http://pgfouine.projects.postgresql.org/\n\n\n\n\n\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 23 Jul 2010 09:43:27 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution Plan" }, { "msg_contents": "On Thu, Jul 22, 2010 at 11:06 PM, std pik <[email protected]> wrote:\n> Hi all..\n> Can any one help me?\n> I'd like to know how can we get the following information in\n> PostgreSQL:\n> Execution plan\n> The I/O physical reads and logical reads, CPU consumption, number of\n> DB block used, and any other information relevant to performance.\n> Taking into consideration that these information could be extracted\n> from Oracle by AWR, TKPROF, ...etc.\n> Thanks.\n\nCheck out pgstatspack:\n\nhttp://blogs.sun.com/glennf/entry/pgstatspack_getting_at_postgres_performance\nhttp://pgfoundry.org/projects/pgstatspack/\n", "msg_date": "Tue, 3 Aug 2010 22:29:57 -0500", "msg_from": "=?UTF-8?Q?Rodrigo_E=2E_De_Le=C3=B3n_Plicet?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution Plan" } ]
[ { "msg_contents": "Hello list,\n\nProbably like many other's I've wondered why no SSD manufacturer puts a \nsmall BBU on a SSD drive. Triggered by Greg Smith's mail \nhttp://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php \nhere, and also anandtech's review at \nhttp://www.anandtech.com/show/2899/1 (see page 6 for pictures of the \ncapacitor) I ordered a SandForce drive and this week it finally arrived.\n\nAnd now I have to test it and was wondering about some things like\n\n* How to test for power failure? I thought by running on the same \nmachine a parallel pgbench setup on two clusters where one runs with \ndata and wal on a rotating disk, the other on the SSD, both without BBU \ncontroller. Then turn off power. Do that a few times. The problem in \nthis scenario is that even when the SSD would show not data loss and the \nrotating disk would for a few times, a dozen tests without failure isn't \nactually proof that the drive can write it's complete buffer to disk \nafter power failure.\n\n* How long should the power be turned off? A minute? 15 minutes?\n\n* What filesystem to use on the SSD? To minimize writes and maximize \nchance for seeing errors I'd choose ext2 here. For the sake of not \ncomparing apples with pears I'd have to go with ext2 on the rotating \ndata disk as well.\n\nDo you guys have any more ideas to properly 'feel this disk at its teeth' ?\n\nregards,\nYeb Havinga\n\n", "msg_date": "Sat, 24 Jul 2010 09:20:50 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Testing Sandforce SSD" }, { "msg_contents": "\n> Do you guys have any more ideas to properly 'feel this disk at its \n> teeth' ?\n\nWhile an 'end-to-end' test using PG is fine, I think it would be easier \nto determine if the drive is behaving correctly by using a simple test \nprogram that emulates the storage semantics the WAL expects. Have it \nwrite a constant stream of records, fsync'ing after each write. Record \nthe highest record number flushed so far in some place that won't be \nlost with the drive under test (e.g. send it over the network to another \nmachine).\n\nKill the power, bring the system back up again and examine what's at the \ntail end of that file. I think this will give you the worst case test \nwith the easiest result discrimination.\n\nIf you want to you could add concurrent random writes to another file \nfor extra realism.\n\nSomeone here may already have a suitable test program. I know I've \nwritten several over the years in order to test I/O performance, prove \nthe existence of kernel bugs, and so on.\n\nI doubt it matters much how long the power is turned of. A second should \nbe plenty time to flush pending writes if the drive is going to do so.\n\n\n", "msg_date": "Sat, 24 Jul 2010 01:37:01 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Sat, 24 Jul 2010, David Boreham wrote:\n\n>> Do you guys have any more ideas to properly 'feel this disk at its teeth' ?\n>\n> While an 'end-to-end' test using PG is fine, I think it would be easier to \n> determine if the drive is behaving correctly by using a simple test program \n> that emulates the storage semantics the WAL expects. Have it write a constant \n> stream of records, fsync'ing after each write. Record the highest record \n> number flushed so far in some place that won't be lost with the drive under \n> test (e.g. 
send it over the network to another machine).\n>\n> Kill the power, bring the system back up again and examine what's at the tail \n> end of that file. I think this will give you the worst case test with the \n> easiest result discrimination.\n>\n> If you want to you could add concurrent random writes to another file for \n> extra realism.\n>\n> Someone here may already have a suitable test program. I know I've written \n> several over the years in order to test I/O performance, prove the existence \n> of kernel bugs, and so on.\n>\n> I doubt it matters much how long the power is turned of. A second should be \n> plenty time to flush pending writes if the drive is going to do so.\n\nremember that SATA is designed to be hot-plugged, so you don't have to \nkill the entire system to kill power to the drive.\n\nthis is a little more ubrupt than the system loosing power, but in terms \nof loosing data this is about the worst case (while at the same time, it \neliminates the possibility that the OS continues to perform writes to the \ndrive as power dies, which is a completely different class of problems, \nindependant of the drive type)\n\nDavid Lang\n", "msg_date": "Sat, 24 Jul 2010 01:25:38 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Jul 24, 2010, at 12:20 AM, Yeb Havinga wrote:\n\n> The problem in this scenario is that even when the SSD would show not data loss and the rotating disk would for a few times, a dozen tests without failure isn't actually proof that the drive can write it's complete buffer to disk after power failure.\n\nYes, this is always going to be the case with testing like this - you'll never be able to prove that it will always be safe. ", "msg_date": "Sat, 24 Jul 2010 09:50:24 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Probably like many other's I've wondered why no SSD manufacturer puts \n> a small BBU on a SSD drive. Triggered by Greg Smith's mail \n> http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php \n> here, and also anandtech's review at \n> http://www.anandtech.com/show/2899/1 (see page 6 for pictures of the \n> capacitor) I ordered a SandForce drive and this week it finally arrived.\n\nNote that not all of the Sandforce drives include a capacitor; I hope \nyou got one that does! I wasn't aware any of the SF drives with a \ncapacitor on them were even shipping yet, all of the ones I'd seen were \nthe chipset that doesn't include one still. Haven't checked in a few \nweeks though.\n\n> * How to test for power failure?\n\nI've had good results using one of the early programs used to \ninvestigate this class of problems: \nhttp://brad.livejournal.com/2116715.html?page=2\n\nYou really need a second \"witness\" server to do this sort of thing \nreliably, which that provides.\n\n> * What filesystem to use on the SSD? To minimize writes and maximize \n> chance for seeing errors I'd choose ext2 here. \n\nI don't consider there to be any reason to deploy any part of a \nPostgreSQL database on ext2. The potential for downtime if the fsck \ndoesn't happen automatically far outweighs the minimal performance \nadvantage you'll actually see in real applications. 
All of the \nbenchmarks showing large gains for ext2 over ext3 I have seen been \nsynthetic, not real database performance; the internal ones I've run \nusing things like pgbench do not show a significant improvement. (Yes, \nI'm already working on finding time to publicly release those findings)\n\nPut it on ext3, toggle on noatime, and move on to testing. The overhead \nof the metadata writes is the least of the problems when doing \nwrite-heavy stuff on Linux.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 24 Jul 2010 13:14:44 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Sat, Jul 24, 2010 at 3:20 AM, Yeb Havinga <[email protected]> wrote:\n> Hello list,\n>\n> Probably like many other's I've wondered why no SSD manufacturer puts a\n> small BBU on a SSD drive. Triggered by Greg Smith's mail\n> http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here,\n> and also anandtech's review at http://www.anandtech.com/show/2899/1 (see\n> page 6 for pictures of the capacitor) I ordered a SandForce drive and this\n> week it finally arrived.\n>\n> And now I have to test it and was wondering about some things like\n>\n> * How to test for power failure?\n\nI test like this: write a small program that sends a endless series of\ninserts like this:\n*) on the server:\ncreate table foo (id serial);\n*) from the client:\ninsert into foo default values returning id;\non the client side print the inserted value to the terminal after the\nquery is reported as complete to the client.\n\nRun the program, wait a bit, then pull the plug on the server. The\ndatabase should recover clean and the last reported insert on the\nclient should be there when it restarts. Try restarting immediately a\nfew times then if that works try it and let it simmer overnight. If\nit makes it at least 24-48 hours that's a very promising sign.\n\nmerlin\n", "msg_date": "Sat, 24 Jul 2010 14:06:01 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith wrote:\n> Note that not all of the Sandforce drives include a capacitor; I hope \n> you got one that does! I wasn't aware any of the SF drives with a \n> capacitor on them were even shipping yet, all of the ones I'd seen \n> were the chipset that doesn't include one still. Haven't checked in a \n> few weeks though.\nI think I did, it was expensive enough, though while ordering its very \neasy to order the wrong one, all names on the product category page look \nthe same. (OCZ Vertex 2 Pro)\n>> * How to test for power failure?\n>\n> I've had good results using one of the early programs used to \n> investigate this class of problems: \n> http://brad.livejournal.com/2116715.html?page=2\nA great tool, thanks for the link!\n\n diskchecker: running 34 sec, 4.10% coverage of 500 MB (1342 writes; 39/s)\n diskchecker: running 35 sec, 4.24% coverage of 500 MB (1390 writes; 39/s)\n diskchecker: running 36 sec, 4.35% coverage of 500 MB (1427 writes; 39/s)\n diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s)\ndidn't get 'ok' from server (11387 316950), msg=[] = Connection reset by \npeer at ./diskchecker.pl line 132.\n\nhere's where I removed the power and left it off for about a minute. 
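For the SQL-level variant Merlin describes, the client side is just as small. The one rule is to record an id only after the commit has been acknowledged, so the client terminal acts as the witness. A minimal sketch, assuming Python with psycopg2 and a made-up connection string:

import psycopg2

# assumes Merlin's table already exists on the server: create table foo (id serial);
conn = psycopg2.connect("host=testbox dbname=test user=postgres")
cur = conn.cursor()

last = None                    # highest id whose commit was acknowledged
try:
    while True:
        cur.execute("INSERT INTO foo DEFAULT VALUES RETURNING id")
        new_id = cur.fetchone()[0]
        conn.commit()          # do not trust new_id before this returns
        last = new_id
        print(last)
except psycopg2.Error:
    print("lost the server; last confirmed id was", last)

After the server comes back up, select max(id) from foo has to be at least the last id that reached the terminal, otherwise an acknowledged commit was lost.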
\nThen started again then did the verify\n\nyeb@a:~$ ./diskchecker.pl -s client45.eemnes verify test_file\n verifying: 0.00%\nTotal errors: 0\n\n:-)\nthis was on ext2\n\n>> * What filesystem to use on the SSD? To minimize writes and maximize \n>> chance for seeing errors I'd choose ext2 here. \n>\n> I don't consider there to be any reason to deploy any part of a \n> PostgreSQL database on ext2. The potential for downtime if the fsck \n> doesn't happen automatically far outweighs the minimal performance \n> advantage you'll actually see in real applications.\nHmm.. wouldn't that apply for other filesystems as well? I know that JFS \nalso won't mount if booted unclean, it somehow needs a marker from the \nfsck. Don't know for ext3, xfs etc.\n> All of the benchmarks showing large gains for ext2 over ext3 I have \n> seen been synthetic, not real database performance; the internal ones \n> I've run using things like pgbench do not show a significant \n> improvement. (Yes, I'm already working on finding time to publicly \n> release those findings)\nThe reason I'd choose ext2 on the SSD was mainly to decrease the number \nof writes, not for performance. Maybe I should ultimately do tests for \nboth journalled and ext2 filesystems and compare the amount of data per \nx pgbench transactions.\n> Put it on ext3, toggle on noatime, and move on to testing. The \n> overhead of the metadata writes is the least of the problems when \n> doing write-heavy stuff on Linux.\nWill surely do and post the results.\n\nthanks,\nYeb Havinga\n", "msg_date": "Sat, 24 Jul 2010 21:49:46 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s)\n> Total errors: 0\n>\n> :-)\nOTOH, I now notice the 39 write /s .. If that means ~ 39 tps... bummer.\n\n\n", "msg_date": "Sat, 24 Jul 2010 21:54:58 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith wrote:\n> Note that not all of the Sandforce drives include a capacitor; I hope \n> you got one that does! I wasn't aware any of the SF drives with a \n> capacitor on them were even shipping yet, all of the ones I'd seen \n> were the chipset that doesn't include one still. Haven't checked in a \n> few weeks though.\n\nAnswer my own question here: the drive Yeb got was the brand spanking \nnew OCZ Vertex 2 Pro, selling for $649 at Newegg for example: \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16820227535 and with \nthe supercacitor listed right in the main production specifications \nthere. This is officially the first inexpensive (relatively) SSD with a \nbattery-backed write cache built into it. If Yeb's test results prove \nit works as it's supposed to under PostgreSQL, I'll be happy to finally \nhave a moderately priced SSD I can recommend to people for database \nuse. 
And I fear I'll be out of excuses to avoid buying one as a toy for \nmy home system.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 24 Jul 2010 16:21:21 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:\n> Greg Smith wrote:\n> > Note that not all of the Sandforce drives include a capacitor; I hope \n> > you got one that does! I wasn't aware any of the SF drives with a \n> > capacitor on them were even shipping yet, all of the ones I'd seen \n> > were the chipset that doesn't include one still. Haven't checked in a \n> > few weeks though.\n> \n> Answer my own question here: the drive Yeb got was the brand spanking \n> new OCZ Vertex 2 Pro, selling for $649 at Newegg for example: \n> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227535 and with \n> the supercacitor listed right in the main production specifications \n> there. This is officially the first inexpensive (relatively) SSD with a \n> battery-backed write cache built into it. If Yeb's test results prove \n> it works as it's supposed to under PostgreSQL, I'll be happy to finally \n> have a moderately priced SSD I can recommend to people for database \n> use. And I fear I'll be out of excuses to avoid buying one as a toy for \n> my home system.\n\nThat is quite the toy. I can get 4 SATA-II with RAID Controller, with\nbattery backed cache, for the same price or less :P\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Sat, 24 Jul 2010 14:09:29 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Joshua D. Drake wrote:\n> That is quite the toy. I can get 4 SATA-II with RAID Controller, with\n> battery backed cache, for the same price or less :P\n> \n\nTrue, but if you look at tests like \nhttp://www.anandtech.com/show/2899/12 it suggests there's probably at \nleast a 6:1 performance speedup for workloads with a lot of random I/O \nto them. And I'm really getting sick of the power/noise/heat that the 6 \ndrives in my home server produces.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 24 Jul 2010 17:14:55 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Yeb Havinga wrote:\n>> diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; \n>> 39/s)\n>> Total errors: 0\n>>\n>> :-)\n> OTOH, I now notice the 39 write /s .. If that means ~ 39 tps... bummer.\nWhen playing with it a bit more, I couldn't get the test_file to be \ncreated in the right place on the test system. It turns out I had the \ndiskchecker config switched and 39 write/s was the speed of the \nnot-rebooted system, sorry.\n\nI did several diskchecker.pl tests this time with the testfile on the \nSSD, none of the tests have returned an error :-)\n\nWrites/s start low but quickly converge to a number in the range of 1200 \nto 1800. 
The writes diskchecker does are 16kB writes. Making this 4kB \nwrites does not increase writes/s. 32kB seems a little less, 64kB is \nabout two third of initial writes/s and 128kB is half.\n\nSo no BBU speeds here for writes, but still ~ factor 10 improvement of \niops for a rotating SATA disk.\n\nregards,\nYeb Havinga\n\nPS: hdparm showed write cache was on. I did tests with both ext2 and \nxfs, where xfs tests I did with both barrier and nobarrier.\n\n", "msg_date": "Sat, 24 Jul 2010 23:22:12 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Writes/s start low but quickly converge to a number in the range of \n> 1200 to 1800. The writes diskchecker does are 16kB writes. Making this \n> 4kB writes does not increase writes/s. 32kB seems a little less, 64kB \n> is about two third of initial writes/s and 128kB is half.\n\nLet's turn that into MB/s numbers:\n\n4k * 1200 = 4.7 MB/s\n8k * 1200 = 9.4 MB/s\n16k * 1200 = 18.75 MB/s\n64kb * 1200 * 2/3 [800] = 37.5 MB/s\n128kb * 1200 / 2 [600] = 75 MB/s\n\nFor comparison sake, a 7200 RPM drive running PostgreSQL will do <120 \ncommits/second without a BBWC, so at an 8K block size that's <1 MB/s. \nIf you put a cache in the middle, I'm used to seeing about 5000 8K \ncommits/second, which is around 40 MB/s. So this is sitting right in \nthe middle of those two. Sequential writes with a commit after each one \nlike this are basically the worst case for the SSD, so if it can provide \nreasonable performance on that I'd be happy.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 24 Jul 2010 18:01:00 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith wrote:\n> Put it on ext3, toggle on noatime, and move on to testing. The \n> overhead of the metadata writes is the least of the problems when \n> doing write-heavy stuff on Linux.\nI ran a pgbench run and power failure test during pgbench with a 3 year \nold computer\n\n8GB DDR ?\nIntel Core 2 duo 6600 @ 2.40GHz\nIntel Corporation 82801IB (ICH9) 2 port SATA IDE Controller\n64 bit 2.6.31-22-server (Ubuntu karmic), kernel option elevator=deadline\nsysctl options besides increasing shm:\nfs.file-max=327679\nfs.aio-max-nr=3145728\nvm.swappiness=0\nvm.dirty_background_ratio = 3\nvm.dirty_expire_centisecs = 500\nvm.dirty_writeback_centisecs = 100\nvm.dirty_ratio = 15\n\nFilesystem on SSD with postgresql data: ext3 mounted with \nnoatime,nodiratime,relatime\nPostgresql cluster: did initdb with C locale. Data and pg_xlog together \non the same ext3 filesystem.\n\nChanged in postgresql.conf: settings with pgtune for OLTP and 15 connections\nmaintenance_work_mem = 480MB # pgtune wizard 2010-07-25\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-07-25\neffective_cache_size = 5632MB # pgtune wizard 2010-07-25\nwork_mem = 512MB # pgtune wizard 2010-07-25\nwal_buffers = 8MB # pgtune wizard 2010-07-25\ncheckpoint_segments = 31 # pgtune said 16 here\nshared_buffers = 1920MB # pgtune wizard 2010-07-25\nmax_connections = 15 # pgtune wizard 2010-07-25\n\nInitialized with scale 800 with is about 12GB. I especially went beyond \nan in RAM size for this machine (that would be ~ 5GB), so random reads \nwould weigh in the result. 
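As a side note on the block-size arithmetic above: those MB/s figures are just commit size times commit rate, which makes them easy to redo for other sizes. A trivial helper (Python, rates taken from the message above):

# throughput = bytes per commit * commits per second
def mb_per_s(kb_per_commit, commits_per_s):
    return kb_per_commit * commits_per_s / 1024.0

for kb, rate in [(4, 1200), (8, 1200), (16, 1200), (64, 800), (128, 600)]:
    print("%4d kB x %4d/s = %5.1f MB/s" % (kb, rate, mb_per_s(kb, rate)))

For what it is worth, the 64 kB row comes out to 50 MB/s rather than the 37.5 quoted, which looks like a small arithmetic slip; the other rows match.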
Then let pgbench run the tcp benchmark with \n-M prepared, 10 clients and -T 3600 (one hour) and 10 clients, after \nthat loaded the logfile in a db and did some queries. Then realized the \npgbench result page was not in screen buffer anymore so I cannot copy it \nhere, but hey, those can be edited as well right ;-)\n\nselect count(*),count(*)/3600,avg(time),stddev(time) from log;\n count | ?column? | avg | stddev \n---------+----------+-----------------------+----------------\n 4939212 | 1372 | 7282.8581978258880161 | 11253.96967962\n(1 row)\n\nJudging from the latencys in the logfiles I did not experience serious \nlagging (time is in microseconds):\n\nselect * from log order by time desc limit 3;\n client_id | tx_no | time | file_no | epoch | time_us\n-----------+-------+---------+---------+------------+---------\n 3 | 33100 | 1229503 | 0 | 1280060345 | 866650\n 9 | 39990 | 1077519 | 0 | 1280060345 | 858702\n 2 | 55323 | 1071060 | 0 | 1280060519 | 750861\n(3 rows)\n\nselect * from log order by time desc limit 3 OFFSET 1000;\n client_id | tx_no | time | file_no | epoch | time_us\n-----------+--------+--------+---------+------------+---------\n 5 | 262466 | 245953 | 0 | 1280062074 | 513789\n 1 | 267519 | 245867 | 0 | 1280062074 | 513301\n 7 | 273662 | 245532 | 0 | 1280062078 | 378932\n(3 rows)\n\nselect * from log order by time desc limit 3 OFFSET 10000;\n client_id | tx_no | time | file_no | epoch | time_us\n-----------+--------+-------+---------+------------+---------\n 5 | 123011 | 82854 | 0 | 1280061036 | 743986\n 6 | 348967 | 82853 | 0 | 1280062687 | 776317\n 8 | 439789 | 82848 | 0 | 1280063109 | 552928\n(3 rows)\n\nThen I started pgbench again with the same setting, let it run for a few \nminutes and in another console did CHECKPOINT and then turned off power. \nAfter restarting, the database recovered without a problem.\n\nLOG: database system was interrupted; last known up at 2010-07-25 \n10:14:15 EDT\nLOG: database system was not properly shut down; automatic recovery in \nprogress\nLOG: redo starts at F/98008610\nLOG: record with zero length at F/A2BAC040\nLOG: redo done at F/A2BAC010\nLOG: last completed transaction was at log time 2010-07-25 \n10:14:16.151037-04\n\nregards,\nYeb Havinga\n", "msg_date": "Sun, 25 Jul 2010 20:35:11 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n>\n> 8GB DDR2 something..\n(lots of details removed)\n\nGraph of TPS at http://tinypic.com/r/b96aup/3 and latency at \nhttp://tinypic.com/r/x5e846/3\n\nThanks http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm for \nthe gnuplot and psql scripts!\n\n", "msg_date": "Sun, 25 Jul 2010 21:13:16 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Greg Smith wrote:\n>> Put it on ext3, toggle on noatime, and move on to testing. The \n>> overhead of the metadata writes is the least of the problems when \n>> doing write-heavy stuff on Linux.\n> I ran a pgbench run and power failure test during pgbench with a 3 \n> year old computer\n>\nOn the same config more tests.\n\nscale 10 read only and read/write tests. 
note: only 240 s.\n\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 10\nquery mode: prepared\nnumber of clients: 10\nduration: 240 s\nnumber of transactions actually processed: 8208115\ntps = 34197.109896 (including connections establishing)\ntps = 34200.658720 (excluding connections establishing)\n\nyeb@client45:~$ pgbench -c 10 -l -M prepared -T 240 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nquery mode: prepared\nnumber of clients: 10\nduration: 240 s\nnumber of transactions actually processed: 809271\ntps = 3371.147020 (including connections establishing)\ntps = 3371.518611 (excluding connections establishing)\n\n----------\nscale 300 (just fits in RAM) read only and read/write tests\n\npgbench -c 10 -M prepared -T 300 -S test\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 300\nquery mode: prepared\nnumber of clients: 10\nduration: 300 s\nnumber of transactions actually processed: 9219279\ntps = 30726.931095 (including connections establishing)\ntps = 30729.692823 (excluding connections establishing)\n\nThe test above doesn't really test the drive but shows the CPU/RAM limit.\n\npgbench -c 10 -l -M prepared -T 3600 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 300\nquery mode: prepared\nnumber of clients: 10\nduration: 3600 s\nnumber of transactions actually processed: 8838200\ntps = 2454.994217 (including connections establishing)\ntps = 2455.012480 (excluding connections establishing)\n\n------\nscale 2000\n\npgbench -c 10 -M prepared -T 300 -S test\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 2000\nquery mode: prepared\nnumber of clients: 10\nduration: 300 s\nnumber of transactions actually processed: 755772\ntps = 2518.547576 (including connections establishing)\ntps = 2518.762476 (excluding connections establishing)\n\nSo the test above tests the random seek performance. Iostat on the drive \nshowed a steady just over 4000 read io's/s:\navg-cpu: %user %nice %system %iowait %steal %idle\n 11.39 0.00 13.37 60.40 0.00 14.85\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 0.00 4171.00 0.00 60624.00 0.00 \n29.07 11.81 2.83 0.24 100.00\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\n\npgbench -c 10 -l -M prepared -T 24000 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 2000\nquery mode: prepared\nnumber of clients: 10\nduration: 24000 s\nnumber of transactions actually processed: 30815691\ntps = 1283.979098 (including connections establishing)\ntps = 1283.980446 (excluding connections establishing)\n\nNote the duration of several hours. No long waits occurred - of this \nlast test the latency png is at http://yfrog.com/f/0vlatencywp/ and the \nTPS graph at http://yfrog.com/f/b5tpsp/\n\nregards,\nYeb Havinga\n\n", "msg_date": "Mon, 26 Jul 2010 11:22:34 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Sun, 25 Jul 2010, Yeb Havinga wrote:\n> Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at \n> http://tinypic.com/r/x5e846/3\n\nDoes your latency graph really have milliseconds as the y axis? 
If so, \nthis device is really slow - some requests have a latency of more than a \nsecond!\n\nMatthew\n\n-- \n The early bird gets the worm, but the second mouse gets the cheese.\n", "msg_date": "Mon, 26 Jul 2010 10:47:14 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Sun, 25 Jul 2010, Yeb Havinga wrote:\n>> Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at \n>> http://tinypic.com/r/x5e846/3\n>\n> Does your latency graph really have milliseconds as the y axis?\nYes\n> If so, this device is really slow - some requests have a latency of \n> more than a second!\nI try to just give the facts. Please remember that particular graphs are \nfrom a read/write pgbench run on a bigger than RAM database that ran for \nsome time (so with checkpoints), on a *single* $435 50GB drive without \nBBU raid controller. Also, this is a picture with a few million points: \nthe ones above 200ms are perhaps a hundred and hence make up a very \nsmall fraction.\n\nSo far I'm pretty impressed with this drive. Lets be fair to OCZ and the \nSandForce guys and do not shoot from the hip things like \"really slow\", \nwithout that being backed by a graphed pgbench run together with it's \ncost, so we can compare numbers with numbers.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Mon, 26 Jul 2010 12:29:42 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Matthew Wakeling wrote:\n> Does your latency graph really have milliseconds as the y axis? If so, \n> this device is really slow - some requests have a latency of more than \n> a second!\n\nHave you tried that yourself? If you generate one of those with \nstandard hard drives and a BBWC under Linux, I expect you'll discover \nthose latencies to be >5 seconds long. I recently saw >100 *seconds* \nrunning a large pgbench test due to latency flushing things to disk, on \na system with 72GB of RAM. Takes a long time to flush >3GB of random \nI/O out to disk when the kernel will happily cache that many writes \nuntil checkpoint time.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 26 Jul 2010 11:28:02 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Please remember that particular graphs are from a read/write pgbench \n> run on a bigger than RAM database that ran for some time (so with \n> checkpoints), on a *single* $435 50GB drive without BBU raid controller.\n\nTo get similar *average* performance results you'd need to put about 4 \ndrives and a BBU into a server. The worst-case latency on that solution \nis pretty bad though, when a lot of random writes are queued up; I \nsuspect that's where the SSD will look much better.\n\nBy the way: if you want to run a lot more tests in an organized \nfashion, that's what http://github.com/gregs1104/pgbench-tools was \nwritten to do. 
That will spit out graphs by client and by scale showing \nhow sensitive the test results are to each.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 26 Jul 2010 11:34:00 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, 26 Jul 2010, Greg Smith wrote:\n> Matthew Wakeling wrote:\n>> Does your latency graph really have milliseconds as the y axis? If so, this \n>> device is really slow - some requests have a latency of more than a second!\n>\n> Have you tried that yourself? If you generate one of those with standard \n> hard drives and a BBWC under Linux, I expect you'll discover those latencies \n> to be >5 seconds long. I recently saw >100 *seconds* running a large pgbench \n> test due to latency flushing things to disk, on a system with 72GB of RAM. \n> Takes a long time to flush >3GB of random I/O out to disk when the kernel \n> will happily cache that many writes until checkpoint time.\n\nApologies, I was interpreting the graph as the latency of the device, not \nall the layers in-between as well. There isn't any indication in the email \nwith the graph as to what the test conditions or software are. Obviously \nif you factor in checkpoints and the OS writing out everything, then you \nwould have to expect some large latency operations. However, if the device \nitself behaved as in the graph, I would be most unhappy and send it back.\n\nYeb also made the point - there are far too many points on that graph to \nreally tell what the average latency is. It'd be instructive to have a few \nfigures, like \"only x% of requests took longer than y\".\n\nMatthew\n\n-- \n I wouldn't be so paranoid if you weren't all out to get me!!\n", "msg_date": "Mon, 26 Jul 2010 16:42:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Matthew Wakeling wrote:\n> Apologies, I was interpreting the graph as the latency of the device, \n> not all the layers in-between as well. There isn't any indication in \n> the email with the graph as to what the test conditions or software are.\nThat info was in the email preceding the graph mail, but I see now I \nforgot to mention it was a 8.4.4 postgres version.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Mon, 26 Jul 2010 18:26:30 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga <[email protected]> wrote:\n\n> Matthew Wakeling wrote:\n>\n>> Apologies, I was interpreting the graph as the latency of the device, not\n>> all the layers in-between as well. There isn't any indication in the email\n>> with the graph as to what the test conditions or software are.\n>>\n> That info was in the email preceding the graph mail, but I see now I forgot\n> to mention it was a 8.4.4 postgres version.\n>\n>\nSpeaking of the layers in-between, has this test been done with the ext3\njournal on a different device? Maybe the purpose is wrong for the SSD. 
Use\nthe SSD for the ext3 journal and the spindled drives for filesystem?\nAnother possibility is to use ext2 on the SSD.\n\nGreg\n\nOn Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga <[email protected]> wrote:\nMatthew Wakeling wrote:\n\nApologies, I was interpreting the graph as the latency of the device, not all the layers in-between as well. There isn't any indication in the email with the graph as to what the test conditions or software are.\n\nThat info was in the email preceding the graph mail, but I see now I forgot to mention it was a 8.4.4 postgres version.\n\nSpeaking of the layers in-between, has this test been done with the\next3 journal on a different device?  Maybe the purpose is wrong for the\nSSD.  Use the SSD for the ext3 journal and the spindled drives for\nfilesystem?  Another possibility is to use ext2 on the SSD.\n\nGreg", "msg_date": "Mon, 26 Jul 2010 10:26:45 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Matthew Wakeling wrote:\n> Yeb also made the point - there are far too many points on that graph \n> to really tell what the average latency is. It'd be instructive to \n> have a few figures, like \"only x% of requests took longer than y\".\n\nAverage latency is the inverse of TPS. So if the result is, say, 1200 \nTPS, that means the average latency is 1 / (1200 transactions/second) = \n0.83 milliseconds/transaction. The average TPS figure is normally on a \nmore useful scale as far as being able to compare them in ways that make \nsense to people.\n\npgbench-tools derives average, worst-case, and 90th percentile figures \nfor latency from the logs. I have 37MB worth of graphs from a system \nshowing how all this typically works for regular hard drives I've been \ngiven permission to publish; just need to find a place to host it at \ninternally and I'll make the whole stack available to the world. So far \nYeb's data is showing that a single SSD is competitive with a small \narray on average, but with better worst-case behavior than I'm used to \nseeing.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 26 Jul 2010 14:34:39 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Spiegelberg wrote:\n> Speaking of the layers in-between, has this test been done with the \n> ext3 journal on a different device? Maybe the purpose is wrong for \n> the SSD. Use the SSD for the ext3 journal and the spindled drives for \n> filesystem? \n\nThe main disk bottleneck on PostgreSQL databases are the random seeks \nfor reading and writing to the main data blocks. The journal \ninformation is practically noise in comparison--it barely matters \nbecause it's so much less difficult to keep up with. 
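On Matthew's point about quoting a few figures like "only x% of requests took longer than y": the per-transaction log files that pgbench -l writes are easy to crunch directly if you do not want to pull in pgbench-tools just for that. The columns are the same ones Yeb loaded into his log table earlier in the thread, with the third field being the per-transaction time in microseconds. A rough sketch in Python, assuming the default pgbench_log.* file names (they can differ by version):

import glob

times = []
for name in glob.glob("pgbench_log.*"):
    with open(name) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3:
                times.append(int(fields[2]))   # transaction time in microseconds

times.sort()
n = len(times)
if n == 0:
    raise SystemExit("no pgbench_log.* files found")
for pct in (50, 90, 99):
    idx = max(0, int(n * pct / 100.0) - 1)
    print("%d%% of transactions finished within %.2f ms" % (pct, times[idx] / 1000.0))
print("average %.2f ms, worst case %.2f ms"
      % (sum(times) / float(n) / 1000.0, times[-1] / 1000.0))

Run against the logs from a run like the scale 300 one earlier in the thread, that gives the kind of summary asked for without any plotting.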
This is why I \ndon't really find ext2 interesting either.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 26 Jul 2010 14:40:43 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> Yeb's data is showing that a single SSD is competitive with a\n> small array on average, but with better worst-case behavior than\n> I'm used to seeing.\n \nSo, how long before someone benchmarks a small array of SSDs? :-)\n \n-Kevin\n", "msg_date": "Mon, 26 Jul 2010 13:42:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith wrote:\n> Yeb Havinga wrote:\n>> Please remember that particular graphs are from a read/write pgbench \n>> run on a bigger than RAM database that ran for some time (so with \n>> checkpoints), on a *single* $435 50GB drive without BBU raid controller.\n>\n> To get similar *average* performance results you'd need to put about 4 \n> drives and a BBU into a server. The worst-case latency on that \n> solution is pretty bad though, when a lot of random writes are queued \n> up; I suspect that's where the SSD will look much better.\n>\n> By the way: if you want to run a lot more tests in an organized \n> fashion, that's what http://github.com/gregs1104/pgbench-tools was \n> written to do. That will spit out graphs by client and by scale \n> showing how sensitive the test results are to each.\nGot it, running the default config right now.\n\nWhen you say 'comparable to a small array' - could you give a ballpark \nfigure for 'small'?\n\nregards,\nYeb Havinga\n\nPS: Some update on the testing: I did some ext3,ext4,xfs,jfs and also \next2 tests on the just-in-memory read/write test. (scale 300) No real \nwinners or losers, though ext2 isn't really faster and the manual need \nfor fix (y) during boot makes it impractical in its standard \nconfiguration. I did some poweroff tests with barriers explicitily off \nin ext3, ext4 and xfs, still all recoveries went ok.\n\n", "msg_date": "Mon, 26 Jul 2010 21:42:03 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n>> To get similar *average* performance results you'd need to put about \n>> 4 drives and a BBU into a server. The \n>\nPlease forget this question, I now see it in the mail i'm replying to. \nSorry for the spam!\n\n-- Yeb\n\n", "msg_date": "Mon, 26 Jul 2010 21:43:45 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory \n> read/write test. (scale 300) No real winners or losers, though ext2 \n> isn't really faster and the manual need for fix (y) during boot makes \n> it impractical in its standard configuration.\n\nThat's what happens every time I try it too. The theoretical benefits \nof ext2 for hosting PostgreSQL just don't translate into significant \nperformance increases on database oriented tests, certainly not ones \nthat would justify the downside of having fsck issues come back again. 
\nGlad to see that holds true on this hardware too.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 26 Jul 2010 15:45:13 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Jul 26, 2010 at 12:40 PM, Greg Smith <[email protected]> wrote:\n> Greg Spiegelberg wrote:\n>>\n>> Speaking of the layers in-between, has this test been done with the ext3\n>> journal on a different device?  Maybe the purpose is wrong for the SSD.  Use\n>> the SSD for the ext3 journal and the spindled drives for filesystem?\n>\n> The main disk bottleneck on PostgreSQL databases are the random seeks for\n> reading and writing to the main data blocks.  The journal information is\n> practically noise in comparison--it barely matters because it's so much less\n> difficult to keep up with.  This is why I don't really find ext2 interesting\n> either.\n\nNote that SSDs aren't usually real fast at large sequential writes\nthough, so it might be worth putting pg_xlog on a spinning pair in a\nmirror and seeing how much, if any, the SSD drive speeds up when not\nhaving to do pg_xlog.\n", "msg_date": "Mon, 26 Jul 2010 13:47:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, 2010-07-26 at 14:34 -0400, Greg Smith wrote:\n> Matthew Wakeling wrote:\n> > Yeb also made the point - there are far too many points on that graph \n> > to really tell what the average latency is. It'd be instructive to \n> > have a few figures, like \"only x% of requests took longer than y\".\n> \n> Average latency is the inverse of TPS. So if the result is, say, 1200 \n> TPS, that means the average latency is 1 / (1200 transactions/second) = \n> 0.83 milliseconds/transaction. \n\nThis is probably only true if you run all transactions sequentially in\none connection? \n\nIf you run 10 parallel threads and get 1200 sec, the average transaction\ntime (latency?) is probably closer to 8.3 ms ?\n\n> The average TPS figure is normally on a \n> more useful scale as far as being able to compare them in ways that make \n> sense to people.\n> \n> pgbench-tools derives average, worst-case, and 90th percentile figures \n> for latency from the logs. I have 37MB worth of graphs from a system \n> showing how all this typically works for regular hard drives I've been \n> given permission to publish; just need to find a place to host it at \n> internally and I'll make the whole stack available to the world. So far \n> Yeb's data is showing that a single SSD is competitive with a small \n> array on average, but with better worst-case behavior than I'm used to \n> seeing.\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Mon, 26 Jul 2010 23:40:49 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith <[email protected]> wrote:\n\n> Yeb Havinga wrote:\n>\n>> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory\n>> read/write test. 
(scale 300) No real winners or losers, though ext2 isn't\n>> really faster and the manual need for fix (y) during boot makes it\n>> impractical in its standard configuration.\n>>\n>\n> That's what happens every time I try it too. The theoretical benefits of\n> ext2 for hosting PostgreSQL just don't translate into significant\n> performance increases on database oriented tests, certainly not ones that\n> would justify the downside of having fsck issues come back again. Glad to\n> see that holds true on this hardware too.\n>\n>\nI know I'm talking development now but is there a case for a pg_xlog block\ndevice to remove the file system overhead and guaranteeing your data is\nwritten sequentially every time?\n\nGreg\n\nOn Mon, Jul 26, 2010 at 1:45 PM, Greg Smith <[email protected]> wrote:\nYeb Havinga wrote:\n\nI did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory read/write test. (scale 300) No real winners or losers, though ext2 isn't really faster and the manual need for fix (y) during boot makes it impractical in its standard configuration.\n\n\nThat's what happens every time I try it too.  The theoretical benefits of ext2 for hosting PostgreSQL just don't translate into significant performance increases on database oriented tests, certainly not ones that would justify the downside of having fsck issues come back again.  Glad to see that holds true on this hardware too.\nI know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time?\nGreg", "msg_date": "Mon, 26 Jul 2010 15:23:20 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:\n> On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith <[email protected]> wrote:\n> > Yeb Havinga wrote:\n> >> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory\n> >> read/write test. (scale 300) No real winners or losers, though ext2 isn't\n> >> really faster and the manual need for fix (y) during boot makes it\n> >> impractical in its standard configuration.\n> >>\n> >\n> > That's what happens every time I try it too. The theoretical benefits of\n> > ext2 for hosting PostgreSQL just don't translate into significant\n> > performance increases on database oriented tests, certainly not ones that\n> > would justify the downside of having fsck issues come back again. Glad to\n> > see that holds true on this hardware too.\n> I know I'm talking development now but is there a case for a pg_xlog block\n> device to remove the file system overhead and guaranteeing your data is\n> written sequentially every time?\nFor one I doubt that its a relevant enough efficiency loss in\ncomparison with a significantly significantly complex implementation\n(for one you cant grow/shrink, for another you have to do more\ncomplex, hw-dependent things like rounding to hardware boundaries,\npage size etc to stay efficient) for another my experience is that at\na relatively low point XlogInsert gets to be the bottleneck - so I\ndon't see much point in improving at that low level (yet at least).\n\nWhere I would like to do some hw dependent measuring (because I see\nsignificant improvements there) would be prefetching for seqscan,\nindexscans et al. using blktrace... But I currently dont have the\ntime. 
And its another topic ;-)\n\nAndres\n", "msg_date": "Tue, 27 Jul 2010 00:28:04 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Spiegelberg wrote:\n> I know I'm talking development now but is there a case for a pg_xlog \n> block device to remove the file system overhead and guaranteeing your \n> data is written sequentially every time?\n\nIt's possible to set the PostgreSQL wal_sync_method parameter in the \ndatabase to open_datasync or open_sync, and if you have an operating \nsystem that supports direct writes it will use those and bypass things \nlike the OS write cache. That's close to what you're suggesting, \nsupposedly portable, and it does show some significant benefit when it's \nproperly supported. Problem has been, the synchronous writing code on \nLinux in particular hasn't ever worked right against ext3, and the \nPostgreSQL code doesn't make the right call at all on Solaris. So \nthere's two popular platforms that it just plain doesn't work on, even \nthough it should.\n\nWe've gotten reports that there are bleeding edge Linux kernel and \nlibrary versions available now that finally fix that issue, and that \nPostgreSQL automatically takes advantage of them when it's compiled on \none of them. But I'm not aware of any distribution that makes this easy \nto try out that's available yet, paint is still wet on the code I think.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 26 Jul 2010 19:00:55 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Jul 26, 2010 at 01:47:14PM -0600, Scott Marlowe wrote:\n>Note that SSDs aren't usually real fast at large sequential writes\n>though, so it might be worth putting pg_xlog on a spinning pair in a\n>mirror and seeing how much, if any, the SSD drive speeds up when not\n>having to do pg_xlog.\n\nxlog is also where I use ext2; it does bench faster for me in that \nconfig, and the fsck issues don't really exist because you're not in a \nsituation with a lot of files being created/removed.\n\nMike Stone\n", "msg_date": "Wed, 28 Jul 2010 08:21:52 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:\n>I know I'm talking development now but is there a case for a pg_xlog block\n>device to remove the file system overhead and guaranteeing your data is\n>written sequentially every time?\n\nIf you dedicate a partition to xlog, you already get that in practice \nwith no extra devlopment.\n\nMike Stone\n", "msg_date": "Wed, 28 Jul 2010 08:24:00 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Michael Stone wrote:\n> On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:\n>> I know I'm talking development now but is there a case for a pg_xlog \n>> block\n>> device to remove the file system overhead and guaranteeing your data is\n>> written sequentially every time?\n>\n> If you dedicate a partition to xlog, you already get that in practice \n> with no extra devlopment.\nDue to the LBA remapping of the SSD, I'm not sure of putting files that \nare sequentially written in a different partition (together 
with e.g. \ntables) would make a difference: in the end the SSD will have a set new \nblocks in it's buffer and somehow arrange them into sets of 128KB of \n256KB writes for the flash chips. See also \nhttp://www.anandtech.com/show/2899/2\n\nBut I ran out of ideas to test, so I'm going to test it anyway.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Wed, 28 Jul 2010 15:45:23 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Michael Stone wrote:\n>> On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:\n>>> I know I'm talking development now but is there a case for a pg_xlog \n>>> block\n>>> device to remove the file system overhead and guaranteeing your data is\n>>> written sequentially every time?\n>>\n>> If you dedicate a partition to xlog, you already get that in practice \n>> with no extra devlopment.\n> Due to the LBA remapping of the SSD, I'm not sure of putting files \n> that are sequentially written in a different partition (together with \n> e.g. tables) would make a difference: in the end the SSD will have a \n> set new blocks in it's buffer and somehow arrange them into sets of \n> 128KB of 256KB writes for the flash chips. See also \n> http://www.anandtech.com/show/2899/2\n>\n> But I ran out of ideas to test, so I'm going to test it anyway.\nSame machine config as mentioned before, with data and xlog on separate \npartitions, ext3 with barrier off (save on this SSD).\n\npgbench -c 10 -M prepared -T 3600 -l test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 300\nquery mode: prepared\nnumber of clients: 10\nduration: 3600 s\nnumber of transactions actually processed: 10856359\ntps = 3015.560252 (including connections establishing)\ntps = 3015.575739 (excluding connections establishing)\n\nThis is about 25% faster than data and xlog combined on the same filesystem.\n\nBelow is output from iostat -xk 1 -p /dev/sda, which shows each second \nper partition statistics.\nsda2 is data, sda3 is xlog In the third second a checkpoint seems to start.\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 63.50 0.00 30.50 2.50 0.00 3.50\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 6518.00 36.00 2211.00 148.00 35524.00 \n31.75 0.28 0.12 0.11 25.00\nsda1 0.00 2.00 0.00 5.00 0.00 636.00 \n254.40 0.03 6.00 2.00 1.00\nsda2 0.00 218.00 36.00 40.00 148.00 1032.00 \n31.05 0.00 0.00 0.00 0.00\nsda3 0.00 6298.00 0.00 2166.00 0.00 33856.00 \n31.26 0.25 0.12 0.12 25.00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 60.50 0.00 37.50 0.50 0.00 1.50\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 6514.00 33.00 2283.00 140.00 35188.00 \n30.51 0.32 0.14 0.13 29.00\nsda1 0.00 0.00 0.00 3.00 0.00 12.00 \n8.00 0.00 0.00 0.00 0.00\nsda2 0.00 0.00 33.00 2.00 140.00 8.00 \n8.46 0.03 0.86 0.29 1.00\nsda3 0.00 6514.00 0.00 2278.00 0.00 35168.00 \n30.88 0.29 0.13 0.13 29.00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 33.00 0.00 34.00 18.00 0.00 15.00\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 3782.00 7.00 7235.00 28.00 44068.00 \n12.18 69.52 9.46 0.09 62.00\nsda1 0.00 0.00 0.00 1.00 0.00 4.00 \n8.00 0.00 0.00 0.00 0.00\nsda2 0.00 322.00 7.00 6018.00 28.00 25360.00 \n8.43 69.22 11.33 0.08 47.00\nsda3 0.00 3460.00 0.00 1222.00 0.00 18728.00 \n30.65 0.30 0.25 0.25 30.00\n\navg-cpu: %user %nice %system %iowait 
%steal %idle\n 9.00 0.00 36.00 22.50 0.00 32.50\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 1079.00 3.00 11110.00 12.00 49060.00 \n8.83 120.64 10.95 0.08 86.00\nsda1 0.00 2.00 0.00 2.00 0.00 320.00 \n320.00 0.12 60.00 35.00 7.00\nsda2 0.00 30.00 3.00 10739.00 12.00 43076.00 \n8.02 120.49 11.30 0.08 83.00\nsda3 0.00 1047.00 0.00 363.00 0.00 5640.00 \n31.07 0.03 0.08 0.08 3.00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 62.00 0.00 31.00 2.00 0.00 5.00\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 6267.00 51.00 2493.00 208.00 35040.00 \n27.71 1.80 0.71 0.12 31.00\nsda1 0.00 0.00 0.00 3.00 0.00 12.00 \n8.00 0.00 0.00 0.00 0.00\nsda2 0.00 123.00 51.00 344.00 208.00 1868.00 \n10.51 1.50 3.80 0.10 4.00\nsda3 0.00 6144.00 0.00 2146.00 0.00 33160.00 \n30.90 0.30 0.14 0.14 30.00\n\n", "msg_date": "Wed, 28 Jul 2010 17:18:27 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga <[email protected]> wrote:\n\n> Yeb Havinga wrote:\n>\n>> Due to the LBA remapping of the SSD, I'm not sure of putting files that\n>> are sequentially written in a different partition (together with e.g.\n>> tables) would make a difference: in the end the SSD will have a set new\n>> blocks in it's buffer and somehow arrange them into sets of 128KB of 256KB\n>> writes for the flash chips. See also http://www.anandtech.com/show/2899/2\n>>\n>> But I ran out of ideas to test, so I'm going to test it anyway.\n>>\n> Same machine config as mentioned before, with data and xlog on separate\n> partitions, ext3 with barrier off (save on this SSD).\n>\n> pgbench -c 10 -M prepared -T 3600 -l test\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 300\n> query mode: prepared\n> number of clients: 10\n> duration: 3600 s\n> number of transactions actually processed: 10856359\n> tps = 3015.560252 (including connections establishing)\n> tps = 3015.575739 (excluding connections establishing)\n>\n> This is about 25% faster than data and xlog combined on the same\n> filesystem.\n>\n>\nThe trick may be in kjournald for which there is 1 for each ext3 journalled\nfile system. I learned back in Red Hat 4 pre U4 kernels there was a problem\nwith kjournald that would either cause 30 second hangs or lock up my server\ncompletely when pg_xlog and data were on the same file system plus a few\nother \"right\" things going on.\n\nGiven the multicore world we have today, I think it makes sense that\nmultiple ext3 file systems, and the kjournald's that service them, is faster\nthan a single combined file system.\n\n\nGreg\n\nOn Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga <[email protected]> wrote:\nYeb Havinga wrote:\nDue to the LBA remapping of the SSD, I'm not sure of putting files that are sequentially written in a different partition (together with e.g. tables) would make a difference: in the end the SSD will have a set new blocks in it's buffer and somehow arrange them into sets of 128KB of 256KB writes for the flash chips. 
See also http://www.anandtech.com/show/2899/2\n\nBut I ran out of ideas to test, so I'm going to test it anyway.\n\nSame machine config as mentioned before, with data and xlog on separate partitions, ext3 with barrier off (save on this SSD).\n\npgbench -c 10 -M prepared -T 3600 -l test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 300\nquery mode: prepared\nnumber of clients: 10\nduration: 3600 s\nnumber of transactions actually processed: 10856359\ntps = 3015.560252 (including connections establishing)\ntps = 3015.575739 (excluding connections establishing)\n\nThis is about 25% faster than data and xlog combined on the same filesystem.\nThe trick may be in kjournald for which there is 1 for each ext3 journalled file system.  I learned back in Red Hat 4 pre U4 kernels there was a problem with kjournald that would either cause 30 second hangs or lock up my server completely when pg_xlog and data were on the same file system plus a few other \"right\" things going on.\nGiven the multicore world we have today, I think it makes sense that multiple ext3 file systems, and the kjournald's that service them, is faster than a single combined file system.Greg", "msg_date": "Wed, 28 Jul 2010 20:07:14 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Wed, Jul 28, 2010 at 03:45:23PM +0200, Yeb Havinga wrote:\n>Due to the LBA remapping of the SSD, I'm not sure of putting files \n>that are sequentially written in a different partition (together with \n>e.g. tables) would make a difference: in the end the SSD will have a \n>set new blocks in it's buffer and somehow arrange them into sets of \n>128KB of 256KB writes for the flash chips. See also \n>http://www.anandtech.com/show/2899/2\n\nIt's not a question of the hardware side, it's the software. The xlog\nneeds to by synchronized, and the things the filesystem has to do to \nmake that happen penalize the non-xlog disk activity. That's why my \npreferred config is xlog on ext2, rest on xfs. That allows the \nsynchronous activity to happen with minimal overhead, while the parts \nthat benefit from having more data in flight can do that freely.\n\nMike Stone\n", "msg_date": "Thu, 29 Jul 2010 10:45:34 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith wrote:\n> Greg Smith wrote:\n>> Note that not all of the Sandforce drives include a capacitor; I hope \n>> you got one that does! I wasn't aware any of the SF drives with a \n>> capacitor on them were even shipping yet, all of the ones I'd seen \n>> were the chipset that doesn't include one still. Haven't checked in \n>> a few weeks though.\n>\n> Answer my own question here: the drive Yeb got was the brand spanking \n> new OCZ Vertex 2 Pro, selling for $649 at Newegg for example: \n> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227535 and \n> with the supercacitor listed right in the main production \n> specifications there. This is officially the first inexpensive \n> (relatively) SSD with a battery-backed write cache built into it. If \n> Yeb's test results prove it works as it's supposed to under \n> PostgreSQL, I'll be happy to finally have a moderately priced SSD I \n> can recommend to people for database use. 
And I fear I'll be out of \n> excuses to avoid buying one as a toy for my home system.\n>\nHello list,\n\nAfter a week testing I think I can answer the question above: does it \nwork like it's supposed to under PostgreSQL?\n\nYES\n\nThe drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro, \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16820227534\n\n* it is safe to mount filesystems with barrier off, since it has a \n'supercap backed cache'. That data is not lost is confirmed by a dozen \npower switch off tests while running either diskchecker.pl or pgbench.\n* the above implies its also safe to use this SSD with barriers, though \nthat will perform less, since this drive obeys write trough commands.\n* the highest pgbench tps number for the TPC-B test for a scale 300 \ndatabase (~5GB) I could get was over 6700. Judging from the iostat \naverage util of ~40% on the xlog partition, I believe that this number \nis limited by other factors than the SSD, like CPU, core count, core \nMHz, memory size/speed, 8.4 pgbench without threads. Unfortunately I \ndon't have a faster/more core machines available for testing right now.\n* pgbench numbers for a larger than RAM database, read only was over \n25000 tps (details are at the end of this post), during which iostat \nreported ~18500 read iops and 100% utilization.\n* pgbench max reported latencies are 20% of comparable BBWC setups.\n* how reliable it is over time, and how it performs over time I cannot \nsay, since I tested it only for a week.\n\nregards,\nYeb Havinga\n\nPS: ofcourse all claims I make here are without any warranty. All \ninformation in this mail is for reference purposes, I do not claim it is \nsuitable for your database setup.\n\nSome info on configuration:\nBOOT_IMAGE=/boot/vmlinuz-2.6.32-22-server elevator=deadline\nquad core AMD Phenom(tm) II X4 940 Processor on 3.0GHz\n16GB RAM 667MHz DDR2\n\nDisk/ filesystem settings.\nModel Family: OCZ Vertex SSD\nDevice Model: OCZ VERTEX2-PRO\nFirmware Version: 1.10\n\nhdparm: did not change standard settings: write cache is on, as well as \nreadahead.\n hdparm -AW /dev/sdc\n/dev/sdc:\n look-ahead = 1 (on)\n write-caching = 1 (on)\n\nUntuned ext4 filesystem.\nMount options\n/dev/sdc2 on /data type ext4 \n(rw,noatime,nodiratime,relatime,barrier=0,discard)\n/dev/sdc3 on /xlog type ext4 \n(rw,noatime,nodiratime,relatime,barrier=0,discard)\nNote the -o discard: this means use of the automatic SSD trimming on a \nnew linux kernel.\nAlso, per core per filesystem there now is a [ext4-dio-unwrit] process - \nwhich suggest something like 'directio'? 
I haven't investigated this any \nfurther.\n\nSysctl:\n(copied from a larger RAM database machine)\nkernel.core_uses_pid = 1\nfs.file-max = 327679\nnet.ipv4.ip_local_port_range = 1024 65000\nkernel.msgmni = 2878\nkernel.msgmax = 8192\nkernel.msgmnb = 65536\nkernel.sem = 250 32000 100 142\nkernel.shmmni = 4096\nkernel.sysrq = 1\nkernel.shmmax = 33794121728\nkernel.shmall = 16777216\nnet.core.rmem_default = 262144\nnet.core.rmem_max = 2097152\nnet.core.wmem_default = 262144\nnet.core.wmem_max = 262144\nfs.aio-max-nr = 3145728\nvm.swappiness = 0\nvm.dirty_background_ratio = 3\nvm.dirty_expire_centisecs = 500\nvm.dirty_writeback_centisecs = 100\nvm.dirty_ratio = 15\n\nPostgres settings:\n8.4.4\n--with-blocksize=4\nI saw about 10% increase in performance compared to 8KB blocksizes.\n\nPostgresql.conf:\nchanged from default config are:\nmaintenance_work_mem = 480MB # pgtune wizard 2010-07-25\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-07-25\neffective_cache_size = 5632MB # pgtune wizard 2010-07-25\nwork_mem = 512MB # pgtune wizard 2010-07-25\nwal_buffers = 8MB # pgtune wizard 2010-07-25\ncheckpoint_segments = 128 # pgtune said 16 here\nshared_buffers = 1920MB # pgtune wizard 2010-07-25\nmax_connections = 100\n\ninitdb with data on sda2 and xlog on sda3, C locale\n\nRead write test on ~5GB database:\n$ pgbench -v -c 20 -M prepared -T 3600 test\nstarting vacuum...end.\nstarting vacuum pgbench_accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 300\nquery mode: prepared\nnumber of clients: 20\nduration: 3600 s\nnumber of transactions actually processed: 24291875\ntps = 6747.665859 (including connections establishing)\ntps = 6747.721665 (excluding connections establishing)\n\nRead only test on larger than RAM ~23GB database (server has 16GB \nfysical RAM) :\n$ pgbench -c 20 -M prepared -T 300 -S test\nstarting vacuum...end.\ntransaction type: SELECT only\n*scaling factor: 1500*\nquery mode: prepared\nnumber of clients: 20\nduration: 300 s\nnumber of transactions actually processed: 7556469\ntps = 25184.056498 (including connections establishing)\ntps = 25186.336911 (excluding connections establishing)\n\nIOstat reports ~18500 reads/s and ~185 read MB/s during this read only \ntest on the data partition with 100% util.\n\n", "msg_date": "Fri, 30 Jul 2010 17:01:43 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "6700tps?! Wow......\n\nOk, I'm impressed. May wait a bit for prices to come somewhat, but that\nsounds like two of those are going in one of my production machines\n(Raid 1, of course)\n\nYeb Havinga wrote:\n> Greg Smith wrote:\n>> Greg Smith wrote:\n>>> Note that not all of the Sandforce drives include a capacitor; I\n>>> hope you got one that does! I wasn't aware any of the SF drives\n>>> with a capacitor on them were even shipping yet, all of the ones I'd\n>>> seen were the chipset that doesn't include one still. Haven't\n>>> checked in a few weeks though.\n>>\n>> Answer my own question here: the drive Yeb got was the brand\n>> spanking new OCZ Vertex 2 Pro, selling for $649 at Newegg for\n>> example: \n>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227535 and\n>> with the supercacitor listed right in the main production\n>> specifications there. This is officially the first inexpensive\n>> (relatively) SSD with a battery-backed write cache built into it. 
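To put those numbers next to the earlier latency discussion: with 20 clients kept busy, the average time per transaction is roughly clients divided by TPS, so the quoted runs work out to about 3 ms per TPC-B transaction and under 1 ms per read-only transaction. A quick check (Python, figures copied from Yeb's post):

# average per-transaction time implied by the reported TPS at 20 clients
for label, clients, tps in [("TPC-B, scale 300", 20, 6747.7),
                            ("SELECT only, scale 1500", 20, 25184.1)]:
    print("%-24s about %.2f ms per transaction" % (label, 1000.0 * clients / tps))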
If\n>> Yeb's test results prove it works as it's supposed to under\n>> PostgreSQL, I'll be happy to finally have a moderately priced SSD I\n>> can recommend to people for database use. And I fear I'll be out of\n>> excuses to avoid buying one as a toy for my home system.\n>>\n> Hello list,\n>\n> After a week testing I think I can answer the question above: does it\n> work like it's supposed to under PostgreSQL?\n>\n> YES\n>\n> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227534\n>\n> * it is safe to mount filesystems with barrier off, since it has a\n> 'supercap backed cache'. That data is not lost is confirmed by a dozen\n> power switch off tests while running either diskchecker.pl or pgbench.\n> * the above implies its also safe to use this SSD with barriers,\n> though that will perform less, since this drive obeys write trough\n> commands.\n> * the highest pgbench tps number for the TPC-B test for a scale 300\n> database (~5GB) I could get was over 6700. Judging from the iostat\n> average util of ~40% on the xlog partition, I believe that this number\n> is limited by other factors than the SSD, like CPU, core count, core\n> MHz, memory size/speed, 8.4 pgbench without threads. Unfortunately I\n> don't have a faster/more core machines available for testing right now.\n> * pgbench numbers for a larger than RAM database, read only was over\n> 25000 tps (details are at the end of this post), during which iostat\n> reported ~18500 read iops and 100% utilization.\n> * pgbench max reported latencies are 20% of comparable BBWC setups.\n> * how reliable it is over time, and how it performs over time I cannot\n> say, since I tested it only for a week.\n>\n> regards,\n> Yeb Havinga\n>\n> PS: ofcourse all claims I make here are without any warranty. All\n> information in this mail is for reference purposes, I do not claim it\n> is suitable for your database setup.\n>\n> Some info on configuration:\n> BOOT_IMAGE=/boot/vmlinuz-2.6.32-22-server elevator=deadline\n> quad core AMD Phenom(tm) II X4 940 Processor on 3.0GHz\n> 16GB RAM 667MHz DDR2\n>\n> Disk/ filesystem settings.\n> Model Family: OCZ Vertex SSD\n> Device Model: OCZ VERTEX2-PRO\n> Firmware Version: 1.10\n>\n> hdparm: did not change standard settings: write cache is on, as well\n> as readahead.\n> hdparm -AW /dev/sdc\n> /dev/sdc:\n> look-ahead = 1 (on)\n> write-caching = 1 (on)\n>\n> Untuned ext4 filesystem.\n> Mount options\n> /dev/sdc2 on /data type ext4\n> (rw,noatime,nodiratime,relatime,barrier=0,discard)\n> /dev/sdc3 on /xlog type ext4\n> (rw,noatime,nodiratime,relatime,barrier=0,discard)\n> Note the -o discard: this means use of the automatic SSD trimming on a\n> new linux kernel.\n> Also, per core per filesystem there now is a [ext4-dio-unwrit] process\n> - which suggest something like 'directio'? 
I haven't investigated this\n> any further.\n>\n> Sysctl:\n> (copied from a larger RAM database machine)\n> kernel.core_uses_pid = 1\n> fs.file-max = 327679\n> net.ipv4.ip_local_port_range = 1024 65000\n> kernel.msgmni = 2878\n> kernel.msgmax = 8192\n> kernel.msgmnb = 65536\n> kernel.sem = 250 32000 100 142\n> kernel.shmmni = 4096\n> kernel.sysrq = 1\n> kernel.shmmax = 33794121728\n> kernel.shmall = 16777216\n> net.core.rmem_default = 262144\n> net.core.rmem_max = 2097152\n> net.core.wmem_default = 262144\n> net.core.wmem_max = 262144\n> fs.aio-max-nr = 3145728\n> vm.swappiness = 0\n> vm.dirty_background_ratio = 3\n> vm.dirty_expire_centisecs = 500\n> vm.dirty_writeback_centisecs = 100\n> vm.dirty_ratio = 15\n>\n> Postgres settings:\n> 8.4.4\n> --with-blocksize=4\n> I saw about 10% increase in performance compared to 8KB blocksizes.\n>\n> Postgresql.conf:\n> changed from default config are:\n> maintenance_work_mem = 480MB # pgtune wizard 2010-07-25\n> checkpoint_completion_target = 0.9 # pgtune wizard 2010-07-25\n> effective_cache_size = 5632MB # pgtune wizard 2010-07-25\n> work_mem = 512MB # pgtune wizard 2010-07-25\n> wal_buffers = 8MB # pgtune wizard 2010-07-25\n> checkpoint_segments = 128 # pgtune said 16 here\n> shared_buffers = 1920MB # pgtune wizard 2010-07-25\n> max_connections = 100\n>\n> initdb with data on sda2 and xlog on sda3, C locale\n>\n> Read write test on ~5GB database:\n> $ pgbench -v -c 20 -M prepared -T 3600 test\n> starting vacuum...end.\n> starting vacuum pgbench_accounts...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 300\n> query mode: prepared\n> number of clients: 20\n> duration: 3600 s\n> number of transactions actually processed: 24291875\n> tps = 6747.665859 (including connections establishing)\n> tps = 6747.721665 (excluding connections establishing)\n>\n> Read only test on larger than RAM ~23GB database (server has 16GB\n> fysical RAM) :\n> $ pgbench -c 20 -M prepared -T 300 -S test\n> starting vacuum...end.\n> transaction type: SELECT only\n> *scaling factor: 1500*\n> query mode: prepared\n> number of clients: 20\n> duration: 300 s\n> number of transactions actually processed: 7556469\n> tps = 25184.056498 (including connections establishing)\n> tps = 25186.336911 (excluding connections establishing)\n>\n> IOstat reports ~18500 reads/s and ~185 read MB/s during this read only\n> test on the data partition with 100% util.\n>\n>", "msg_date": "Fri, 30 Jul 2010 10:14:43 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga <[email protected]> wrote:\n> After a week testing I think I can answer the question above: does it work\n> like it's supposed to under PostgreSQL?\n>\n> YES\n>\n> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227534\n>\n> * it is safe to mount filesystems with barrier off, since it has a 'supercap\n> backed cache'. That data is not lost is confirmed by a dozen power switch\n> off tests while running either diskchecker.pl or pgbench.\n> * the above implies its also safe to use this SSD with barriers, though that\n> will perform less, since this drive obeys write trough commands.\n> * the highest pgbench tps number for the TPC-B test for a scale 300 database\n> (~5GB) I could get was over 6700. 
Judging from the iostat average util of\n> ~40% on the xlog partition, I believe that this number is limited by other\n> factors than the SSD, like CPU, core count, core MHz, memory size/speed, 8.4\n> pgbench without threads. Unfortunately I don't have a faster/more core\n> machines available for testing right now.\n> * pgbench numbers for a larger than RAM database, read only was over 25000\n> tps (details are at the end of this post), during which iostat reported\n> ~18500 read iops and 100% utilization.\n> * pgbench max reported latencies are 20% of comparable BBWC setups.\n> * how reliable it is over time, and how it performs over time I cannot say,\n> since I tested it only for a week.\n\nThank you very much for posting this analysis. This has IMNSHO the\npotential to be a game changer. There are still some unanswered\nquestions in terms of how the drive wears, reliability, errors, and\nlifespan but 6700 tps off of a single 400$ device with decent fault\ntolerance is amazing (Intel, consider yourself upstaged). Ever since\nthe first samsung SSD hit the market I've felt the days of the\nspinning disk have been numbered. Being able to build a 100k tps\nserver on relatively inexpensive hardware without an entire rack full\nof drives is starting to look within reach.\n\n> Postgres settings:\n> 8.4.4\n> --with-blocksize=4\n> I saw about 10% increase in performance compared to 8KB blocksizes.\n\nThat's very interesting -- we need more testing in that department...\n\nregards (and thanks again)\nmerlin\n", "msg_date": "Mon, 2 Aug 2010 10:26:39 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Merlin Moncure wrote:\n> On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga <[email protected]> wrote:\n> \n>> Postgres settings:\n>> 8.4.4\n>> --with-blocksize=4\n>> I saw about 10% increase in performance compared to 8KB blocksizes.\n>> \n>\n> That's very interesting -- we need more testing in that department...\n> \nDefinately - that 10% number was on the old-first hardware (the core 2 \nE6600). After reading my post and the 185MBps with 18500 reads/s number \nI was a bit suspicious whether I did the tests on the new hardware with \n4K, because 185MBps / 18500 reads/s is ~10KB / read, so I thought thats \na lot closer to 8KB than 4KB. I checked with show block_size and it was \n4K. Then I redid the tests on the new server with the default 8KB \nblocksize and got about 4700 tps (TPC-B/300)... 67/47 = 1.47. So it \nseems that on newer hardware, the difference is larger than 10%.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Mon, 02 Aug 2010 17:00:27 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "\n> Definately - that 10% number was on the old-first hardware (the core 2\n> E6600). After reading my post and the 185MBps with 18500 reads/s number\n> I was a bit suspicious whether I did the tests on the new hardware with\n> 4K, because 185MBps / 18500 reads/s is ~10KB / read, so I thought thats\n> a lot closer to 8KB than 4KB. I checked with show block_size and it was\n> 4K. Then I redid the tests on the new server with the default 8KB\n> blocksize and got about 4700 tps (TPC-B/300)... 67/47 = 1.47. So it\n> seems that on newer hardware, the difference is larger than 10%.\n\nThat doesn't make much sense unless there's some special advantage to a\n4K blocksize with the hardware itself. 
Can you just do a basic\nfilesystem test (like Bonnie++) with a 4K vs. 8K blocksize?\n\nAlso, are you running your pgbench tests more than once, just to account\nfor randomizing?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 02 Aug 2010 15:21:41 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Josh Berkus wrote:\n> That doesn't make much sense unless there's some special advantage to a\n> 4K blocksize with the hardware itself.\n\nGiven that pgbench is always doing tiny updates to blocks, I wouldn't be \nsurprised if switching to smaller blocks helps it in a lot of situations \nif one went looking for them. Also, as you point out, pgbench runtime \nvaries around wildly enough that 10% would need more investigation to \nreally prove that means something. But I think Yeb has done plenty of \ninvestigation into the most interesting part here, the durability claims. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 02 Aug 2010 20:07:43 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith <[email protected]> wrote:\n> Josh Berkus wrote:\n>>\n>> That doesn't make much sense unless there's some special advantage to a\n>> 4K blocksize with the hardware itself.\n>\n> Given that pgbench is always doing tiny updates to blocks, I wouldn't be\n> surprised if switching to smaller blocks helps it in a lot of situations if\n> one went looking for them.  Also, as you point out, pgbench runtime varies\n> around wildly enough that 10% would need more investigation to really prove\n> that means something.  But I think Yeb has done plenty of investigation into\n> the most interesting part here, the durability claims.\n\nRunning the tests for longer helps a lot on reducing the noisy\nresults. Also letting them runs longer means that the background\nwriter and autovacuum start getting involved, so the test becomes\nsomewhat more realistic.\n", "msg_date": "Mon, 2 Aug 2010 18:12:41 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Scott Marlowe wrote:\n> On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith <[email protected]> wrote:\n> \n>> Josh Berkus wrote:\n>> \n>>> That doesn't make much sense unless there's some special advantage to a\n>>> 4K blocksize with the hardware itself.\n>>> \n>> Given that pgbench is always doing tiny updates to blocks, I wouldn't be\n>> surprised if switching to smaller blocks helps it in a lot of situations if\n>> one went looking for them. Also, as you point out, pgbench runtime varies\n>> around wildly enough that 10% would need more investigation to really prove\n>> that means something. But I think Yeb has done plenty of investigation into\n>> the most interesting part here, the durability claims.\n>> \nPlease note that the 10% was on a slower CPU. On a more recent CPU the \ndifference was 47%, based on tests that ran for an hour. 
That's why I \nabsolutely agree with Merlin Moncure that more testing in this \ndepartment is welcome, preferably by others since after all I could be \non the pay roll of OCZ :-)\n\nI looked a bit into Bonnie++ but fail to see how I could do a test that \nsomehow matches the PostgreSQL setup during the pgbench tests (db that \nfits in memory, so the test is actually how fast the ssd can capture \nsequential WAL writes and fsync without barriers, mixed with an \noccasional checkpoint with random write IO on another partition). Since \nthe WAL writing is the same for both block_size setups, I decided to \ncompare random writes to a file of 5GB with Oracle's Orion tool:\n\n=== 4K test summary ====\nORION VERSION 11.1.0.7.0\n\nCommandline:\n-testname test -run oltp -size_small 4 -size_large 1024 -write 100\n\nThis maps to this test:\nTest: test\nSmall IO size: 4 KB\nLarge IO size: 1024 KB\nIO Types: Small Random IOs, Large Random IOs\nSimulated Array Type: CONCAT\nWrite: 100%\nCache Size: Not Entered\nDuration for each Data Point: 60 seconds\nSmall Columns:, 1, 2, 3, 4, 5, 6, \n7, 8, 9, 10, 11, 12, 13, 14, 15, \n16, 17, 18, 19, 20\nLarge Columns:, 0\nTotal Data Points: 21\n\nName: /mnt/data/5gb Size: 5242880000\n1 FILEs found.\n\nMaximum Small IOPS=86883 @ Small=8 and Large=0\nMinimum Small Latency=0.01 @ Small=1 and Large=0\n\n=== 8K test summary ====\n\nORION VERSION 11.1.0.7.0\n\nCommandline:\n-testname test -run oltp -size_small 8 -size_large 1024 -write 100\n\nThis maps to this test:\nTest: test\nSmall IO size: 8 KB\nLarge IO size: 1024 KB\nIO Types: Small Random IOs, Large Random IOs\nSimulated Array Type: CONCAT\nWrite: 100%\nCache Size: Not Entered\nDuration for each Data Point: 60 seconds\nSmall Columns:, 1, 2, 3, 4, 5, 6, \n7, 8, 9, 10, 11, 12, 13, 14, 15, \n16, 17, 18, 19, 20\nLarge Columns:, 0\nTotal Data Points: 21\n\nName: /mnt/data/5gb Size: 5242880000\n1 FILEs found.\n\nMaximum Small IOPS=48798 @ Small=11 and Large=0\nMinimum Small Latency=0.02 @ Small=1 and Large=0\n> Running the tests for longer helps a lot on reducing the noisy\n> results. Also letting them runs longer means that the background\n> writer and autovacuum start getting involved, so the test becomes\n> somewhat more realistic.\n> \nYes, that's why I did a lot of the TPC-B tests with -T 3600 so they'd \nrun for an hour. (also the 4K vs 8K blocksize in postgres).\n\nregards,\nYeb Havinga\n\n", "msg_date": "Tue, 03 Aug 2010 10:40:29 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote:\n> se note that the 10% was on a slower CPU. On a more recent CPU the \n> difference was 47%, based on tests that ran for an hour.\n\nI am not surprised at all that reading and writing almost twice as much\ndata from/to disk takes 47% longer. 
If less time is spent on seeking the\namount of data starts playing bigger role.\n\n> That's why I \n> absolutely agree with Merlin Moncure that more testing in this \n> department is welcome, preferably by others since after all I could be \n> on the pay roll of OCZ :-)\n\n:)\n\n\n> I looked a bit into Bonnie++ but fail to see how I could do a test that \n> somehow matches the PostgreSQL setup during the pgbench tests (db that \n> fits in memory, \n\nDid it fit in shared_buffers, or system cache ?\n\nOnce we are in high tps ground, the time it takes to move pages between\nuserspace and system cache starts to play bigger role.\n\nI first noticed this several years ago, when doing a COPY to a large\ntable with indexes took noticably longer (2-3 times longer) when the\nindexes were in system cache than when they were in shared_buffers.\n\n> so the test is actually how fast the ssd can capture \n> sequential WAL writes and fsync without barriers, mixed with an \n> occasional checkpoint with random write IO on another partition). Since \n> the WAL writing is the same for both block_size setups, I decided to \n> compare random writes to a file of 5GB with Oracle's Orion tool:\n\nAre you sure that you are not writing full WAL pages ?\n\nDo you have any stats on how much WAL is written for 8kb and 4kb test\ncases ?\n\nAnd for other disk i/o during the tests ?\n\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Tue, 03 Aug 2010 12:08:42 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Hannu Krosing wrote:\n> Did it fit in shared_buffers, or system cache ?\n> \nDatabase was ~5GB, server has 16GB, shared buffers was set to 1920MB.\n> I first noticed this several years ago, when doing a COPY to a large\n> table with indexes took noticably longer (2-3 times longer) when the\n> indexes were in system cache than when they were in shared_buffers.\n> \nI read this as a hint: try increasing shared_buffers. I'll redo the \npgbench run with increased shared_buffers.\n>> so the test is actually how fast the ssd can capture \n>> sequential WAL writes and fsync without barriers, mixed with an \n>> occasional checkpoint with random write IO on another partition). Since \n>> the WAL writing is the same for both block_size setups, I decided to \n>> compare random writes to a file of 5GB with Oracle's Orion tool:\n>> \n>\n> Are you sure that you are not writing full WAL pages ?\n> \nI'm not sure I understand this question.\n> Do you have any stats on how much WAL is written for 8kb and 4kb test\n> cases ?\n> \nWould some iostat -xk 1 for each partition suffice?\n> And for other disk i/o during the tests ?\n> \nNot existent.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Tue, 03 Aug 2010 13:19:57 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Small IO size: 4 KB\n> Maximum Small IOPS=86883 @ Small=8 and Large=0\n>\n> Small IO size: 8 KB\n> Maximum Small IOPS=48798 @ Small=11 and Large=0\n\nConclusion: you can write 4KB blocks almost twice as fast as 8KB ones. \nThis is a useful observation about the effectiveness of the write cache \non the unit, but not really a surprise. On ideal hardware performance \nshould double if you halve the write size. 
I already wagered the \ndifference in pgbench results is caused by the same math.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 03 Aug 2010 10:02:19 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Yeb Havinga wrote:\n> Hannu Krosing wrote:\n>> Did it fit in shared_buffers, or system cache ?\n>> \n> Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.\n>> I first noticed this several years ago, when doing a COPY to a large\n>> table with indexes took noticably longer (2-3 times longer) when the\n>> indexes were in system cache than when they were in shared_buffers.\n>> \n> I read this as a hint: try increasing shared_buffers. I'll redo the \n> pgbench run with increased shared_buffers.\nShared buffers raised from 1920MB to 3520MB:\n\n pgbench -v -l -c 20 -M prepared -T 1800 test\nstarting vacuum...end.\nstarting vacuum pgbench_accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 300\nquery mode: prepared\nnumber of clients: 20\nduration: 1800 s\nnumber of transactions actually processed: 12971714\ntps = 7206.244065 (including connections establishing)\ntps = 7206.349947 (excluding connections establishing)\n\n:-)\n", "msg_date": "Tue, 03 Aug 2010 17:37:36 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "On Tue, Aug 3, 2010 at 11:37 AM, Yeb Havinga <[email protected]> wrote:\n> Yeb Havinga wrote:\n>>\n>> Hannu Krosing wrote:\n>>>\n>>> Did it fit in shared_buffers, or system cache ?\n>>>\n>>\n>> Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.\n>>>\n>>> I first noticed this several years ago, when doing a COPY to a large\n>>> table with indexes took noticably longer (2-3 times longer) when the\n>>> indexes were in system cache than when they were in shared_buffers.\n>>>\n>>\n>> I read this as a hint: try increasing shared_buffers. I'll redo the\n>> pgbench run with increased shared_buffers.\n>\n> Shared buffers raised from 1920MB to 3520MB:\n>\n> pgbench -v -l -c 20 -M prepared -T 1800 test\n> starting vacuum...end.\n> starting vacuum pgbench_accounts...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 300\n> query mode: prepared\n> number of clients: 20\n> duration: 1800 s\n> number of transactions actually processed: 12971714\n> tps = 7206.244065 (including connections establishing)\n> tps = 7206.349947 (excluding connections establishing)\n>\n> :-)\n\n1) what can we comparing this against (changing only the\nshared_buffers setting)?\n\n2) I've heard that some SSD have utilities that you can use to query\nthe write cycles in order to estimate lifespan. Does this one, and is\nit possible to publish the output (an approximation of the amount of\nwork along with this would be wonderful)?\n\nmerlin\n", "msg_date": "Tue, 3 Aug 2010 12:27:55 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "[email protected] (Greg Smith) writes:\n> Yeb Havinga wrote:\n>> * What filesystem to use on the SSD? To minimize writes and maximize\n>> chance for seeing errors I'd choose ext2 here. \n>\n> I don't consider there to be any reason to deploy any part of a\n> PostgreSQL database on ext2. 
The potential for downtime if the fsck\n> doesn't happen automatically far outweighs the minimal performance\n> advantage you'll actually see in real applications. \n\nAh, but if the goal is to try to torture the SSD as cruelly as possible,\nthese aren't necessarily downsides (important or otherwise).\n\nI don't think ext2 helps much in \"maximizing chances of seeing errors\"\nin notably useful ways, as the extra \"torture\" that takes place as part\nof the post-remount fsck isn't notably PG-relevant. (It's not obvious\nthat errors encountered would be readily mapped to issues relating to\nPostgreSQL.)\n\nI think the WAL-oriented test would be *way* more useful; inducing work\nwhose \"brokenness\" can be measured in one series of files in one\ndirectory should be way easier than trying to find changes across a\nwhole PG cluster. I don't expect the filesystem choice to be terribly\nsignificant to that.\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\n\"Heuristics (from the French heure, \"hour\") limit the amount of time\nspent executing something. [When using heuristics] it shouldn't take\nlonger than an hour to do something.\"\n", "msg_date": "Tue, 03 Aug 2010 12:58:30 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "[email protected] (\"Joshua D. Drake\") writes:\n> On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:\n>> Greg Smith wrote:\n>> > Note that not all of the Sandforce drives include a capacitor; I hope \n>> > you got one that does! I wasn't aware any of the SF drives with a \n>> > capacitor on them were even shipping yet, all of the ones I'd seen \n>> > were the chipset that doesn't include one still. Haven't checked in a \n>> > few weeks though.\n>> \n>> Answer my own question here: the drive Yeb got was the brand spanking \n>> new OCZ Vertex 2 Pro, selling for $649 at Newegg for example: \n>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227535 and with \n>> the supercacitor listed right in the main production specifications \n>> there. This is officially the first inexpensive (relatively) SSD with a \n>> battery-backed write cache built into it. If Yeb's test results prove \n>> it works as it's supposed to under PostgreSQL, I'll be happy to finally \n>> have a moderately priced SSD I can recommend to people for database \n>> use. And I fear I'll be out of excuses to avoid buying one as a toy for \n>> my home system.\n>\n> That is quite the toy. I can get 4 SATA-II with RAID Controller, with\n> battery backed cache, for the same price or less :P\n\nSure, but it:\n- Fits into a single slot\n- Is quiet\n- Consumes little power\n- Generates little heat\n- Is likely to be about as quick as the 4-drive array\n\nIt doesn't have the extra 4TB of storage, but if you're building big-ish\ndatabases, metrics have to change anyways.\n\nThis is a pretty slick answer for the small OLTP server.\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://linuxfinances.info/info/postgresql.html\nChaotic Evil means never having to say you're sorry.\n", "msg_date": "Tue, 03 Aug 2010 13:04:46 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "\nOn Jul 26, 2010, at 12:45 PM, Greg Smith wrote:\n\n> Yeb Havinga wrote:\n>> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory \n>> read/write test. 
(scale 300) No real winners or losers, though ext2 \n>> isn't really faster and the manual need for fix (y) during boot makes \n>> it impractical in its standard configuration.\n> \n> That's what happens every time I try it too. The theoretical benefits \n> of ext2 for hosting PostgreSQL just don't translate into significant \n> performance increases on database oriented tests, certainly not ones \n> that would justify the downside of having fsck issues come back again. \n> Glad to see that holds true on this hardware too.\n> \n\next2 is slow for many reasons. ext4 with no journal is significantly faster than ext2. ext4 with a journal is faster than ext2.\n\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 4 Aug 2010 12:43:02 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "\nOn Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:\n\n> On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga <[email protected]> wrote:\n>> After a week testing I think I can answer the question above: does it work\n>> like it's supposed to under PostgreSQL?\n>> \n>> YES\n>> \n>> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,\n>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227534\n>> \n>> * it is safe to mount filesystems with barrier off, since it has a 'supercap\n>> backed cache'. That data is not lost is confirmed by a dozen power switch\n>> off tests while running either diskchecker.pl or pgbench.\n>> * the above implies its also safe to use this SSD with barriers, though that\n>> will perform less, since this drive obeys write trough commands.\n>> * the highest pgbench tps number for the TPC-B test for a scale 300 database\n>> (~5GB) I could get was over 6700. Judging from the iostat average util of\n>> ~40% on the xlog partition, I believe that this number is limited by other\n>> factors than the SSD, like CPU, core count, core MHz, memory size/speed, 8.4\n>> pgbench without threads. Unfortunately I don't have a faster/more core\n>> machines available for testing right now.\n>> * pgbench numbers for a larger than RAM database, read only was over 25000\n>> tps (details are at the end of this post), during which iostat reported\n>> ~18500 read iops and 100% utilization.\n>> * pgbench max reported latencies are 20% of comparable BBWC setups.\n>> * how reliable it is over time, and how it performs over time I cannot say,\n>> since I tested it only for a week.\n> \n> Thank you very much for posting this analysis. This has IMNSHO the\n> potential to be a game changer. There are still some unanswered\n> questions in terms of how the drive wears, reliability, errors, and\n> lifespan but 6700 tps off of a single 400$ device with decent fault\n> tolerance is amazing (Intel, consider yourself upstaged). Ever since\n> the first samsung SSD hit the market I've felt the days of the\n> spinning disk have been numbered. Being able to build a 100k tps\n> server on relatively inexpensive hardware without an entire rack full\n> of drives is starting to look within reach.\n\nIntel's next gen 'enterprise' SSD's are due out later this year. 
I have heard from those with access to to test samples that they really like them -- these people rejected the previous versions because of the data loss on power failure.\n\nSo, hopefully there will be some interesting competition later this year in the medium price range enterprise ssd market.\n\n> \n>> Postgres settings:\n>> 8.4.4\n>> --with-blocksize=4\n>> I saw about 10% increase in performance compared to 8KB blocksizes.\n> \n> That's very interesting -- we need more testing in that department...\n> \n> regards (and thanks again)\n> merlin\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 4 Aug 2010 12:49:34 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "\nOn Aug 3, 2010, at 9:27 AM, Merlin Moncure wrote:\n> \n> 2) I've heard that some SSD have utilities that you can use to query\n> the write cycles in order to estimate lifespan. Does this one, and is\n> it possible to publish the output (an approximation of the amount of\n> work along with this would be wonderful)?\n> \n\nOn the intel drives, its available via SMART. Plenty of hits on how to read the data from google. Sandforce drives probably have it exposed via SMART as well.\n\nI have had over 50 X25-M's (80GB G1's) in production for 22 months that write ~100GB a day and SMART reports they have 78% of their write cycles left. Plus, when it dies from usage it supposedly enters a read-only state. (these only have recoverable data so data loss on power failure is not a concern for me).\n\nSo if Sandforce has low write amplification like Intel (they claim to be better) longevity should be fine.\n\n> merlin\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 4 Aug 2010 12:58:39 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": " On 10-08-04 03:49 PM, Scott Carey wrote:\n> On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:\n>\n>> On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga<[email protected]> wrote:\n>>> After a week testing I think I can answer the question above: does it work\n>>> like it's supposed to under PostgreSQL?\n>>>\n>>> YES\n>>>\n>>> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,\n>>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227534\n>>>\n>>> * it is safe to mount filesystems with barrier off, since it has a 'supercap\n>>> backed cache'. That data is not lost is confirmed by a dozen power switch\n>>> off tests while running either diskchecker.pl or pgbench.\n>>> * the above implies its also safe to use this SSD with barriers, though that\n>>> will perform less, since this drive obeys write trough commands.\n>>> * the highest pgbench tps number for the TPC-B test for a scale 300 database\n>>> (~5GB) I could get was over 6700. Judging from the iostat average util of\n>>> ~40% on the xlog partition, I believe that this number is limited by other\n>>> factors than the SSD, like CPU, core count, core MHz, memory size/speed, 8.4\n>>> pgbench without threads. 
Unfortunately I don't have a faster/more core\n>>> machines available for testing right now.\n>>> * pgbench numbers for a larger than RAM database, read only was over 25000\n>>> tps (details are at the end of this post), during which iostat reported\n>>> ~18500 read iops and 100% utilization.\n>>> * pgbench max reported latencies are 20% of comparable BBWC setups.\n>>> * how reliable it is over time, and how it performs over time I cannot say,\n>>> since I tested it only for a week.\n>> Thank you very much for posting this analysis. This has IMNSHO the\n>> potential to be a game changer. There are still some unanswered\n>> questions in terms of how the drive wears, reliability, errors, and\n>> lifespan but 6700 tps off of a single 400$ device with decent fault\n>> tolerance is amazing (Intel, consider yourself upstaged). Ever since\n>> the first samsung SSD hit the market I've felt the days of the\n>> spinning disk have been numbered. Being able to build a 100k tps\n>> server on relatively inexpensive hardware without an entire rack full\n>> of drives is starting to look within reach.\n> Intel's next gen 'enterprise' SSD's are due out later this year. I have heard from those with access to to test samples that they really like them -- these people rejected the previous versions because of the data loss on power failure.\n>\n> So, hopefully there will be some interesting competition later this year in the medium price range enterprise ssd market.\n>\n\nI'll be doing some testing on Enterprise grade SSD's this year. I'll \nalso be looking at some hybrid storage products that use as SSD's as \naccelerators mixed with lower cost storage.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Thu, 05 Aug 2010 08:40:37 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" }, { "msg_contents": "Greg Smith wrote:\n> > * How to test for power failure?\n> \n> I've had good results using one of the early programs used to \n> investigate this class of problems: \n> http://brad.livejournal.com/2116715.html?page=2\n\nFYI, this tool is mentioned in the Postgres documentation:\n\n\thttp://www.postgresql.org/docs/9.0/static/wal-reliability.html\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 11 Aug 2010 18:31:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing Sandforce SSD" } ]
[ { "msg_contents": " Hello,\n\nI have a simple table which has a cube column and a cube GiST index. The \ncolumn contains 3-dimensional points (not cubes or intervals). The \nproblem is that I'm getting very slow queries when I'm using the index. \nThe table has about 130,000 rows and is full-vacuumed after any \nupdates/inserts (it is updated only once every 24 hours).\n\n\nTable definition:\nCREATE TABLE picof.photo_colors\n(\n photo_id integer NOT NULL,\n color_percent real NOT NULL,\n lab_color picof.cube NOT NULL\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX photo_colors_index\n ON picof.photo_colors\n USING gist\n (lab_color);\n\n\nMy query:\nSELECT photo_id FROM photo_colors\nWHERE lab_color <@ cube_enlarge('0, 0, 0', 10, 3)\n\n\nExplain analyze:\n\"Bitmap Heap Scan on photo_colors (cost=13.40..421.55 rows=135 width=4) \n(actual time=7.958..15.493 rows=14313 loops=1)\"\n\" Recheck Cond: (lab_color <@ '(-10, -10, -10),(10, 10, 10)'::cube)\"\n\" -> Bitmap Index Scan on photo_colors_index (cost=0.00..13.36 \nrows=135 width=0) (actual time=7.556..7.556 rows=14313 loops=1)\"\n\" Index Cond: (lab_color <@ '(-10, -10, -10),(10, 10, 10)'::cube)\"\n\"Total runtime: 16.849 ms\"\n(Executed in PostgreSQL 8.4.4 on Windows and CentOS - same query plan)\n\n\nNow, while it might not seem much, this is part of a bigger query in \nwhich several such subqueries are being joined. The cost really adds up.\n\nMy question is: Why is it doing a Bitmap Heap Scan / Recheck Cond? I've \nexecuted dozens of such queries and not once did the rechecking remove \nany rows. Is there any way to disable it, or do you have any other \nsuggestions for optimizations (because I'm all out of ideas)?\n\nThank you in advance!\n\n---\nLiviu Mirea\n", "msg_date": "Sun, 25 Jul 2010 13:32:49 +0300", "msg_from": "Liviu Mirea-Ghiban <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query using the Cube contrib module." }, { "msg_contents": "Liviu Mirea-Ghiban wrote:\n>\n> My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond? \n> I've executed dozens of such queries and not once did the rechecking \n> remove any rows. Is there any way to disable it, or do you have any \n> other suggestions for optimizations (because I'm all out of ideas)?\nIt's probably because the index nodes store data values with a lossy \ncompression, which means that the index scan returns more rows than \nwanted, and that in turn is filtered out by the rescanning. See the \ncomments for the 'RECHECK' parameter of CREATE OPERATOR CLASS \n(http://www.postgresql.org/docs/8.4/static/sql-createopclass.html). Its \nunwise to alter this behaviour without taking a look/modifying the \nunderlying index implementation. The gist index scann part could perhaps \nbe made a bit faster by using a smaller blocksize, but I'm not sure if \nor how the recheck part can be improved. Maybe rewriting the top query \nto not do bitmap heap scans in subqueries or inner loops?\n\nregards,\nYeb Havinga\n", "msg_date": "Tue, 27 Jul 2010 22:06:16 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query using the Cube contrib module." }, { "msg_contents": "Yeb Havinga <[email protected]> writes:\n> Liviu Mirea-Ghiban wrote:\n>> My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond? \n>> I've executed dozens of such queries and not once did the rechecking \n>> remove any rows. 
Is there any way to disable it, or do you have any \n>> other suggestions for optimizations (because I'm all out of ideas)?\n\n> It's probably because the index nodes store data values with a lossy \n> compression, which means that the index scan returns more rows than \n> wanted, and that in turn is filtered out by the rescanning.\n\nThe recheck expression is only executed if the index reports that it's\nnot executed the search exactly. If you don't see any difference\nbetween the indexscan and bitmapscan output counts, it's probably\nbecause the index can do the case exactly, so the recheck expression\nisn't really getting used. The planner has to include the expression\nin the plan anyway, because the decision about lossiness is not known\nuntil runtime. But it's not costing any runtime.\n\nThe OP is mistaken to think there's anything wrong with this plan choice\n---- more than likely, it's the best available plan. The reason there's\na significant gap between the indexscan runtime and the bitmapscan\nruntime is that that's the cost of going and actually fetching all those\nrows from the table. The only way to fix that is to buy a faster disk\nor get more RAM so that more of the table can be held in memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 2010 18:18:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query using the Cube contrib module. " } ]
[ { "msg_contents": "Hello,\n\nI've found strange problem in my database (8.4.4, but also 9.0beta3,\ndefault postgresql.conf, shared_buffers raised to 256MB).\n\nEXPLAIN ANALYZE SELECT ...\nTotal runtime: 4.782 ms\nTime: 25,970 ms\n\nSELECT ...\n...\n(21 rows)\n\nTime: 23,042 ms\n\nTest done in psql connected by socket to server (same host, using\n\\timing to get runtime).\n\nDoes big difference in \"Total runtime\" and \"Time\" is normal?\n\nI've notice, that removing one index (not used in query,ocache_*_ukey,\nused by me only to have data integrity) gives me:\n\nEXPLAIN ANALYZE SELECT ...\nTotal runtime: 3.439 ms\nTime: 13,324 ms\n\nWhy such big difference in timing (vs. query with this index)?\n\nQuery is:\nselect oc_h_id,oc_duration,SUM(oc_count) FROM ocache WHERE\noc_date_from >= '2010-07-22'::date AND oc_date_from >=\n'2010-07-24'::date AND oc_h_id =\nANY('{\"32842\",\"3095\",\"27929\",\"2229\",\"22769\",\"3098\",\"33433\",\"22559\",\"226\",\"2130\",\"226\",\"2130\",\"2229\",\"3095\",\"3098\",\"22559\",\"22769\",\"27929\",\"32842\",\"33433\"}'::int[])\nGROUP BY oc_h_id, oc_duration;\n\nEXPLAIN ANALYZE is:\n\n HashAggregate (cost=42060.58..42095.53 rows=2796 width=12) (actual\ntime=4.357..4.368 rows=21 loops=1)\n -> Append (cost=0.00..41850.91 rows=27955 width=12) (actual\ntime=0.432..3.928 rows=1439 loops=1)\n -> Seq Scan on ocache (cost=0.00..17.20 rows=6 width=12)\n(actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Bitmap Heap Scan on ocache_2010_7 ocache\n(cost=357.41..8117.29 rows=5466 width=12) (actual time=0.430..0.582\nrows=196 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2010_7_oc_h_id_key\n(cost=0.00..356.04 rows=16397 width=0) (actual time=0.174..0.174\nrows=1156 loops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2010_8 ocache\n(cost=370.91..9067.15 rows=6060 width=12) (actual time=0.175..0.615\nrows=562 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2010_8_oc_h_id_key\n(cost=0.00..369.39 rows=18181 width=0) (actual time=0.156..0.156\nrows=1124 loops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2010_9 ocache\n(cost=389.47..9891.79 rows=6703 width=12) (actual time=0.158..0.513\nrows=448 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2010_9_oc_h_id_key\n(cost=0.00..387.80 rows=20108 width=0) (actual time=0.140..0.140\nrows=896 loops=1)\n 
Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2010_10 ocache\n(cost=268.42..6735.90 rows=4721 width=12) (actual time=0.107..0.300\nrows=229 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2010_10_oc_h_id_key\n(cost=0.00..267.24 rows=14162 width=0) (actual time=0.096..0.096\nrows=458 loops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2010_11 ocache\n(cost=139.48..3340.84 rows=2395 width=12) (actual time=0.046..0.047\nrows=4 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2010_11_oc_h_id_key\n(cost=0.00..138.88 rows=7184 width=0) (actual time=0.040..0.040 rows=8\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2010_12 ocache\n(cost=108.78..1766.50 rows=1223 width=12) (actual time=0.041..0.041\nrows=0 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2010_12_oc_h_id_key\n(cost=0.00..108.48 rows=3668 width=0) (actual time=0.040..0.040 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_1 ocache\n(cost=70.63..432.15 rows=246 width=12) (actual time=0.036..0.036\nrows=0 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_1_oc_h_id_key\n(cost=0.00..70.57 rows=738 width=0) (actual time=0.035..0.035 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_2 ocache\n(cost=65.72..368.20 rows=204 width=12) (actual time=0.038..0.038\nrows=0 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_2_oc_h_id_key\n(cost=0.00..65.67 rows=612 width=0) (actual time=0.038..0.038 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_3 
ocache\n(cost=60.36..290.04 rows=147 width=12) (actual time=0.037..0.037\nrows=0 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_3_oc_h_id_key\n(cost=0.00..60.32 rows=442 width=0) (actual time=0.036..0.036 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_4 ocache\n(cost=59.75..243.87 rows=118 width=12) (actual time=0.034..0.034\nrows=0 loops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_4_oc_h_id_key\n(cost=0.00..59.72 rows=353 width=0) (actual time=0.033..0.033 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_5 ocache\n(cost=54.99..190.27 rows=86 width=12) (actual time=0.032..0.032 rows=0\nloops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_5_oc_h_id_key\n(cost=0.00..54.97 rows=257 width=0) (actual time=0.031..0.031 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_6 ocache\n(cost=32.64..182.12 rows=80 width=12) (actual time=1.299..1.299 rows=0\nloops=1)\n Recheck Cond: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n Filter: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Index Scan on ocache_2011_6_ukey\n(cost=0.00..32.62 rows=837 width=0) (actual time=0.224..0.224\nrows=2510 loops=1)\n Index Cond: ((oc_date_from >= '2010-07-22'::date)\nAND (oc_date_from >= '2010-07-24'::date))\n -> Bitmap Heap Scan on ocache_2011_7 ocache\n(cost=55.15..195.99 rows=90 width=12) (actual time=0.033..0.033 rows=0\nloops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_7_oc_h_id_key\n(cost=0.00..55.12 rows=271 width=0) (actual time=0.033..0.033 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_8 ocache\n(cost=54.99..192.43 rows=87 width=12) (actual time=0.033..0.033 rows=0\nloops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= 
'2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_8_oc_h_id_key\n(cost=0.00..54.97 rows=261 width=0) (actual time=0.033..0.033 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_9 ocache\n(cost=54.99..188.23 rows=85 width=12) (actual time=0.033..0.033 rows=0\nloops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_9_oc_h_id_key\n(cost=0.00..54.97 rows=256 width=0) (actual time=0.032..0.032 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Bitmap Heap Scan on ocache_2011_10 ocache\n(cost=54.84..183.72 rows=82 width=12) (actual time=0.032..0.032 rows=0\nloops=1)\n Recheck Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date))\n -> Bitmap Index Scan on ocache_2011_10_oc_h_id_key\n(cost=0.00..54.82 rows=247 width=0) (actual time=0.032..0.032 rows=0\nloops=1)\n Index Cond: (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[]))\n -> Seq Scan on ocache_2011_11 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2011_12 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_1 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_2 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_3 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_4 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 
rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_5 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_6 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_7 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_8 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_9 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_10 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_11 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2012_12 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_1 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_2 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n 
Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_3 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_4 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_5 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_6 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_7 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_8 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_9 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_10 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_11 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n -> Seq Scan on ocache_2013_12 ocache (cost=0.00..17.20\nrows=6 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: 
((oc_date_from >= '2010-07-22'::date) AND\n(oc_date_from >= '2010-07-24'::date) AND (oc_h_id = ANY\n('{32842,3095,27929,2229,22769,3098,33433,22559,226,2130,226,2130,2229,3095,3098,22559,22769,27929,32842,33433}'::integer[])))\n Total runtime: 4.725 ms\n(137 rows)\n\nParitions are rather small, some of them are empty:\n\n> select count(*) from ocache;\n count\n--------\n 907688\n(1 row)\n\n\n> \\d ocache\n Table \"public.ocache\"\n Column | Type | Modifiers\n-----------------------+-----------------------------+-----------\n oc_count | integer |\n oc_to_id | integer |\n oc_hc_ids | integer[] |\n oc_h_id | integer |\n oc_hg_id | integer |\n oc_hg_category | numeric(2,1) |\n oc_r_id | integer |\n oc_date_from | date |\n oc_date_to | date |\n oc_duration | integer |\n oc_transport | integer |\n oc_price_min | numeric |\n oc_price_max | numeric |\n oc_rc_ids | integer[] |\n oc_orc_ids | integer[] |\n oc_fc_ids | integer[] |\n oc_ofc_id_1 | integer |\n oc_ap_ids_from1 | integer[] |\n oc_bc_ids_from | integer[] |\n oc_obc_ids_from | integer[] |\n oc_o_ids | integer[] |\n oc_price_avg | numeric |\n oc_o_date_updated_min | timestamp without time zone |\n oc_o_date_updated_max | timestamp without time zone |\nNumber of child tables: 48 (Use \\d+ to list them.)\n\nData partitioned by month (oc_date_from column, tables created for\nyears: 2010, 2011, 2012, 2013), example child table:\n\n> \\d+ ocache_2010_12\n Table \"public.ocache_2010_12\"\n Column | Type | Modifiers |\nStorage | Description\n-----------------------+-----------------------------+-----------+----------+-------------\n oc_count | integer | | plain |\n oc_to_id | integer | | plain |\n oc_hc_ids | integer[] | | extended |\n oc_h_id | integer | | plain |\n oc_hg_id | integer | | plain |\n oc_hg_category | numeric(2,1) | | main |\n oc_r_id | integer | | plain |\n oc_date_from | date | | plain |\n oc_date_to | date | | plain |\n oc_duration | integer | | plain |\n oc_transport | integer | | plain |\n oc_price_min | numeric | | main |\n oc_price_max | numeric | | main |\n oc_rc_ids | integer[] | | extended |\n oc_orc_ids | integer[] | | extended |\n oc_fc_ids | integer[] | | extended |\n oc_ofc_id_1 | integer | | plain |\n oc_ap_ids_from1 | integer[] | | extended |\n oc_bc_ids_from | integer[] | | extended |\n oc_obc_ids_from | integer[] | | extended |\n oc_o_ids | integer[] | | extended |\n oc_price_avg | numeric | | main |\n oc_o_date_updated_min | timestamp without time zone | | plain |\n oc_o_date_updated_max | timestamp without time zone | | plain |\nIndexes:\n \"ocache_2010_12_oc_ap_ids_from1_key\" gist (oc_ap_ids_from1)\n \"ocache_2010_12_oc_h_id_key\" btree (oc_h_id)\n \"ocache_2010_12_oc_hg_id_key\" btree (oc_hg_id)\n \"ocache_2010_12_oc_obc_ids_from_key\" gist (oc_obc_ids_from)\n \"ocache_2010_12_oc_r_id_key\" btree (oc_r_id)\n \"ocache_2010_12_oc_to_id_key\" btree (oc_to_id)\nCheck constraints:\n \"ocache_2010_12_oc_date_from_check\" CHECK (oc_date_from >=\n'2010-12-01'::date AND oc_date_from <= '2011-01-01'::date)\nInherits: ocache\nHas OIDs: no\nOptions: fillfactor=80\n\n-- \nPiotr Gasidło\n", "msg_date": "Mon, 26 Jul 2010 10:35:43 +0200", "msg_from": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]>", "msg_from_op": true, "msg_subject": "Big difference in time returned by EXPLAIN ANALYZE SELECT ... AND\n\tSELECT ..." 
}, { "msg_contents": "W dniu 26 lipca 2010 10:35 użytkownik Piotr Gasidło\n<[email protected]> napisał:\n>> \\d+ ocache_2010_12\n>                              Table \"public.ocache_2010_12\"\n> Indexes:\n> (...)\n\nMissed index in listing:\n \"ocache_2010_12_ukey\" UNIQUE, btree (oc_date_from, oc_date_to,\noc_h_id, oc_transport, oc_ofc_id_1) WITH (fillfactor=80)\n\n-- \nPiotr Gasidło\n", "msg_date": "Mon, 26 Jul 2010 10:44:15 +0200", "msg_from": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE SELECT ... AND\n\tSELECT ..." }, { "msg_contents": "On 26/07/10 16:35, Piotr Gasidło wrote:\n> Hello,\n> \n> I've found strange problem in my database (8.4.4, but also 9.0beta3,\n> default postgresql.conf, shared_buffers raised to 256MB).\n> \n> EXPLAIN ANALYZE SELECT ...\n> Total runtime: 4.782 ms\n> Time: 25,970 ms\n> \n> SELECT ...\n> ...\n> (21 rows)\n> \n> Time: 23,042 ms\n> \n> Test done in psql connected by socket to server (same host, using\n> \\timing to get runtime).\n> \n> Does big difference in \"Total runtime\" and \"Time\" is normal?\n\nGiven that EXPLAIN ANALYZE doesn't transfer large rowsets to the client,\nit can't really be time taken to transfer the data, which is the usual\ndifference between 'explain analyze' timings and psql client-side timings.\n\nGiven that, I'm wondering if the difference in this case is planning\ntime. I can't really imagine the query planner taking 20 seconds (!!) to\nrun, though, no matter how horrifyingly complicated the query and table\nstructure were, unless there was something going wrong.\n\nAnother possibility, then, is that for some reason queries are being\ndelayed from starting or delayed before results are being returned, so\nthe server completes them in a short amount of time but it takes a while\nfor psql to find out they're finished.\n\nIn your position, at this point I'd be doing things like hooking a\ndebugger up to the postgres backend and interrupting its execution\nperiodically to see what it's up to while this query runs. I'd also be\nusing wireshark to look at network activity to see if there were any\nclues there. I'd be using \"top\", \"vmstat\" and \"iostat\" to examine\nsystem-level load if it was practical to leave the system otherwise\nidle, so I could see if CPU/memory/disk were in demand, and for how long.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 26 Jul 2010 17:15:12 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE\n\tSELECT ... AND \tSELECT ..." }, { "msg_contents": "26.07.10 12:15, Craig Ringer написав(ла):\n> On 26/07/10 16:35, Piotr Gasidło wrote:\n> \n>> Hello,\n>>\n>> I've found strange problem in my database (8.4.4, but also 9.0beta3,\n>> default postgresql.conf, shared_buffers raised to 256MB).\n>>\n>> EXPLAIN ANALYZE SELECT ...\n>> Total runtime: 4.782 ms\n>> Time: 25,970 ms\n>>\n>> SELECT ...\n>> ...\n>> (21 rows)\n>>\n>> Time: 23,042 ms\n>>\n>> Test done in psql connected by socket to server (same host, using\n>> \\timing to get runtime).\n>>\n>> Does big difference in \"Total runtime\" and \"Time\" is normal?\n>> \n> Given that EXPLAIN ANALYZE doesn't transfer large rowsets to the client,\n> it can't really be time taken to transfer the data, which is the usual\n> difference between 'explain analyze' timings and psql client-side timings.\n>\n> Given that, I'm wondering if the difference in this case is planning\n> time. 
I can't really imagine the query planner taking 20 seconds (!!) to\n> run, though, no matter how horrifyingly complicated the query and table\n> structure were, unless there was something going wrong.\n> \nActually it's 20ms, so I suspect your point about planning time is correct.\nPiotr: You can try preparing your statement and then analyzing execute \ntime to check if this is planning time.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Mon, 26 Jul 2010 12:25:18 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE\n\tSELECT ... AND \tSELECT ..." }, { "msg_contents": "On 26/07/10 17:25, Vitalii Tymchyshyn wrote:\n> 26.07.10 12:15, Craig Ringer написав(ла):\n>> On 26/07/10 16:35, Piotr Gasidło wrote:\n>> \n>>> Hello,\n>>>\n>>> I've found strange problem in my database (8.4.4, but also 9.0beta3,\n>>> default postgresql.conf, shared_buffers raised to 256MB).\n>>>\n>>> EXPLAIN ANALYZE SELECT ...\n>>> Total runtime: 4.782 ms\n>>> Time: 25,970 ms\n\n>> Given that, I'm wondering if the difference in this case is planning\n>> time. I can't really imagine the query planner taking 20 seconds (!!) to\n>> run, though, no matter how horrifyingly complicated the query and table\n>> structure were, unless there was something going wrong.\n>> \n> Actually it's 20ms, so I suspect your point about planning time is correct.\n\nOh, a commas-as-fraction-separator locale.\n\nThat makes sense. Thanks for the catch.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 26 Jul 2010 17:56:57 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE\n\tSELECT ... AND \tSELECT ..." }, { "msg_contents": "2010/7/26 Vitalii Tymchyshyn <[email protected]>:\n> 26.07.10 12:15, Craig Ringer написав(ла):\n> (...)\n> Piotr: You can try preparing your statement and then analyzing execute time\n> to check if this is planning time.\n\nYou are right.\n\nI've done simple PREPARE (without params, etc).\n\n> REPARE query as select oc_h_id,oc_duration,SUM(oc_count) FROM ocache WHERE oc_date_from >= '2010-07-22'::date AND oc_date_from >= '2010-07-24'::date AND oc_h_id = ANY('{\"32842\",\"3095\",\"27929\",\"2229\",\"22769\",\"3098\",\"33433\",\"22559\",\"226\",\"2130\",\"226\",\"2130\",\"2229\",\"3095\",\"3098\",\"22559\",\"22769\",\"27929\",\"32842\",\"33433\"}'::int[]) GROUP BY oc_h_id, oc_duration;\nPREPARE\nTime: 19,873 ms\n\n> EXPLAIN ANALYZE EXECUTE query;\n...\nTotal runtime: 3.237 ms\nTime: 5,118 ms\n\n> EXECUTE query;\n oc_h_id | oc_duration | sum\n---------+-------------+------\n 27929 | 7 | 546\n 3098 | 7 | 552\n 27929 | 14 | 512\n 3098 | 14 | 444\n 22769 | 14 | 984\n 32842 | 14 | 444\n 27929 | 22 | 4\n 27929 | 15 | 44\n 32842 | 7 | 552\n 22769 | 7 | 1356\n 2229 | 7 | 496\n 226 | 14 | 536\n 2130 | 7 | 536\n 2130 | 14 | 448\n 226 | 7 | 584\n 2229 | 14 | 400\n 33433 | 14 | 444\n 3095 | 7 | 552\n 33433 | 7 | 552\n 3095 | 14 | 444\n 27929 | 8 | 40\n(21 rows)\n\nTime: 3,494 ms\n\nThe time matches EXPLAIN ANALYZE runtime.\n\nCompared to not prepared query, its big difference!\n\n> select oc_h_id,oc_duration,SUM(oc_count) FROM ocache WHERE oc_date_from >= '2010-07-22'::date AND oc_date_from >= '2010-07-24'::date AND oc_h_id = ANY('{\"32842\",\"3095\",\"27929\",\"2229\",\"22769\",\"3098\",\"33433\",\"22559\",\"226\",\"2130\",\"226\",\"2130\",\"2229\",\"3095\",\"3098\",\"22559\",\"22769\",\"27929\",\"32842\",\"33433\"}'::int[]) GROUP BY oc_h_id, 
oc_duration;\n oc_h_id | oc_duration | sum\n---------+-------------+------\n 27929 | 7 | 546\n 3098 | 7 | 552\n 27929 | 14 | 512\n 3098 | 14 | 444\n 22769 | 14 | 984\n 32842 | 14 | 444\n 27929 | 22 | 4\n 27929 | 15 | 44\n 32842 | 7 | 552\n 22769 | 7 | 1356\n 2229 | 7 | 496\n 226 | 14 | 536\n 2130 | 7 | 536\n 2130 | 14 | 448\n 226 | 7 | 584\n 2229 | 14 | 400\n 33433 | 14 | 444\n 3095 | 7 | 552\n 33433 | 7 | 552\n 3095 | 14 | 444\n 27929 | 8 | 40\n(21 rows)\n\nTime: 22,571 ms\n\nOk. Is there any way to tune postgresql, to shorten planning time for\nsuch queries?\n\n-- \nPiotr Gasidło\n", "msg_date": "Mon, 26 Jul 2010 12:04:13 +0200", "msg_from": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE\n\tSELECT ... AND SELECT ..." }, { "msg_contents": "Piotr Gasidło wrote:\n>>>> EXPLAIN ANALYZE SELECT ...\n>>>> Total runtime: 4.782 ms\n>>>> Time: 25,970 ms\n\nVitalii Tymchyshyn wrote:\n>> Actually it's 20ms, so I suspect your point about planning time is correct.\n\nCraig Ringer wrote:\n> Oh, a commas-as-fraction-separator locale.\n>\n> That makes sense. Thanks for the catch.\n\nStrangely, the runtime is shown with a period for the separator, though.\n\n-- \nLew\n", "msg_date": "Mon, 26 Jul 2010 19:03:58 -0400", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE\n\tSELECT ... AND SELECT ..." }, { "msg_contents": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]> writes:\n> Ok. Is there any way to tune postgresql, to shorten planning time for\n> such queries?\n\nYou've got a ridiculously large number of partitions. Use fewer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 2010 00:19:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE SELECT ... AND\n\tSELECT ..." }, { "msg_contents": "27.07.10 02:03, Lew написав(ла):\n> Piotr Gasidło wrote:\n>>>>> EXPLAIN ANALYZE SELECT ...\n>>>>> Total runtime: 4.782 ms\n>>>>> Time: 25,970 ms\n>\n> Strangely, the runtime is shown with a period for the separator, though.\n>\nOne value is calculated on server by EXPLAIN ANALYZE command, another is \ncalculated by psql itself.\n\nBest regards, Vitalii Tymchyshyn\n\n", "msg_date": "Wed, 28 Jul 2010 14:07:08 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big difference in time returned by EXPLAIN ANALYZE\n\tSELECT ... AND SELECT ..." } ]
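
A minimal sketch of the parameterized form of the PREPARE workaround shown above; the statement name and the values passed to EXECUTE are illustrative, not taken from the thread. The point is that the roughly 20 ms spent planning across all of the ocache children is paid once at PREPARE time, and each later EXECUTE only pays execution cost. One assumption to note: with parameters the planner cannot use the date values for constraint exclusion at plan time, so the stored plan still enumerates every child table, just as the EXPLAIN ANALYZE output above does.

PREPARE ocache_counts (date, date, int[]) AS
SELECT oc_h_id, oc_duration, SUM(oc_count)
  FROM ocache
 WHERE oc_date_from >= $1
   AND oc_date_from >= $2
   AND oc_h_id = ANY ($3)
 GROUP BY oc_h_id, oc_duration;

-- planning happens once, here; each EXECUTE reuses the stored plan
EXECUTE ocache_counts('2010-07-22', '2010-07-24', '{226,2130,2229,3095,3098}');

DEALLOCATE ocache_counts;  -- optional, once the session is done with it
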
[ { "msg_contents": "There is a partitioned table with 2 partitions:\n\ndrop table if exists p cascade;\n\ncreate table p (\n id bigint not null primary key,\n ts timestamp);\n\ncreate table p_actual ( check (ts is null) ) inherits (p);\ncreate table p_historical ( check (ts is not null) ) inherits (p);\n\n-- I skipped the triggers and rules creation\n\ninsert into p (id, ts) values (1, '2000-01-01');\ninsert into p (id, ts) values (2, null);\ninsert into p (id, ts) values (3, '2001-01-01');\ninsert into p (id, ts) values (4, '2005-01-01');\n\nanalyze p;\nanalyze p_actual;\nanalyze p_historical;\n\nHere is the explain output for the query 'select * from p where ts is null'\n\nResult (cost=0.00..188.10 rows=10 width=16) (actual time=0.028..0.038 \nrows=1 loops=1)\n -> Append (cost=0.00..188.10 rows=10 width=16) (actual \ntime=0.023..0.029 rows=1 loops=1)\n -> Seq Scan on p (cost=0.00..187.00 rows=9 width=16) (actual \ntime=0.002..0.002 rows=0 loops=1)\n Filter: (ts IS NULL)\n -> Seq Scan on p_actual p (cost=0.00..1.10 rows=1 width=16) \n(actual time=0.014..0.016 rows=1 loops=1)\n Filter: (ts IS NULL)\nTotal runtime: 0.080 ms\n\nYou can notice that the optimizer expects 10 rows in the table p and as \na result of this assumption the whole query is estimated as 10 rows. \nWhether it will cause a performance impact further? pg_stats does not \ncontain any statistics on the table 'p'. Is this a cause of such behaviour?\nThe estimation is worse for some other queries, for example 'select * \nfrom p where ts is not null'\n\nResult (cost=0.00..188.30 rows=1764 width=16) (actual time=0.021..0.049 \nrows=3 loops=1)\n -> Append (cost=0.00..188.30 rows=1764 width=16) (actual \ntime=0.016..0.032 rows=3 loops=1)\n -> Seq Scan on p (cost=0.00..187.00 rows=1761 width=16) \n(actual time=0.003..0.003 rows=0 loops=1)\n Filter: (ts IS NOT NULL)\n -> Seq Scan on p_historical p (cost=0.00..1.30 rows=3 \nwidth=16) (actual time=0.008..0.015 rows=3 loops=1)\n Filter: (ts IS NOT NULL)\nTotal runtime: 0.095 ms\n\n", "msg_date": "Mon, 26 Jul 2010 17:47:00 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Explains of queries to partitioned tables" }, { "msg_contents": "On Mon, Jul 26, 2010 at 4:47 AM, Vlad Arkhipov <[email protected]> wrote:\n> There is a partitioned table with 2 partitions:\n>\n> drop table if exists p cascade;\n>\n> create table p (\n>  id bigint not null primary key,\n>  ts timestamp);\n>\n> create table p_actual ( check (ts is null) ) inherits (p);\n> create table p_historical ( check (ts is not null) ) inherits (p);\n>\n> -- I skipped the triggers and rules creation\n>\n> insert into p (id, ts) values (1, '2000-01-01');\n> insert into p (id, ts) values (2, null);\n> insert into p (id, ts) values (3, '2001-01-01');\n> insert into p (id, ts) values (4, '2005-01-01');\n>\n> analyze p;\n> analyze p_actual;\n> analyze p_historical;\n>\n> Here is the explain output for the query 'select * from p where ts is null'\n>\n> Result  (cost=0.00..188.10 rows=10 width=16) (actual time=0.028..0.038\n> rows=1 loops=1)\n>  ->  Append  (cost=0.00..188.10 rows=10 width=16) (actual time=0.023..0.029\n> rows=1 loops=1)\n>        ->  Seq Scan on p  (cost=0.00..187.00 rows=9 width=16) (actual\n> time=0.002..0.002 rows=0 loops=1)\n>              Filter: (ts IS NULL)\n>        ->  Seq Scan on p_actual p  (cost=0.00..1.10 rows=1 width=16) (actual\n> time=0.014..0.016 rows=1 loops=1)\n>              Filter: (ts IS NULL)\n> Total runtime: 0.080 ms\n>\n> You can notice that the 
optimizer expects 10 rows in the table p and as a\n> result of this assumption the whole query is estimated as 10 rows. Whether\n> it will cause a performance impact further? pg_stats does not contain any\n> statistics on the table 'p'. Is this a cause of such behaviour?\n> The estimation is worse for some other queries, for example 'select * from p\n> where ts is not null'\n>\n> Result  (cost=0.00..188.30 rows=1764 width=16) (actual time=0.021..0.049\n> rows=3 loops=1)\n>  ->  Append  (cost=0.00..188.30 rows=1764 width=16) (actual\n> time=0.016..0.032 rows=3 loops=1)\n>        ->  Seq Scan on p  (cost=0.00..187.00 rows=1761 width=16) (actual\n> time=0.003..0.003 rows=0 loops=1)\n>              Filter: (ts IS NOT NULL)\n>        ->  Seq Scan on p_historical p  (cost=0.00..1.30 rows=3 width=16)\n> (actual time=0.008..0.015 rows=3 loops=1)\n>              Filter: (ts IS NOT NULL)\n> Total runtime: 0.095 ms\n\nIt would be easier to comment on this if you mentioned things like\nwhich version of PG you're using, and what you have\nconstraint_exclusion set to, but as a general comment analyze doesn't\nstore statistics for any tables that are empty, because it assumes\nthat at some point you're going to put data in them. So in this case\np_historical is probably using fake stats. But it's not clear that it\nreally matters: you haven't got any relevant indices, so a sequential\nscan is the only possible plan; and even if you did have some, there's\nonly 4 rows, so a sequential scan is probably the only plan that makes\nsense anyway. And your query ran in a tenth of a millisecond, which\nis pretty zippy. So I'm not really sure what the problem is. If this\nisn't the real data, post an example with the real data and ask for\nhelp about that.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 30 Jul 2010 22:21:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explains of queries to partitioned tables" } ]
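
A short sketch of the kind of checks Robert is referring to; the statements are illustrative, assume only the table names from the example above, and none of this output appeared in the thread. Because ANALYZE stores nothing for an empty relation, the empty parent p contributes default estimates (the 9- and 1761-row guesses) to the Append node, which is what these queries make visible.

SHOW constraint_exclusion;   -- 'partition' (the 8.4 default) or 'on' enables pruning of children

SELECT relname, reltuples, relpages
  FROM pg_class
 WHERE relname IN ('p', 'p_actual', 'p_historical');

SELECT tablename, attname, null_frac
  FROM pg_stats
 WHERE tablename IN ('p', 'p_actual', 'p_historical')
   AND attname = 'ts';        -- no row for the empty parent means default selectivity is used

EXPLAIN SELECT * FROM p WHERE ts IS NULL;
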
[ { "msg_contents": "I'm involved with the setup of replacement hardware for one of our \nsystems. It is going to be using Ubuntu Lucid Server (kernel 2.6.32 I \nbelieve). The issue of filesystems has raised its head again.\n\nI note that ext4 is now the default for Lucid, what do folks think about \nusing it: stable enough now? Also xfs has seen quite a bit of \ndevelopment in these later kernels, any thoughts on that?\n\nCheers\n\nMark\n\nP.s: We are quite keen to move away from ext3, as we have encountered \nits tendency to hit a wall under heavy load and leave us waiting for \nkjournald and pdflush to catch up....\n", "msg_date": "Tue, 27 Jul 2010 17:04:47 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Linux Filesystems again - Ubuntu this time" }, { "msg_contents": "Mark Kirkwood <[email protected]> wrote:\n \n> Also xfs has seen quite a bit of development in these later\n> kernels, any thoughts on that?\n \nWe've been using xfs for a few years now with good performance and\nno problems other than needing to disable write barriers to get good\nperformance out of our battery-backed RAID adapter.\n \n-Kevin\n", "msg_date": "Tue, 27 Jul 2010 08:48:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux Filesystems again - Ubuntu this time" }, { "msg_contents": "Kevin,\n\nWhile we're on the topic, do you also diable fsync?\n\nWe use xfs with battery-backed raid as well. We have had no issues with xfs.\n\nI'm curious whether anyone can comment on his experience (good or bad)\nusing xfs/battery-backed-cache/fsync=off.\n\nThanks,\nWhit\n\n\nOn Tue, Jul 27, 2010 at 9:48 AM, Kevin Grittner\n<[email protected]> wrote:\n> Mark Kirkwood <[email protected]> wrote:\n>\n>> Also xfs has seen quite a bit of development in these later\n>> kernels, any thoughts on that?\n>\n> We've been using xfs for a few years now with good performance and\n> no problems other than needing to disable write barriers to get good\n> performance out of our battery-backed RAID adapter.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 27 Jul 2010 12:57:18 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux Filesystems again - Ubuntu this time" }, { "msg_contents": "Whit Armstrong <[email protected]> wrote:\n \n> While we're on the topic, do you also diable fsync?\n \nWe only disable fsync during bulk loads, where we would be starting\nover anyway if there was a failure. Basically, you should never use\nfsync unless you are OK with losing everything in the database\nserver if you have an OS or hardware failure. 
We have a few\ndatabases where we would consider that if performance wasn't\notherwise acceptable, since they are consolidated replicas of\noff-side source databases, and we have four identical ones in two\nseparate buildings; however, since performance is good with fsync on\nand it would be a bother to have to copy from one of the other\nservers in the event of an OS crash, we leave it on.\n \n-Kevin\n", "msg_date": "Tue, 27 Jul 2010 12:10:57 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux Filesystems again - Ubuntu this time" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> Basically, you should never use fsync unless you are OK with\n> losing everything in the database server if you have an OS or\n> hardware failure.\n \ns/use/disable/\n \n-Kevin\n", "msg_date": "Tue, 27 Jul 2010 12:19:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux Filesystems again - Ubuntu this time" }, { "msg_contents": "Thanks.\n\nBut there is no such risk to turning off write barriers?\n\nI'm only specifying noatime for xfs at the moment.\n\nDid you get a substantial performace boost from disabling write\nbarriers? like 10x or more like 2x?\n\nThanks,\nWhit\n\n\n\nOn Tue, Jul 27, 2010 at 1:19 PM, Kevin Grittner\n<[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> wrote:\n>\n>> Basically, you should never use fsync unless you are OK with\n>> losing everything in the database server if you have an OS or\n>> hardware failure.\n>\n> s/use/disable/\n>\n> -Kevin\n>\n", "msg_date": "Tue, 27 Jul 2010 14:20:20 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux Filesystems again - Ubuntu this time" }, { "msg_contents": "Whit Armstrong <[email protected]> wrote:\n \n> But there is no such risk to turning off write barriers?\n \nSupposedly not:\n \nhttp://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F\n \n> Did you get a substantial performace boost from disabling write\n> barriers? like 10x or more like 2x?\n \nIt made a huge difference on creation and deletion of disk files. \nUnfortunately we have some procedures which use a cursor and loop\nthrough rows calling a function which creates and drops a temporary\ntable. While I would like to see those transactions rewritten to\nuse sane techniques, they run fast enough without the write barriers\nto be acceptable to the users, which puts the issue pretty low on\nthe priority list. I don't have the numbers anymore, but I'm sure\nit was closer to 100 times slower than 10 times. In some workloads\nyou might not notice the difference, although I would watch out for\ncheckpoint behavior.\n \n-Kevin\n", "msg_date": "Tue, 27 Jul 2010 13:32:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux Filesystems again - Ubuntu this time" } ]
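
A hedged sketch of the configuration this thread is discussing; the device name, mount point and exact option spellings are assumptions rather than anything posted here, and nobarrier is taken to be the xfs mount option behind "disabling write barriers". It is only sensible when the controller cache really is battery backed, and fsync stays on except for the throwaway bulk loads Kevin describes.

# /etc/fstab -- xfs data volume behind a battery-backed RAID controller (assumed layout)
/dev/sdb1  /var/lib/postgresql  xfs  noatime,nobarrier  0  0

# postgresql.conf -- normal operation keeps full durability
fsync = on
synchronous_commit = on

# for a disposable bulk load only, restarted from scratch after any crash:
# fsync = off
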
[ { "msg_contents": "I have spent the last couple of weeks digging into a Postgres performance\nproblem that ultimately boiled down to this: the planner was choosing to\nuse hash joins on a set of join keys that were much larger than the\nconfigured work_mem. We found we could make the performance much better by\neither\n1) increasing work_mem to 500MB or more, or\n2) forcing the planner to choose index-backed nested loops by turning off\nhash and merge joins as well as bitmap and sequential scans.\n\nNow we are trying to decide which of these paths to choose, and asking why\nthe planner doesn't handle this for us.\n\nBackground: LabKey builds an open source platform for biomedical research\ndata. The platform consists of a tomcat web application and a relational\ndatabase. we support two databases, Postgres and SQL Server. We started\nwith SQL Server because we were very familiar with it. Two of our technical\nteam came from the SQL Server development team. We chose Postgres because\nwe assessed that it was the open source database most likely to be able to\nhandle our application requirements for capacity and complex, nested,\ngenerated SQL handling. Postgres is now the default database for our\nplatform and most of our key customers use it. In general we've been very\nsatisfied with Postgres' performance and compatibility, but our customers\nare starting to hit situations where we really need to be able to understand\nwhy a particular operation is slow. We are currently recommending version\n8.4 and using that ourselves.\n\nThe core of the problem query was\n\nSELECT * INTO snapshot_table FROM\n (SELECT ... FROM tableA A LEFT OUTER JOIN tableB B ON (A.lsid = B.lsid)\nand A.datasetid = ? ) query1\n\nthe join column, lsid, is a poor choice for a join column as it is a long\nvarchar value (avg length 101 characters) that us only gets unique way out\non the right hand side. But we are stuck with this choice. I can post the\nSQL query and table definitions if it will help, but changes to either of\nthose would be risky and difficult, whereas setting the work_mem value or\nforcing nested loop joins is less risky.\n\nThe Performance curve looks something like this\n\nJoin Type work_mem(MB) time to populate snapshot (min)\n______________________________________________________________\nHash 50 85\nHash 200 38\nHash 400 21\nHash 500 12\nHash 1000 12\n_______________________________________________________________\nNestedLoop 50 15\nNestedLoop 200 11\nNestedLoop 400 11\nNestedLoop 500 10\nNestedLoop 1000 10\n________________________________________________________\n\nTable A contains about 3.5 million rows, and table B contains about 4.4\nmillion rows. By looking at the EXPLAIN ANALYZE reports I concluded that\nthe planner seemed to be accurately determining the approximate number of\nrows returned on each side of the join node. I also noticed that at the\nwork_mem = 50 test, the hash join query execution was using over a GB of\nspace in the pgsql_tmp, space that grew and shrank slowly over the course of\nthe test.\n\nNow for the questions:\n1) If we tell the customer to set his work_mem value to 500MB or 1GB in\npostgres.config, what problems might they see? the documentation and the\nguidelines we received from Rupinder Singh in support suggest a much lower\nvalue, e.g. a max work_mem of 10MB. Other documentation such as the \"Guide\nto Posting Slow Query Questions\" suggest at least testing up to 1GB. 
What\nis a reasonable maximum to configure for all connnections?\n\n2) How is work_mem used by a query execution? For example, does each hash\ntable in an execution get allocated a full work_mem's worth of memory ? Is\nthis memory released when the query is finished, or does it stay attached to\nthe connection or some other object?\n\n3) is there a reason why the planner doesn't seem to recognize the condition\nwhen the hash table won't fit in the current work_mem, and choose a\nlow-memory plan instead?\n\nExcuse the long-winded post; I was trying to give the facts and nothing but\nthe facts.\n\nThanks,\nPeter Hussey\nLabKey Software\n\n
", "msg_date": "Tue, 27 Jul 2010 16:08:16 -0700", "msg_from": "Peter Hussey <[email protected]>", "msg_from_op": true, "msg_subject": "Questions on query planner, join types, and work_mem" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 27, 2010 at 04:08:16PM -0700, Peter Hussey wrote:\n> Now for the questions:\n> 1) If we tell the customer to set his work_mem value to 500MB or 1GB in\n> postgres.config, what problems might they see? the documentation and the\n> guidelines we received from Rupinder Singh in support suggest a much lower\n> value, e.g. a max work_mem of 10MB. Other documentation such as the \"Guide\n> to Posting Slow Query Questions\" suggest at least testing up to 1GB. What\n> is a reasonable maximum to configure for all connnections?\nWell. That depends on the amount of expected concurrency and available\nmemory. Obviously you can set it way much higher in an OLAPish, low\nconcurrency setting than in an OLTP environment.\n\nThat setting is significantly complex to estimate in my opinion. For\none the actualy usage depends on the complexity of the queries, for\nanother to be halfway safe you have to use avail_mem/(max_connections\n* max_nodes_of_most_complex_query). Which is often a very pessimistic\nand unusably low estimate.\n\n> 2) How is work_mem used by a query execution? 
For example, does each hash\n> table in an execution get allocated a full work_mem's worth of memory ? Is\n> this memory released when the query is finished, or does it stay attached to\n> the connection or some other object?\nEach Node of the query can use one work_mem worth of data (sometimes a\nbit more). The memory is released after the query finished (or\npossibly earlier, dependent of the structure of the query).\nThe specific allocation pattern and implementation details (of malloc)\ninfluence how and when that memory is actually returned to the os.\n\n> 3) is there a reason why the planner doesn't seem to recognize the condition\n> when the hash table won't fit in the current work_mem, and choose a\n> low-memory plan instead?\nHard to say without more information. Bad estimates maybe? Best show\nyour query plan (EXPLAIN ANALYZE), the table definition and some\ndetails about common hardware (i.e. whether it has 1GB of memory or\n256GB).\n\nAndres\n", "msg_date": "Wed, 28 Jul 2010 01:57:41 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Peter Hussey <[email protected]> writes:\n> I have spent the last couple of weeks digging into a Postgres performance\n> problem that ultimately boiled down to this: the planner was choosing to\n> use hash joins on a set of join keys that were much larger than the\n> configured work_mem.\n\nWhat Postgres version is this, exactly? (\"8.4\" is not the answer I want.)\n\n> the join column, lsid, is a poor choice for a join column as it is a long\n> varchar value (avg length 101 characters) that us only gets unique way out\n> on the right hand side.\n\nHm, but it is unique eventually? It's not necessarily bad for hashing\nas long as that's so.\n\n> 1) If we tell the customer to set his work_mem value to 500MB or 1GB in\n> postgres.config, what problems might they see?\n\nThat would almost certainly be disastrous. If you have to follow the\nhack-work_mem path, I'd suggest increasing it locally in the session\nexecuting the problem query, and only for the duration of that query.\nUse SET, or even SET LOCAL.\n\n> 2) How is work_mem used by a query execution?\n\nWell, the issue you're hitting is that the executor is dividing the\nquery into batches to keep the size of the in-memory hash table below\nwork_mem. The planner should expect that and estimate the cost of\nthe hash technique appropriately, but seemingly it's failing to do so.\nSince you didn't provide EXPLAIN ANALYZE output, though, it's hard\nto be sure.\n\n> 3) is there a reason why the planner doesn't seem to recognize the condition\n> when the hash table won't fit in the current work_mem, and choose a\n> low-memory plan instead?\n\nThat's the question, all right. I wonder if it's got something to do\nwith the wide-varchar nature of the join key ... but again that's just\nspeculation with no facts. 
Please show us EXPLAIN ANALYZE results\nfor the hash plan with both small and large work_mem, as well as for\nthe nestloop plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jul 2010 20:05:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem " }, { "msg_contents": "Excerpts from Tom Lane's message of mar jul 27 20:05:02 -0400 2010:\n> Peter Hussey <[email protected]> writes:\n\n> > 2) How is work_mem used by a query execution?\n> \n> Well, the issue you're hitting is that the executor is dividing the\n> query into batches to keep the size of the in-memory hash table below\n> work_mem. The planner should expect that and estimate the cost of\n> the hash technique appropriately, but seemingly it's failing to do so.\n> Since you didn't provide EXPLAIN ANALYZE output, though, it's hard\n> to be sure.\n\nHmm, I wasn't aware that hash joins worked this way wrt work_mem. Is\nthis visible in the explain output? If it's something subtle (like an\nincreased total cost), may I suggest that it'd be a good idea to make it\nexplicit somehow in the machine-readable outputs?\n", "msg_date": "Wed, 28 Jul 2010 00:01:51 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Hello,\n> the join column, lsid, is a poor choice for a join column as it is a\n> long varchar value (avg length 101 characters) that us only gets \n> unique way out on the right hand side.\nWould a join on subtring on the 'way out on the right hand side' (did you \nmean 'rightmost characters' or 'only when we take almost all the 101 \ncharacters'?) together with a function based index help?\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Wed, 28 Jul 2010 09:57:29 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Excerpts from Tom Lane's message of mar jul 27 20:05:02 -0400 2010:\n>> Well, the issue you're hitting is that the executor is dividing the\n>> query into batches to keep the size of the in-memory hash table below\n>> work_mem. The planner should expect that and estimate the cost of\n>> the hash technique appropriately, but seemingly it's failing to do so.\n\n> Hmm, I wasn't aware that hash joins worked this way wrt work_mem. Is\n> this visible in the explain output?\n\nAs of 9.0, any significant difference between \"Hash Batches\" and\n\"Original Hash Batches\" would be a cue that the planner blew the\nestimate. 
For Peter's problem, we're just going to have to look\nto see if the estimated cost changes in a sane way between the\nsmall-work_mem and large-work_mem cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 2010 00:39:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem " }, { "msg_contents": "Peter Hussey <[email protected]> writes:\n> Using the default of 1MB work_mem, the planner chooses a hash join plan :\n> \"Hash Left Join (cost=252641.82..11847353.87 rows=971572 width=111) (actual\n> time=124196.670..280461.604 rows=968080 loops=1)\"\n> ...\n> For the same default 1MB work_mem, a nested loop plan is better\n> \"Nested Loop Left Join (cost=8.27..15275401.19 rows=971572 width=111)\n> (actual time=145.015..189957.023 rows=968080 loops=1)\"\n> ...\n\nHm. A nestloop with nearly a million rows on the outside is pretty\nscary. The fact that you aren't unhappy with that version of the plan,\nrather than the hash, indicates that the \"object\" table must be \nfully cached in memory, otherwise the repeated indexscans would be a\nlot slower than this:\n\n> \" -> Index Scan using uq_object on object obj (cost=0.00..3.51 rows=1\n> width=95) (actual time=0.168..0.170 rows=1 loops=968080)\"\n> \" Index Cond: ((sd.lsid)::text = (obj.objecturi)::text)\"\n\nMy take on it is that the estimate of the hash plan's cost isn't bad;\nwhat's bad is that the planner is mistakenly estimating the nestloop as\nbeing worse. What you need to do is adjust the planner's cost\nparameters so that it has a better idea of the true cost of repeated\nindex probes in your environment. Crank up effective_cache_size if\nyou didn't already, and experiment with lowering random_page_cost.\nSee the list archives for more discussion of these parameters.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jul 2010 10:03:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem " }, { "msg_contents": "I already had effective_cache_size set to 500MB.\n\nI experimented with lowering random_page_cost to 3 then 2. It made no\ndifference in the choice of plan that I could see. In the explain analyze\noutput the estimated costs of nested loop were in fact lowererd, but so were\nthe costs of the hash join plan, and the hash join remained the lowest\npredicted costs in all tests i tried.\n\nWhat seems wrong to me is that the hash join strategy shows almost no\ndifference in estimated costs as work_mem goes from 1MB to 500MB. The cost\nfunction decreases by 1%, but the actual time for the query to execute\ndecreases by 86% as work_mem goes from 1MB to 500MB.\n\nMy questions are still\n1) Does the planner have any component of cost calculations based on the\nsize of work_mem, and if so why do those calculations seem to have so\nlittle effect here?\n\n2) Why is the setting of work_mem something left to the admin and/or\ndeveloper? Couldn't the optimizer say how much it thinks it needs to build\na hash table based on size of the keys and estimated number of rows?\n\nIt is difficult for a software development platform like ours to take\nadvantage of suggestions to set work_mem, or to change the cost function, or\nturn on/off join strategies for individual queries. The SQL we issue is\nformed by user interaction with the product and rarely static. How would we\nknow when to turn something on or off? 
That's why I'm looking for a\nconfiguratoin solution that I can set on a database-wide basis and have it\nwork well for all queries.\n\nthanks\nPeter\n\n\nOn Fri, Jul 30, 2010 at 7:03 AM, Tom Lane <[email protected]> wrote:\n\n> Peter Hussey <[email protected]> writes:\n> > Using the default of 1MB work_mem, the planner chooses a hash join plan :\n> > \"Hash Left Join (cost=252641.82..11847353.87 rows=971572 width=111)\n> (actual\n> > time=124196.670..280461.604 rows=968080 loops=1)\"\n> > ...\n> > For the same default 1MB work_mem, a nested loop plan is better\n> > \"Nested Loop Left Join (cost=8.27..15275401.19 rows=971572 width=111)\n> > (actual time=145.015..189957.023 rows=968080 loops=1)\"\n> > ...\n>\n> Hm. A nestloop with nearly a million rows on the outside is pretty\n> scary. The fact that you aren't unhappy with that version of the plan,\n> rather than the hash, indicates that the \"object\" table must be\n> fully cached in memory, otherwise the repeated indexscans would be a\n> lot slower than this:\n>\n> > \" -> Index Scan using uq_object on object obj (cost=0.00..3.51 rows=1\n> > width=95) (actual time=0.168..0.170 rows=1 loops=968080)\"\n> > \" Index Cond: ((sd.lsid)::text = (obj.objecturi)::text)\"\n>\n> My take on it is that the estimate of the hash plan's cost isn't bad;\n> what's bad is that the planner is mistakenly estimating the nestloop as\n> being worse. What you need to do is adjust the planner's cost\n> parameters so that it has a better idea of the true cost of repeated\n> index probes in your environment. Crank up effective_cache_size if\n> you didn't already, and experiment with lowering random_page_cost.\n> See the list archives for more discussion of these parameters.\n>\n> regards, tom lane\n>\n\n\n\n-- \nPeter Hussey\nLabKey Software\n206-667-7193 (office)\n206-291-5625 (cell)\n\nI already had effective_cache_size set to 500MB.I experimented with lowering  random_page_cost to 3 then 2.  It made no difference in the choice of plan that I could see.  In the explain analyze output the estimated costs of nested loop were in fact lowererd, but so were the costs of the hash join plan, and the hash join remained the lowest predicted costs in all tests i tried.\nWhat seems wrong to me is that the hash join strategy shows almost no difference in estimated costs as work_mem goes from 1MB to 500MB. The cost function decreases by 1%, but the actual time for the query to execute decreases by 86% as work_mem goes from 1MB to 500MB.\nMy questions are still 1)  Does the planner have any component of cost calculations based on the size of work_mem, and if so why do those calculations  seem to have so little effect here?2) Why is the setting of work_mem something left to the admin and/or developer?  Couldn't the optimizer say how much it thinks it needs to build a hash table based on size of the keys and estimated number of rows?\nIt is difficult for a software development platform like ours to take advantage of suggestions to set work_mem, or to change the cost function, or turn on/off join strategies for individual queries.  The SQL we issue is formed by user interaction with the product and rarely static.  How would we know when to turn something on or off?  
That's why I'm looking for a configuratoin solution that I can set on a database-wide basis and have it work well for all queries.\nthanksPeterOn Fri, Jul 30, 2010 at 7:03 AM, Tom Lane <[email protected]> wrote:\nPeter Hussey <[email protected]> writes:\n> Using the default of 1MB work_mem, the planner chooses a hash join plan :\n> \"Hash Left Join  (cost=252641.82..11847353.87 rows=971572 width=111) (actual\n> time=124196.670..280461.604 rows=968080 loops=1)\"\n> ...\n> For the same default 1MB work_mem, a nested loop plan is better\n> \"Nested Loop Left Join  (cost=8.27..15275401.19 rows=971572 width=111)\n> (actual time=145.015..189957.023 rows=968080 loops=1)\"\n> ...\n\nHm.  A nestloop with nearly a million rows on the outside is pretty\nscary.  The fact that you aren't unhappy with that version of the plan,\nrather than the hash, indicates that the \"object\" table must be\nfully cached in memory, otherwise the repeated indexscans would be a\nlot slower than this:\n\n> \"  ->  Index Scan using uq_object on object obj  (cost=0.00..3.51 rows=1\n> width=95) (actual time=0.168..0.170 rows=1 loops=968080)\"\n> \"        Index Cond: ((sd.lsid)::text = (obj.objecturi)::text)\"\n\nMy take on it is that the estimate of the hash plan's cost isn't bad;\nwhat's bad is that the planner is mistakenly estimating the nestloop as\nbeing worse.  What you need to do is adjust the planner's cost\nparameters so that it has a better idea of the true cost of repeated\nindex probes in your environment.  Crank up effective_cache_size if\nyou didn't already, and experiment with lowering random_page_cost.\nSee the list archives for more discussion of these parameters.\n\n                        regards, tom lane\n-- Peter HusseyLabKey Software206-667-7193 (office)206-291-5625 (cell)", "msg_date": "Mon, 2 Aug 2010 14:23:05 -0700", "msg_from": "Peter Hussey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "On Mon, Aug 2, 2010 at 5:23 PM, Peter Hussey <[email protected]> wrote:\n> I already had effective_cache_size set to 500MB.\n>\n> I experimented with lowering  random_page_cost to 3 then 2.  It made no\n> difference in the choice of plan that I could see.  In the explain analyze\n> output the estimated costs of nested loop were in fact lowererd, but so were\n> the costs of the hash join plan, and the hash join remained the lowest\n> predicted costs in all tests i tried.\n\nWhat do you get if you set random_page_cost to a small value such as 0.01?\n\n> What seems wrong to me is that the hash join strategy shows almost no\n> difference in estimated costs as work_mem goes from 1MB to 500MB. The cost\n> function decreases by 1%, but the actual time for the query to execute\n> decreases by 86% as work_mem goes from 1MB to 500MB.\n\nWow. It would be interesting to find out how many batches are being\nused. 
Unfortunately, releases prior to 9.0 don't display that\ninformation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Mon, 2 Aug 2010 22:48:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Peter Hussey <[email protected]> writes:\n> My questions are still\n> 1) Does the planner have any component of cost calculations based on the\n> size of work_mem,\n\nSure.\n\n> and if so why do those calculations seem to have so\n> little effect here?\n\nSince you haven't provided sufficient information to let someone else\nreproduce what you're seeing, it's pretty hard to say. It might have\nsomething to do with the particularly wide join key values you're using,\nbut that's mere speculation based on the one tidbit you provided. There\nmight be some other effect altogether that's making it do the wrong thing.\n\n> 2) Why is the setting of work_mem something left to the admin and/or\n> developer?\n\nBecause we're not smart enough to find a way to avoid that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 2010 22:55:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem " }, { "msg_contents": "On Mon, 2010-08-02 at 14:23 -0700, Peter Hussey wrote:\n> I already had effective_cache_size set to 500MB.\n> \n> I experimented with lowering random_page_cost to 3 then 2. \n\nIn case of fully cached database it is closer to 1.\n\n> 2) Why is the setting of work_mem something left to the admin and/or\n> developer? Couldn't the optimizer say how much it thinks it needs to\n> build a hash table based on size of the keys and estimated number of\n> rows?\n\nYes, It can say how much it thinks it needs to build a hash table, the\npart it can't figure out is how much it can afford, based on things like\nnumber concurrent queries and how much work-mem these are using, and any\nwork-mem used will be substracted from total memory pool, affecting also\nhow much of the files the system caches.\n\n> It is difficult for a software development platform like ours to take\n> advantage of suggestions to set work_mem, or to change the cost\n> function, or turn on/off join strategies for individual queries. The\n> SQL we issue is formed by user interaction with the product and rarely\n> static. How would we know when to turn something on or off? That's\n> why I'm looking for a configuration solution that I can set on a\n> database-wide basis and have it work well for all queries.\n\nKeep trying. 
The close you get with your conf to real conditions, the\nbetter choices the optimiser can make ;)\n\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Tue, 03 Aug 2010 10:03:51 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing <[email protected]> wrote:\n> In case of fully cached database it is closer to 1.\n\nIn the case of a fully cached database I believe the correct answer\nbegins with a decimal point.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 4 Aug 2010 09:14:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Peter Hussey <[email protected]> wrote:\n \n> I already had effective_cache_size set to 500MB.\n \nThat seems awfully small. You do realize that this setting does not\ncause PostgreSQL to allocate any memory; it merely advises how much\ndisk space is likely to be cached. It should normally be set to the\nsum of your shared_buffers setting and whatever your OS reports as\ncached. Setting it too small will discourage the optimizer from\npicking plans which use indexes.\n \n> I experimented with lowering random_page_cost to 3 then 2.\n \nAs others have said, in a fully cached system that's still too high.\nIf the active portion of your database is fully cached, you should\nset random_page_cost and seq_page_cost to the same value, and that\nvalue should probably be in the range of 0.1 to 0.005. It can get\ntrickier if the active portion is largely but not fully cached; we\nhave one server where we found, through experimentation, that we got\nbetter plans overall with seq_page_cost = 0.3 and random_page_cost =\n0.5 than any other settings we tried.\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 09:01:34 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\n\t work_mem" }, { "msg_contents": "On Wed, 2010-08-04 at 09:14 -0400, Robert Haas wrote:\n> On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing <[email protected]> wrote:\n> > In case of fully cached database it is closer to 1.\n> \n> In the case of a fully cached database I believe the correct answer\n> begins with a decimal point.\n\nThe number 1 here was suggested in relation to seq_page_cost, which is\n1. 
\n\nFor fully cached db there is no additional seek time for random access,\nso seq_page_cost == random_page_cost.\n\nOf course there are more variables than just *_page_cost, so if you nail\ndown any other one, you may end with less than 1 for both page costs.\n\nI have always used seq_page_cost = 1 in my thinking and adjusted others\nrelative to it.\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Wed, 04 Aug 2010 20:51:08 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Of course there are more variables than just *_page_cost, so if you nail\n> down any other one, you may end with less than 1 for both page costs.\n\n> I have always used seq_page_cost = 1 in my thinking and adjusted others\n> relative to it.\n\nRight, seq_page_cost = 1 is sort of the traditional reference point,\nbut you don't have to do it that way. The main point here is that for\nan all-in-RAM database, the standard page access costs are too high\nrelative to the CPU effort costs:\n\nregression=# select name, setting from pg_settings where name like '%cost';\n name | setting \n----------------------+---------\n cpu_index_tuple_cost | 0.005\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n random_page_cost | 4\n seq_page_cost | 1\n(5 rows)\n\nTo model an all-in-RAM database, you can either dial down both\nrandom_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost\nto 1 and increase all the CPU costs. The former is less effort ;-)\n\nIt should be noted also that there's not all that much evidence backing\nup the default values of the cpu_xxx_cost variables. In the past those\ndidn't matter much because I/O costs always swamped CPU costs anyway.\nBut I can foresee us having to twiddle those defaults and maybe refine\nthe CPU cost model more, as all-in-RAM cases get more common.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 2010 14:00:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem " }, { "msg_contents": "On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > Of course there are more variables than just *_page_cost, so if you nail\n> > down any other one, you may end with less than 1 for both page costs.\n> \n> > I have always used seq_page_cost = 1 in my thinking and adjusted others\n> > relative to it.\n> \n> Right, seq_page_cost = 1 is sort of the traditional reference point,\n> but you don't have to do it that way. The main point here is that for\n> an all-in-RAM database, the standard page access costs are too high\n> relative to the CPU effort costs:\n> \n> regression=# select name, setting from pg_settings where name like '%cost';\n> name | setting \n> ----------------------+---------\n> cpu_index_tuple_cost | 0.005\n> cpu_operator_cost | 0.0025\n> cpu_tuple_cost | 0.01\n> random_page_cost | 4\n> seq_page_cost | 1\n> (5 rows)\n> \n> To model an all-in-RAM database, you can either dial down both\n> random_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost\n> to 1 and increase all the CPU costs. The former is less effort ;-)\n> \n> It should be noted also that there's not all that much evidence backing\n> up the default values of the cpu_xxx_cost variables. 
In the past those\n> didn't matter much because I/O costs always swamped CPU costs anyway.\n> But I can foresee us having to twiddle those defaults and maybe refine\n> the CPU cost model more, as all-in-RAM cases get more common.\n\nEspecially the context switch + copy between shared buffers and system\ndisk cache will become noticeable at these speeds.\n\nAn easy way to test it is loading a table with a few indexes, once with\na shared_buffers value, which is senough for only the main table and\nonce with one that fits both table and indexes,\n\n\n> \t\t\tregards, tom lane\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Wed, 04 Aug 2010 21:41:27 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:\n> On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:\n\n> > regression=# select name, setting from pg_settings where name like '%cost';\n> > name | setting \n> > ----------------------+---------\n> > cpu_index_tuple_cost | 0.005\n> > cpu_operator_cost | 0.0025\n> > cpu_tuple_cost | 0.01\n> > random_page_cost | 4\n> > seq_page_cost | 1\n> > (5 rows)\n> > \n> > To model an all-in-RAM database, you can either dial down both\n> > random_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost\n> > to 1 and increase all the CPU costs. The former is less effort ;-)\n> > \n> > It should be noted also that there's not all that much evidence backing\n> > up the default values of the cpu_xxx_cost variables. In the past those\n> > didn't matter much because I/O costs always swamped CPU costs anyway.\n> > But I can foresee us having to twiddle those defaults and maybe refine\n> > the CPU cost model more, as all-in-RAM cases get more common.\n> \n> Especially the context switch + copy between shared buffers and system\n> disk cache will become noticeable at these speeds.\n> \n> An easy way to test it is loading a table with a few indexes, once with\n> a shared_buffers value, which is senough for only the main table and\n> once with one that fits both table and indexes,\n\nok, just to back this up I ran the following test with 28MB and 128MB\nshared buffers.\n\ncreate table sbuf_test(f1 float, f2 float, f3 float);\ncreate index sbuf_test1 on sbuf_test(f1);\ncreate index sbuf_test2 on sbuf_test(f2);\ncreate index sbuf_test3 on sbuf_test(f3);\n\nand then did 3 times the following for each shared_buffers setting\n\ntruncate sbuf_test;\ninsert into sbuf_test \nselect random(), random(), random() from generate_series(1,600000);\n\nthe main table size was 31MB, indexes were 18MB each for total size of\n85MB\n\nin case of 128MB shared buffers, the insert run in 14sec (+/- 1 sec)\n\nin case of 28MB shared buffers, the insert run between 346 and 431 sec,\nthat is 20-30 _times_ slower.\n\nThere was ample space for keeping the indexes in linux cache (it has 1GB\ncached currently) though the system may have decided to start writing it\nto disk, so I suspect that most of the time was spent copying random\nindex pages back and forth between shared buffers and disk cache.\n\nI did not verify this, so there may be some other factors involved, but\nthis seems like the most obvious suspect.\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Wed, 
04 Aug 2010 22:03:54 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Hannu Krosing wrote:\n> There was ample space for keeping the indexes in linux cache (it has 1GB\n> cached currently) though the system may have decided to start writing it\n> to disk, so I suspect that most of the time was spent copying random\n> index pages back and forth between shared buffers and disk cache.\n> \n\nLow shared_buffers settings will result in the same pages more often \nbeing written multiple times per checkpoint, particularly index pages, \nwhich is less efficient than keeping in the database cache and updating \nthem there. This is a slightly different issue than just the overhead \nof copying them back and forth; by keeping them in cache, you actually \nreduce writes to the OS cache. What I do to quantify that is...well, \nthe attached shows it better than I can describe; only works on 9.0 or \nlater as it depends on a feature I added for this purpose there. It \nmeasures exactly how much buffer cache churn happened during a test, in \nthis case creating a pgbench database.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 04 Aug 2010 15:16:56 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "This time with attachment...\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us", "msg_date": "Wed, 04 Aug 2010 15:18:44 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> There was ample space for keeping the indexes in linux cache (it has 1GB\n> cached currently) though the system may have decided to start writing it\n> to disk, so I suspect that most of the time was spent copying random\n> index pages back and forth between shared buffers and disk cache.\n\nIf you're on a platform that has oprofile, you could probably verify\nthat rather than just guess it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 2010 15:20:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem " }, { "msg_contents": "On Wed, 2010-08-04 at 22:03 +0300, Hannu Krosing wrote:\n> On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:\n> > On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:\n> \n> > > regression=# select name, setting from pg_settings where name like '%cost';\n> > > name | setting \n> > > ----------------------+---------\n> > > cpu_index_tuple_cost | 0.005\n> > > cpu_operator_cost | 0.0025\n> > > cpu_tuple_cost | 0.01\n> > > random_page_cost | 4\n> > > seq_page_cost | 1\n> > > (5 rows)\n> > > \n> > > To model an all-in-RAM database, you can either dial down both\n> > > random_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost\n> > > to 1 and increase all the CPU costs. The former is less effort ;-)\n> > > \n> > > It should be noted also that there's not all that much evidence backing\n> > > up the default values of the cpu_xxx_cost variables. 
In the past those\n> > > didn't matter much because I/O costs always swamped CPU costs anyway.\n> > > But I can foresee us having to twiddle those defaults and maybe refine\n> > > the CPU cost model more, as all-in-RAM cases get more common.\n> > \n> > Especially the context switch + copy between shared buffers and system\n> > disk cache will become noticeable at these speeds.\n> > \n> > An easy way to test it is loading a table with a few indexes, once with\n> > a shared_buffers value, which is senough for only the main table and\n> > once with one that fits both table and indexes,\n\nI re-ran the test, and checked idx_blks_read for 28MB case\n\nhannu=# select * from pg_statio_user_indexes where relname =\n'sbuf_test';\n| schemaname | relname | indexrelname | idx_blks_read | idx_blks_hit \n+------------+-----------+--------------+---------------+--------------\n| hannu | sbuf_test | sbuf_test1 | 71376 | 1620908\n| hannu | sbuf_test | sbuf_test2 | 71300 | 1620365\n| hannu | sbuf_test | sbuf_test3 | 71436 | 1619619\n\n\nthis means that there were a total of 214112 index blocks read back from\ndisk cache (obviously at least some of these had to be copied the other\nway as well).\n\nThis seems to indicate about 1 ms for moving pages over user/system\nboundary. (Intel Core2 Duo T7500 @ 2.20GHz, Ubuntu 9.10, 4GB RAM)\n\nfor 128MB shared buffers the total idx_blks_read for 3 indexes was about\n6300 .\n\n\n> ok, just to back this up I ran the following test with 28MB and 128MB\n> shared buffers.\n> \n> create table sbuf_test(f1 float, f2 float, f3 float);\n> create index sbuf_test1 on sbuf_test(f1);\n> create index sbuf_test2 on sbuf_test(f2);\n> create index sbuf_test3 on sbuf_test(f3);\n> \n> and then did 3 times the following for each shared_buffers setting\n> \n> truncate sbuf_test;\n> insert into sbuf_test \n> select random(), random(), random() from generate_series(1,600000);\n> \n> the main table size was 31MB, indexes were 18MB each for total size of\n> 85MB\n> \n> in case of 128MB shared buffers, the insert run in 14sec (+/- 1 sec)\n> \n> in case of 28MB shared buffers, the insert run between 346 and 431 sec,\n> that is 20-30 _times_ slower.\n> \n> There was ample space for keeping the indexes in linux cache (it has 1GB\n> cached currently) though the system may have decided to start writing it\n> to disk, so I suspect that most of the time was spent copying random\n> index pages back and forth between shared buffers and disk cache.\n> \n> I did not verify this, so there may be some other factors involved, but\n> this seems like the most obvious suspect.\n> \n> -- \n> Hannu Krosing http://www.2ndQuadrant.com\n> PostgreSQL Scalability and Availability \n> Services, Consulting and Training\n> \n> \n> \n\n\n", "msg_date": "Wed, 04 Aug 2010 22:38:42 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "On Wed, 2010-08-04 at 15:16 -0400, Greg Smith wrote:\n> Hannu Krosing wrote:\n> > There was ample space for keeping the indexes in linux cache (it has 1GB\n> > cached currently) though the system may have decided to start writing it\n> > to disk, so I suspect that most of the time was spent copying random\n> > index pages back and forth between shared buffers and disk cache.\n> > \n> \n> Low shared_buffers settings will result in the same pages more often \n> being written multiple times per checkpoint,\n\nDo you mean \"written to disk\", or written out from shared_buffers 
to\ndisk cache ?\n\n> particularly index pages, \n> which is less efficient than keeping in the database cache and updating \n> them there. This is a slightly different issue than just the overhead \n> of copying them back and forth; by keeping them in cache, you actually \n> reduce writes to the OS cache. \n\nThat's what I meant. Both writes to and read from the OS cache take a\nsignificant amount of time once you are not doing real disk I/O.\n\n> What I do to quantify that is...well, \n> the attached shows it better than I can describe; only works on 9.0 or \n> later as it depends on a feature I added for this purpose there. It \n> measures exactly how much buffer cache churn happened during a test, in \n> this case creating a pgbench database.\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n\n\n", "msg_date": "Wed, 04 Aug 2010 22:51:44 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> What I do to quantify that is...well, the attached shows it better\n> than I can describe; only works on 9.0 or later as it depends on a\n> feature I added for this purpose there. It measures exactly how\n> much buffer cache churn happened during a test, in this case\n> creating a pgbench database.\n \nI'm not entirely sure I understand what I'm supposed to get from\nthat. On a 3GB workstation, a compile from a recent HEAD checkout,\nwith a default postgresql.conf file, I get this:\n \n-[ RECORD 1 ]------+------------------------------\nnow | 2010-08-04 14:25:46.683766-05\ncheckpoints_timed | 0 \ncheckpoints_req | 0 \nbuffers_checkpoint | 0 \nbuffers_clean | 0 \nmaxwritten_clean | 0 \nbuffers_backend | 0 \nbuffers_alloc | 73 \n\nInitializing pgbench\n-[ RECORD 1 ]------+------------------------------\nnow | 2010-08-04 14:27:49.062551-05\ncheckpoints_timed | 0\ncheckpoints_req | 0\nbuffers_checkpoint | 0\nbuffers_clean | 0\nmaxwritten_clean | 0\nbuffers_backend | 633866\nbuffers_alloc | 832\n \nI boost shared_buffers from 32MB to 320MB, restart, and get this:\n \n-[ RECORD 1 ]------+------------------------------\nnow | 2010-08-04 14:30:42.816719-05\ncheckpoints_timed | 0\ncheckpoints_req | 0\nbuffers_checkpoint | 0\nbuffers_clean | 0\nmaxwritten_clean | 0\nbuffers_backend | 0\nbuffers_alloc | 0\n\nInitializing pgbench\n-[ RECORD 1 ]------+------------------------------\nnow | 2010-08-04 14:32:40.750098-05\ncheckpoints_timed | 0\ncheckpoints_req | 0\nbuffers_checkpoint | 0\nbuffers_clean | 0\nmaxwritten_clean | 0\nbuffers_backend | 630794\nbuffers_alloc | 2523\n \nSo run time dropped from 123 seconds to 118 seconds, buffers_backend\ndropped by less than 0.5%, and buffers_alloc went up. Assuming this\nis real, and not just \"in the noise\" -- what conclusions would you\ndraw from this? Dedicating an additional 10% of my free memory got\nme a 4% speed improvement? 
Was I supposed to try with other scales?\nWhich ones?\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 14:58:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\n\t work_mem" }, { "msg_contents": "On Wed, 2010-08-04 at 22:03 +0300, Hannu Krosing wrote:\n> On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:\n> > On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:\n> \n> > > regression=# select name, setting from pg_settings where name like '%cost';\n> > > name | setting \n> > > ----------------------+---------\n> > > cpu_index_tuple_cost | 0.005\n> > > cpu_operator_cost | 0.0025\n> > > cpu_tuple_cost | 0.01\n> > > random_page_cost | 4\n> > > seq_page_cost | 1\n> > > (5 rows)\n> > > \n> > > To model an all-in-RAM database, you can either dial down both\n> > > random_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost\n> > > to 1 and increase all the CPU costs. The former is less effort ;-)\n> > > \n> > > It should be noted also that there's not all that much evidence backing\n> > > up the default values of the cpu_xxx_cost variables. In the past those\n> > > didn't matter much because I/O costs always swamped CPU costs anyway.\n> > > But I can foresee us having to twiddle those defaults and maybe refine\n> > > the CPU cost model more, as all-in-RAM cases get more common.\n> > \n> > Especially the context switch + copy between shared buffers and system\n> > disk cache will become noticeable at these speeds.\n> > \n> > An easy way to test it is loading a table with a few indexes, once with\n> > a shared_buffers value, which is senough for only the main table and\n> > once with one that fits both table and indexes,\n\nI re-ran the test, and checked idx_blks_read for 28MB case\n\nhannu=# select * from pg_statio_user_indexes where relname =\n'sbuf_test';\n| schemaname | relname | indexrelname | idx_blks_read | idx_blks_hit \n+------------+-----------+--------------+---------------+--------------\n| hannu | sbuf_test | sbuf_test1 | 71376 | 1620908\n| hannu | sbuf_test | sbuf_test2 | 71300 | 1620365\n| hannu | sbuf_test | sbuf_test3 | 71436 | 1619619\n\n\nthis means that there were a total of 214112 index blocks read back from\ndisk cache (obviously at least some of these had to be copied the other\nway as well).\n\nThis seems to indicate about 1 ms for moving pages over user/system\nboundary. 
(Intel Core2 Duo T7500 @ 2.20GHz, Ubuntu 9.10, 4GB RAM)\n\nfor 128MB shared buffers the total idx_blks_read for 3 indexes was about\n6300 .\n\n\n> ok, just to back this up I ran the following test with 28MB and 128MB\n> shared buffers.\n> \n> create table sbuf_test(f1 float, f2 float, f3 float);\n> create index sbuf_test1 on sbuf_test(f1);\n> create index sbuf_test2 on sbuf_test(f2);\n> create index sbuf_test3 on sbuf_test(f3);\n> \n> and then did 3 times the following for each shared_buffers setting\n> \n> truncate sbuf_test;\n> insert into sbuf_test \n> select random(), random(), random() from generate_series(1,600000);\n> \n> the main table size was 31MB, indexes were 18MB each for total size of\n> 85MB\n> \n> in case of 128MB shared buffers, the insert run in 14sec (+/- 1 sec)\n> \n> in case of 28MB shared buffers, the insert run between 346 and 431 sec,\n> that is 20-30 _times_ slower.\n> \n> There was ample space for keeping the indexes in linux cache (it has 1GB\n> cached currently) though the system may have decided to start writing it\n> to disk, so I suspect that most of the time was spent copying random\n> index pages back and forth between shared buffers and disk cache.\n> \n> I did not verify this, so there may be some other factors involved, but\n> this seems like the most obvious suspect.\n> \n> -- \n> Hannu Krosing http://www.2ndQuadrant.com\n> PostgreSQL Scalability and Availability \n> Services, Consulting and Training\n> \n> \n> \n\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Wed, 04 Aug 2010 23:24:40 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Kevin Grittner wrote:\n> Assuming this is real, and not just \"in the noise\" -- what conclusions would you\n> draw from this?\n\nWas trying to demonstrate the general ability of pg_stat_bgwriter \nsnapshots at points in time to directly measure the buffer activity \nHannu was theorizing about, not necessarily show a useful benchmark of \nany sort with that. Watching pgbench create a database isn't all that \ninteresting unless you either a) increase the database scale such that \nat least one timed checkpoint kicks in, or b) turn on archive_mode so \nthe whole WAL COPY optimization is defeated. More on this topic later, \njust happened to have that little example script ready to demonstrate \nthe measurement concept.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 04 Aug 2010 16:38:32 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\t work_mem" }, { "msg_contents": "Hannu Krosing wrote:\n> Do you mean \"written to disk\", or written out from shared_buffers to\n> disk cache ?\n> \n\nThe later turns into the former eventually, so both really. The kernel \nwill do some amount of write combining for you if you're lucky. 
But not \nin all cases; it may decide to write something out to physical disk \nbefore the second write shows up.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 04 Aug 2010 16:40:14 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Hannu Krosing <[email protected]> wrote:\n \n> This seems to indicate about 1 ms for moving pages over\n> user/system boundary. (Intel Core2 Duo T7500 @ 2.20GHz, Ubuntu\n> 9.10, 4GB RAM)\n \nUsing Greg's test script on a box with two cores like this:\n \nIntel(R) Pentium(R) D CPU 3.40GHz\nLinux kgrittn-desktop 2.6.31-22-generic #60-Ubuntu SMP Thu May 27\n00:22:23 UTC 2010 i686 GNU/Linux\n \nDividing the run time by accumulated buffers_backend, it comes to\nless than 0.2 ms per dirty buffer flushed. If I get a few spare\nticks I'll try again while checking what vmstat and oprofile say\nabout how much of that went to things besides the transfer from\nshared buffers to the OS. I mean, it's possible I was waiting on\nactual disk I/O at some point.\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 15:42:49 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\n\t work_mem" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> Was trying to demonstrate the general ability of pg_stat_bgwriter \n> snapshots at points in time to directly measure the buffer\n> activity Hannu was theorizing about, not necessarily show a useful\n> benchmark of any sort with that.\n \nAh, OK. Sorry I didn't pick up on that; I was struggling to tease\nout some particular effect you expected to see in the numbers from\nthat particular run. :-/\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 15:46:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\t\n\t work_mem" }, { "msg_contents": "Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > Of course there are more variables than just *_page_cost, so if you nail\n> > down any other one, you may end with less than 1 for both page costs.\n> \n> > I have always used seq_page_cost = 1 in my thinking and adjusted others\n> > relative to it.\n> \n> Right, seq_page_cost = 1 is sort of the traditional reference point,\n> but you don't have to do it that way. The main point here is that for\n> an all-in-RAM database, the standard page access costs are too high\n> relative to the CPU effort costs:\n> \n> regression=# select name, setting from pg_settings where name like '%cost';\n> name | setting \n> ----------------------+---------\n> cpu_index_tuple_cost | 0.005\n> cpu_operator_cost | 0.0025\n> cpu_tuple_cost | 0.01\n> random_page_cost | 4\n> seq_page_cost | 1\n> (5 rows)\n> \n> To model an all-in-RAM database, you can either dial down both\n> random_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost\n> to 1 and increase all the CPU costs. The former is less effort ;-)\n> \n> It should be noted also that there's not all that much evidence backing\n> up the default values of the cpu_xxx_cost variables. 
In the past those\n> didn't matter much because I/O costs always swamped CPU costs anyway.\n> But I can foresee us having to twiddle those defaults and maybe refine\n> the CPU cost model more, as all-in-RAM cases get more common.\n\nThis confused me. If we are assuing the data is in\neffective_cache_size, why are we adding sequential/random page cost to\nthe query cost routines?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 11 Aug 2010 21:42:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\n work_mem" }, { "msg_contents": "On Wed, Aug 11, 2010 at 9:42 PM, Bruce Momjian <[email protected]> wrote:\n> This confused me.  If we are assuing the data is in\n> effective_cache_size, why are we adding sequential/random page cost to\n> the query cost routines?\n\nSee the comments for index_pages_fetched(). We basically assume that\nall data starts uncached at the beginning of each query - in fact,\neach plan node. effective_cache_size only measures the chances that\nif we hit the same block again later in the execution of something\nlike a nested-loop-with-inner-indexscan, it'll still be in cache.\n\nIt's an extremely weak knob, and unless you have tables or indices\nthat are larger than RAM, the only mistake you can make is setting it\ntoo low.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 11 Aug 2010 22:39:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and work_mem" }, { "msg_contents": "Robert Haas wrote:\n> On Wed, Aug 11, 2010 at 9:42 PM, Bruce Momjian <[email protected]> wrote:\n> > This confused me. ?If we are assuing the data is in\n> > effective_cache_size, why are we adding sequential/random page cost to\n> > the query cost routines?\n> \n> See the comments for index_pages_fetched(). We basically assume that\n> all data starts uncached at the beginning of each query - in fact,\n> each plan node. effective_cache_size only measures the chances that\n> if we hit the same block again later in the execution of something\n> like a nested-loop-with-inner-indexscan, it'll still be in cache.\n> \n> It's an extremely weak knob, and unless you have tables or indices\n> that are larger than RAM, the only mistake you can make is setting it\n> too low.\n\nThe attached patch documents that there is no assumption that data\nremains in the disk cache between queries. I thought this information\nmight be helpful.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +", "msg_date": "Mon, 31 Jan 2011 22:17:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\n work_mem" }, { "msg_contents": "Bruce Momjian wrote:\n> Robert Haas wrote:\n> > On Wed, Aug 11, 2010 at 9:42 PM, Bruce Momjian <[email protected]> wrote:\n> > > This confused me. ?If we are assuing the data is in\n> > > effective_cache_size, why are we adding sequential/random page cost to\n> > > the query cost routines?\n> > \n> > See the comments for index_pages_fetched(). 
We basically assume that\n> > all data starts uncached at the beginning of each query - in fact,\n> > each plan node. effective_cache_size only measures the chances that\n> > if we hit the same block again later in the execution of something\n> > like a nested-loop-with-inner-indexscan, it'll still be in cache.\n> > \n> > It's an extremely weak knob, and unless you have tables or indices\n> > that are larger than RAM, the only mistake you can make is setting it\n> > too low.\n> \n> The attached patch documents that there is no assumption that data\n> remains in the disk cache between queries. I thought this information\n> might be helpful.\n\nApplied.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Tue, 1 Feb 2011 15:24:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions on query planner, join types, and\n work_mem" } ]
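The measurement script Greg Smith refers to in the thread above was sent as an attachment and is not reproduced in this archive. A minimal sketch of the same idea (snapshotting pg_stat_bgwriter before and after a workload to see how much buffer-cache churn it caused) could look like the following; the temp-table name and the diff query are illustrative assumptions, and only the counters shown in Kevin Grittner's output above are used:

-- Take a baseline snapshot before running the workload under test
CREATE TEMP TABLE bgwriter_before AS
    SELECT now() AS ts, * FROM pg_stat_bgwriter;

-- ... run the workload here, e.g. pgbench -i or the query being tuned ...

-- Report how many buffers were written or allocated during the run
SELECT now() - b.ts                                AS elapsed,
       a.checkpoints_timed  - b.checkpoints_timed  AS checkpoints_timed,
       a.checkpoints_req    - b.checkpoints_req    AS checkpoints_req,
       a.buffers_checkpoint - b.buffers_checkpoint AS buffers_checkpoint,
       a.buffers_clean      - b.buffers_clean      AS buffers_clean,
       a.buffers_backend    - b.buffers_backend    AS buffers_backend,
       a.buffers_alloc      - b.buffers_alloc      AS buffers_alloc
  FROM pg_stat_bgwriter a, bgwriter_before b;

Repeating the run under different shared_buffers settings and comparing buffers_backend, as in the examples above, shows how much of the write traffic is being pushed out directly by backends rather than absorbed by the buffer cache.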
[ { "msg_contents": "Hi,\n\nOn a hunch I removed two (legacy) WHERE conditions from the following\nquery I obtained a 158x speed improvement. Yet these condiditions do not\nfilter anything. Does that make any sense?\n\nThe EXPLAIN ANALYSE output is attached with, first the fast version and\nthen the slow one.\n\nI'd like to understand what is at play here to explain such a dramatic\ndifference. This is with pg 8.4.4.\n\nThanks,\n\nselect p3.price as first_price,\n p4.price as second_price,\n p5.price as third_price,\n t.id_cabin_category, t.id_currency, t.id_alert_cruise, t.id_cruise,\n t.created_by, t.cabin_name, t.cabin_cat_code, t.cabin_type_name,\n cr.login, cr.email, fx.currency_symbol, fx.currency_code,\n c.saildate, ct.id_cruise_type, ct.package_name, s.id_ship, s.ship_name\n from (select\n first_value(max(p.id_price)) over w as first_id_price,\n nth_value(max(p.id_price),2) over w as second_id_price,\n p.id_cabin_category, p.id_currency,\n p.created_on > ac.modified_on as is_new_price,\n ac.id_alert_cruise, ac.id_cruise, ac.cabin_name, ac.created_by,\n ac.cabin_cat_code, ac.cabin_type_name\n from alert_to_category ac\n join price p on (ac.id_cabin_category=p.id_cabin_category and\n p.id_cruise=ac.id_cruise and (p.id_currency=ac.id_currency or\n ac.id_currency is null))\n\t\t-- XXX: removing these speeds up query by 158x !\n -- where (ac.created_by=0 or nullif(0, 0) is null)\n -- and (p.id_cruise=0 or nullif(0, 0) is null)\n group by ac.id_cruise,ac.created_by,ac.id_alert_cruise,ac.cabin_name,\n ac.cabin_cat_code, ac.cabin_type_name,\n p.id_cabin_category,p.id_currency,p.id_cruise,\n p.created_on > ac.modified_on\n window w as (partition by\n p.id_currency,p.id_cabin_category,p.id_cruise order by\n p.created_on > ac.modified_on desc\n rows between unbounded preceding and unbounded following)\n order by p.id_cabin_category,p.id_currency) as t\n join cruiser cr on (t.created_by=cr.id_cruiser)\n join cruise c using (id_cruise)\n join cruise_type ct using (id_cruise_type)\n join ship s using (id_ship)\n join currency fx using (id_currency)\n join price p3 on (t.first_id_price=p3.id_price)\n left join price p4 on (t.second_id_price=p4.id_price)\n left join price p5 on (p5.id_price=(select id_price from price \n where id_cruise=p3.id_cruise and id_cabin_category=p3.id_cabin_category \n and id_currency=p3.id_currency and id_price < t.second_id_price \n order by id_price desc limit 1))\n where t.is_new_price is true and p3.price <> p4.price;\n\n-- \nhttp://www.cruisefish.net\n", "msg_date": "Wed, 28 Jul 2010 12:27:44 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "158x query improvement when removing 2 (noop) WHERE conditions" }, { "msg_contents": "On Wednesday 28 July 2010 12:27:44 Louis-David Mitterrand wrote:\n> The EXPLAIN ANALYSE output is attached with, first the fast version and\n> then the slow one.\nI think you forgot to attach it.\n\nAndres\n", "msg_date": "Wed, 28 Jul 2010 12:49:34 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 158x query improvement when removing 2 (noop) WHERE conditions" } ]
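The EXPLAIN ANALYSE output mentioned above never made it into the archive, so the plan difference itself cannot be inspected here. A self-contained way to check whether an always-true predicate of that shape changes the estimates or the chosen plan at all is sketched below; the table, names, and sizes are invented for illustration and make no attempt to reproduce the 158x case:

-- Build a small throwaway table (names and sizes are arbitrary)
CREATE TEMP TABLE price_demo AS
    SELECT g              AS id_cruise,
           g % 100        AS id_cabin_category,
           random() * 100 AS price
      FROM generate_series(1, 100000) AS g;
ANALYZE price_demo;

-- Baseline plan
EXPLAIN ANALYZE
SELECT count(*) FROM price_demo WHERE id_cabin_category = 42;

-- Same query with a predicate like the ones removed in the post above;
-- nullif(0, 0) IS NULL is always true, so the clause filters nothing.
-- Compare the row estimates and plan shape against the baseline.
EXPLAIN ANALYZE
SELECT count(*) FROM price_demo
 WHERE (id_cruise = 0 OR nullif(0, 0) IS NULL)
   AND id_cabin_category = 42;

On the real query the same comparison can be made by running EXPLAIN ANALYZE once with the two commented-out conditions restored and once without them, and posting both outputs.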
[ { "msg_contents": "Hi there,\n\nI have a simple query where I don't understand the planner's choice to \nuse a particular index.\n\nThe main table looks like this:\n\n# \\d sq_ast_attr_val\n Table \"public.sq_ast_attr_val\"\n Column | Type | Modifiers\n-------------+-----------------------+------------------------------\n assetid | character varying(15) | not null\n attrid | integer | not null\n contextid | integer | not null default 0\n custom_val | text |\n use_default | character(1) | not null default '1'::bpchar\nIndexes:\n \"ast_attr_val_pk\" PRIMARY KEY, btree (assetid, attrid, contextid)\n \"sq_ast_attr_val_assetid\" btree (assetid)\n \"sq_ast_attr_val_attrid\" btree (attrid)\n \"sq_ast_attr_val_concat\" btree (((assetid::text || '~'::text) || \nattrid))\n \"sq_ast_attr_val_contextid\" btree (contextid)\n\n\nThe query:\n\nSELECT\n assetid, custom_val\nFROM\n sq_ast_attr_val\nWHERE\n attrid IN (SELECT attrid FROM sq_ast_attr WHERE name = \n'is_contextable' AND (type_code = 'metadata_field_select' OR \nowning_type_code = 'metadata_field'))\n AND contextid = 0\nINTERSECT\nSELECT\n assetid, custom_val\nFROM\n sq_ast_attr_val\nWHERE\n assetid = '62321'\n AND contextid = 0;\n\n\nThe explain analyze plan:\n\nhttp://explain.depesz.com/s/nWs\n\nI'm not sure why it's picking the sq_ast_attr_val_contextid index to do \nthe contextid = 0 check, the other parts (attrid/assetid) are much more \nselective.\n\nIf I drop that particular index:\n\nhttp://explain.depesz.com/s/zp\n\n\nAll (I hope) relevant postgres info:\n\nCentos 5.5 x86_64 running pg8.4.4.\n\nServer has 8gig memory.\n\n# select name, setting, source from pg_settings where name in \n('shared_buffers', 'effective_cache_size', 'work_mem');\n name | setting\n----------------------+--------\nshared_buffers | 262144\neffective_cache_size | 655360\nwork_mem | 32768\n\nAll planner options are enabled:\n\n# select name, setting, source from pg_settings where name like 'enable_%';\n name | setting | source\n-------------------+---------+---------\n enable_bitmapscan | on | default\n enable_hashagg | on | default\n enable_hashjoin | on | default\n enable_indexscan | on | default\n enable_mergejoin | on | default\n enable_nestloop | on | default\n enable_seqscan | on | default\n enable_sort | on | default\n enable_tidscan | on | default\n\nAny insights welcome - thanks!\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Thu, 29 Jul 2010 10:51:09 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "planner index choice" }, { "msg_contents": "Chris <[email protected]> writes:\n> The query:\n\n> SELECT\n> assetid, custom_val\n> FROM\n> sq_ast_attr_val\n> WHERE\n> attrid IN (SELECT attrid FROM sq_ast_attr WHERE name = \n> 'is_contextable' AND (type_code = 'metadata_field_select' OR \n> owning_type_code = 'metadata_field'))\n> AND contextid = 0\n> INTERSECT\n> SELECT\n> assetid, custom_val\n> FROM\n> sq_ast_attr_val\n> WHERE\n> assetid = '62321'\n> AND contextid = 0;\n\n> The explain analyze plan:\n> http://explain.depesz.com/s/nWs\n\nHrm ... are you *certain* that's an 8.4 server? Because the bit with\n\n\tIndex Cond: (sq_ast_attr_val.attrid = \"outer\".attrid)\n\nis a locution that EXPLAIN hasn't used since 8.1, according to a quick\ncheck. 
More recent versions don't say \"outer\".\n\nThe actual problem seems to be that choose_bitmap_and() is choosing to\nadd an indexscan on sq_ast_attr_val_contextid, even though this index\nis a lot less selective than the sq_ast_attr_val_attrid scan it had\nalready picked. I've seen that behavior before, and there were a series\nof patches back in 2006-2007 that seem to have pretty much fixed it.\nSo that's another reason for suspecting you've got an old server version\nthere...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jul 2010 23:53:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner index choice " }, { "msg_contents": "Hi,\n\n> Hrm ... are you *certain* that's an 8.4 server?\n\nYep.\n\n# psql -U postgres -d db\npsql (8.4.4)\n\ndb=# select version();\n version \n\n------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.4 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) \n4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n(1 row)\n\n\n> The actual problem seems to be that choose_bitmap_and() is choosing to\n> add an indexscan on sq_ast_attr_val_contextid, even though this index\n> is a lot less selective than the sq_ast_attr_val_attrid scan it had\n> already picked. I've seen that behavior before, and there were a series\n> of patches back in 2006-2007 that seem to have pretty much fixed it.\n> So that's another reason for suspecting you've got an old server version\n> there...\n\nI just recreated the index and re-ran the explain analyze and it doesn't \ngive the \"outer\" bit any more - not sure how I got that before.\n\ndb=# begin;\nBEGIN\ndb=# create index attr_val_contextid on sq_ast_attr_val(contextid);\nCREATE INDEX\ndb=# analyze sq_ast_attr_val;\nANALYZE\ndb=# explain analyze SELECT\ndb-# assetid, custom_val\ndb-# FROM\ndb-# sq_ast_attr_val\ndb-# WHERE\ndb-# attrid IN (SELECT attrid FROM sq_ast_attr WHERE name =\ndb(# 'is_contextable' AND (type_code = 'metadata_field_select' OR\ndb(# owning_type_code = 'metadata_field'))\ndb-# AND contextid = 0\ndb-# INTERSECT\ndb-# SELECT\ndb-# assetid, custom_val\ndb-# FROM\ndb-# sq_ast_attr_val\ndb-# WHERE\ndb-# assetid = '62321'\ndb-# AND contextid = 0;\n\nhttp://explain.depesz.com/s/br9\n\nWithout that index (again with an analyze after doing a rollback):\n\nhttp://explain.depesz.com/s/gxH\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Thu, 29 Jul 2010 17:29:23 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner index choice" }, { "msg_contents": "> http://explain.depesz.com/s/br9\n> http://explain.depesz.com/s/gxH\n\nWell, I don't have time to do a thorough analysis right now, but in all\nthe plans you've posted there are quite high values in the \"Rows x\" column\n(e.g. the 5727.5 value).\n\nThat means a significant difference in estimated and actual row number,\nwhich may lead to poor choice of indexes etc. 
The planner may simply think\nthe index is better due to imprecise statistics etc.\n\nTry to increase te statistics target for the columns, e.g.\n\nALTER TABLE table ALTER COLUMN column SET STATISTICS integer\n\nwhere \"integer\" is between 0 and 1000 (the default value is 10 so use 100\nor maybe 1000), run analyze and try to run the query again.\n\nTomas\n\n", "msg_date": "Thu, 29 Jul 2010 13:14:54 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: planner index choice" }, { "msg_contents": "[email protected] writes:\n>> http://explain.depesz.com/s/br9\n>> http://explain.depesz.com/s/gxH\n\n> Well, I don't have time to do a thorough analysis right now, but in all\n> the plans you've posted there are quite high values in the \"Rows x\" column\n> (e.g. the 5727.5 value).\n\n> That means a significant difference in estimated and actual row number,\n> which may lead to poor choice of indexes etc. The planner may simply think\n> the index is better due to imprecise statistics etc.\n\nYeah. The sq_ast_attr_val_attrid scan is a lot more selective than the\nplanner is guessing (3378 rows estimated vs an average of 15 actual),\nand I think that is making the difference. If you look at the estimated\nrow counts and costs, it's expecting that adding the second index will\ncut the number of heap fetches about 7x, hence saving somewhere around\n4800 cost units in the heapscan step, more than it thinks the indexscan\nwill cost. But taking 15 row fetches down to 2 isn't nearly enough to\npay for the extra indexscan.\n\n> Try to increase te statistics target for the columns, e.g.\n> ALTER TABLE table ALTER COLUMN column SET STATISTICS integer\n\nIt's worth a try but I'm not sure how much it'll help. A different line\nof attack is to play with the planner cost parameters. In particular,\nreducing random_page_cost would reduce the estimated cost of the heap\nfetches and thus discourage it from using the extra index. If you're\nworking with mostly-cached tables then this would probably improve\nbehavior overall, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Jul 2010 11:20:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner index choice " } ]
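To make the two suggestions above concrete against the schema from this thread, the statements below raise the statistics target on the column whose estimate was off (attrid, estimated at 3378 rows versus roughly 15 actual) and try a cheaper random_page_cost in one session before touching postgresql.conf. The target value 1000 and the cost value are only starting points, and the query shown is just the first branch of the original INTERSECT, kept short for the example:

-- Collect more detailed statistics for the badly estimated column
ALTER TABLE sq_ast_attr_val ALTER COLUMN attrid SET STATISTICS 1000;
ANALYZE sq_ast_attr_val;

-- Try cheaper random page access in this session only, then re-check the plan
SET random_page_cost = 2;   -- go lower if the working set is fully cached
EXPLAIN ANALYZE
SELECT assetid, custom_val
  FROM sq_ast_attr_val
 WHERE attrid IN (SELECT attrid
                    FROM sq_ast_attr
                   WHERE name = 'is_contextable'
                     AND (type_code = 'metadata_field_select'
                          OR owning_type_code = 'metadata_field'))
   AND contextid = 0;

If the better plan only shows up with the lower cost setting, that setting can then be made permanent in postgresql.conf rather than per session.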
[ { "msg_contents": "Hi all.\nI'm wondering about PGSQL scalability.\nIn particular I have two main topics in my mind:\n\n1. What'd be the behavior of the query planner in the case I have\na single huge table with hundreds or thousands of partial indexes\n(just differing by the WHERE clause).\nThis is an idea of mine to make index-partitioning instead of\ntable-partitioning.\n\n2. What'd be the behavior of the query planner in the case I have\nhundreds or thousands of child tables, possibly in a multilevel hierarchy\n(let's say, partitioning by year, month and company).\n\nI fear the presence of linear selection algorithms in these two cases that\nwould kill my design.\n\nIs there any insight about these two points?\n\n-- \nNotOrAnd Information Technologies\nVincenzo Romano\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 29 Jul 2010 19:08:52 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "On Scalability" }, { "msg_contents": "On Thu, 2010-07-29 at 19:08 +0200, Vincenzo Romano wrote:\n> Hi all.\n> I'm wondering about PGSQL scalability.\n> In particular I have two main topics in my mind:\n> \n> 1. What'd be the behavior of the query planner in the case I have\n> a single huge table with hundreds or thousands of partial indexes\n> (just differing by the WHERE clause).\n> This is an idea of mine to make index-partitioning instead of\n> table-partitioning.\n\nWell the planner is not going to care about the partial indexes that\ndon't match the where clause but what you are suggesting is going to\nmake writes and maintenance extremely expensive. It will also increase\nplanning time as the optimizer at a minimum has to discard the use of\nthose indexes.\n\n> \n> 2. What'd be the behavior of the query planner in the case I have\n> hundreds or thousands of child tables, possibly in a multilevel hierarchy\n> (let's say, partitioning by year, month and company).\n\nAgain, test it. Generally speaking the number of child tables directly\ncorrelates to planning time. Most experience that 60-100 tables is\nreally the highest you can go.\n\nIt all depends on actual implementation and business requirements\nhowever.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Thu, 29 Jul 2010 10:12:06 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/7/29 Joshua D. Drake <[email protected]>:\n> On Thu, 2010-07-29 at 19:08 +0200, Vincenzo Romano wrote:\n>> Hi all.\n>> I'm wondering about PGSQL scalability.\n>> In particular I have two main topics in my mind:\n>>\n>> 1. What'd be the behavior of the query planner in the case I have\n>> a single huge table with hundreds or thousands of partial indexes\n>> (just differing by the WHERE clause).\n>> This is an idea of mine to make index-partitioning instead of\n>> table-partitioning.\n>\n> Well the planner is not going to care about the partial indexes that\n> don't match the where clause but what you are suggesting is going to\n> make writes and maintenance extremely expensive. It will also increase\n> planning time as the optimizer at a minimum has to discard the use of\n> those indexes.\n>\n>>\n>> 2. 
What'd be the behavior of the query planner in the case I have\n>> hundreds or thousands of child tables, possibly in a multilevel hierarchy\n>> (let's say, partitioning by year, month and company).\n>\n> Again, test it. Generally speaking the number of child tables directly\n> correlates to planning time. Most experience that 60-100 tables is\n> really the highest you can go.\n>\n> It all depends on actual implementation and business requirements\n> however.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n\nI expect that a more complex schema will imply higher workloads\non the query planner. What I don't know is how the increase in the\nworkload will happen: linearly, sublinearly, polinomially or what?\n\nSignificant testing would require a prototype implementation with\nan almost complete feed of data from the current solution.\nBut I'm at the feasibility study stage and have not enough resources\nfor that.\n\nThanks anyway for the insights, Joshua.\nDoes the 60-100 tables limit applies to a single level\nof inheritance? Or is it more general?\n\n-- \nNotOrAnd Information Technologies\nVincenzo Romano\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 29 Jul 2010 19:34:20 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:\n\n> I expect that a more complex schema will imply higher workloads\n> on the query planner. What I don't know is how the increase in the\n> workload will happen: linearly, sublinearly, polinomially or what?\n> \n> Significant testing would require a prototype implementation with\n> an almost complete feed of data from the current solution.\n> But I'm at the feasibility study stage and have not enough resources\n> for that.\n> \n> Thanks anyway for the insights, Joshua.\n> Does the 60-100 tables limit applies to a single level\n> of inheritance? Or is it more general?\n\nI do not currently have experience (except that it is possible) with\nmulti-level inheritance and postgresql.\n\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Thu, 29 Jul 2010 10:39:44 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/7/29 Joshua D. Drake <[email protected]>:\n> On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:\n>\n>> I expect that a more complex schema will imply higher workloads\n>> on the query planner. What I don't know is how the increase in the\n>> workload will happen: linearly, sublinearly, polynomially or what?\n\nDo you think I should ask somewhere else?\nAny hint?\n\n>> Thanks anyway for the insights, Joshua.\n>> Does the 60-100 tables limit applies to a single level\n>> of inheritance? 
Or is it more general?\n>\n> I do not currently have experience (except that it is possible) with\n> multi-level inheritance and postgresql.\n\nThanks anyway.\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 29 Jul 2010 19:52:12 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Thu, 2010-07-29 at 19:52 +0200, Vincenzo Romano wrote:\n> 2010/7/29 Joshua D. Drake <[email protected]>:\n> > On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:\n> >\n> >> I expect that a more complex schema will imply higher workloads\n> >> on the query planner. What I don't know is how the increase in the\n> >> workload will happen: linearly, sublinearly, polynomially or what?\n> \n> Do you think I should ask somewhere else?\n> Any hint?\n\nThe two people that would likely know the best are on vacation, TGL and\nHeikki. You may have to wait a bit.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Thu, 29 Jul 2010 11:16:50 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "\n> Do you think I should ask somewhere else?\n> Any hint?\n\nI might suggest asking on the pgsql-performance mailing list instead.\nYou'll get *lots* more speculation there. However, the only way you're\nreally going to know is to test.\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 29 Jul 2010 13:09:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/7/29 Josh Berkus <[email protected]>:\n>\n>> Do you think I should ask somewhere else?\n>> Any hint?\n>\n> I might suggest asking on the pgsql-performance mailing list instead.\n> You'll get *lots* more speculation there.  However, the only way you're\n> really going to know is to test.\n\nOr maybe checking against the source code and its documentation, if any.\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 29 Jul 2010 23:30:00 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/7/29 Josh Berkus <[email protected]>:\n>\n>> Or maybe checking against the source code and its documentation, if any.\n>\n> No, not really.  What you really want to know is: what's the real\n> planner overhead of having dozens/hundreds of partial indexes?  What's\n> the write overhead?  
There's no way you can derive that from the source\n> code faster than you can test it.\n\nAgain, as the test would be rather killing for my group at this stage.\n\nI think that knowing whether certain parts have been implemented\nwith linear or sub-linear (or whatever else) algorithms would\ngive good insights about scalability.\n\nAt a first glance it seems that for inheritance some bottleneck is\nhindering a full exploit for table partitioning.\n\nIs there anyone who knows whether those algorithms are linear or not?\n\nAnd of course, I agree that real tests on real data will provide the real thing.\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Fri, 30 Jul 2010 12:24:10 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Fri, Jul 30, 2010 at 11:24 AM, Vincenzo Romano\n<[email protected]> wrote:\n> At a first glance it seems that for inheritance some bottleneck is\n> hindering a full exploit for table partitioning.\n\nThere have been lengthy discussions of how to implement partitioning\nto fix these precise problems, yes.\n\n\n> Is there anyone who knows whether those algorithms are linear or not?\n\nThey're linear in both cases. But they happen at plan time rather than\nquery execution time. So if your application prepares all its queries\nand then uses them many times it would not slow down query execution\nbut would slow down the query planning time. In some applications this\nis much better but in others unpredictable run-times is as bad as long\nrun-times.\n\nAlso in the case of having many partial indexes it would slow down\ninserts and updates as well, though to a lesser degree, and that would\nhappen at execution time.\n\n\n-- \ngreg\n", "msg_date": "Fri, 30 Jul 2010 11:49:37 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/7/30 Greg Stark <[email protected]>:\n> On Fri, Jul 30, 2010 at 11:24 AM, Vincenzo Romano\n> <[email protected]> wrote:\n>> At a first glance it seems that for inheritance some bottleneck is\n>> hindering a full exploit for table partitioning.\n>\n> There have been lengthy discussions of how to implement partitioning\n> to fix these precise problems, yes.\n\nAny reference?\n\n>> Is there anyone who knows whether those algorithms are linear or not?\n>\n> They're linear in both cases. But they happen at plan time rather than\n> query execution time. So if your application prepares all its queries\n> and then uses them many times it would not slow down query execution\n> but would slow down the query planning time. In some applications this\n> is much better but in others unpredictable run-times is as bad as long\n> run-times.\n\nHmmm ... maybe I'm missing the inner meaning of your remarks, Greg.\nBy using PREPARE I run the query planned sooner and I should use\nthe plan with the later execution.\nYou can bet that some of the PREPAREd query variables will\npertain to either the child table's CHECK contraints (for table partitions)\nor to the partial index's WHERE condition (for index partitioning).\n\nIt's exactly this point (execution time) where the \"linearity\" will\nkill the query\nover a largely partitioned table.\n\nIs this what you meant? 
:-)\n\n> Also in the case of having many partial indexes it would slow down\n> inserts and updates as well, though to a lesser degree, and that would\n> happen at execution time.\n\nThis makes fully sense to me.\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Fri, 30 Jul 2010 13:40:31 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "\n> Is there anyone who knows whether those algorithms are linear or not?\n\nRead the code? It's really very accessible, and there's lots and lots\nof comments. While the person who wrote the code is around, isn't it\nbetter to see the real implementation?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 30 Jul 2010 11:51:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/7/30 Josh Berkus <[email protected]>:\n>\n>> Is there anyone who knows whether those algorithms are linear or not?\n>\n> Read the code?  It's really very accessible, and there's lots and lots\n> of comments.  While the person who wrote the code is around, isn't it\n> better to see the real implementation?\n\nIf the programmer(s) who wrote that part is around, a simple hint would suffice.\nEven an hint to where look into the code would be very appreciated: the query\nplanner is not as simple as the \"ls\" command (which is not that simple any\nmore, though).\n\nIt looks like I need to go the hard way ...\nStarting from postgresql-8.4.4/src/backend/optimizer\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\ncel +393398083886 fix +390823454163 fax +3902700506964\ngtalk. [email protected] skype. notorand.it\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Fri, 30 Jul 2010 21:50:41 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Vincenzo Romano wrote:\n> By using PREPARE I run the query planned sooner and I should use\n> the plan with the later execution.\n> You can bet that some of the PREPAREd query variables will\n> pertain to either the child table's CHECK contraints (for table partitions)\n> or to the partial index's WHERE condition (for index partitioning).\n> \n\nPrepared statements are not necessarily a cure for long query planning \ntime, because the sort of planning decisions made with partitioned child \ntables and index selection can need to know the parameter values to \nexecute well; that's usually the situation rather than the exception \nwith partitions. You run the risk that the generic prepared plan will \nend up looking at all the partitions, because at preparation plan time \nit can't figure out which can be excluded. Can only figure that out \nonce they're in there for some types of queries.\n\nI think you aren't quite lined up with the people suggesting \"test it\" \nin terms of what that means. The idea is not that you should build a \nfull on application test case yet, which can be very expensive. The \nidea is that you might explore things like \"when I partition this way \nincreasing the partitions from 1 to n, does query time go up linearly?\" \nby measuring with fake data and a machine-generated schema. 
What's \nhappened in some of these cases is that, despite the theoretical, some \nconstant or external overhead ends up dominating behavior for lower \nnumbers. As an example, it was recognized that the amount of statistics \nfor a table collected with default_statistics_target had a quadratic \nimpact on some aspects of performance. But it turned out that for the \nrange of interesting values to most people, the measured runtime did not \ngo up with the square as feared. Only way that was sorted out was to \nbuild a simple simulation.\n\nHere's a full example from that discussion that shows the sort of tests \nyou probably want to try, and comments on the perils of guessing based \non theory rather than testing:\n\nhttp://archives.postgresql.org/pgsql-hackers/2008-12/msg00601.php\nhttp://archives.postgresql.org/pgsql-hackers/2008-12/msg00687.php\n\ngenerate_series can be very helpful here, and you can even use that to \ngenerate timestamps if you need them in the data set.\n\nThat said, anecdotally everyone agrees that partitions don't scale well \ninto even the very low hundreds for most people, and doing multi-level \nones won't necessarily normally drop query planning time--just the cost \nof maintaining the underlying tables and indexes. My opinion is that \nbuilding a simple partitioned case and watching how the EXPLAIN plans \nchange as you adjust things will be more instructive for you than either \nasking about it or reading the source. Vary the parameters, watch the \nplans, measure things and graph them if you want to visualize the \nbehavior better. Same thing goes for large numbers of partial indexes, \nwhich have a similar query planning impact, but unlike partitions I \nhaven't seen anyone analyze them via benchmarks. I'm sure you could get \nhelp here (probably the performance list is a better spot though) with \ngetting your test case right if you wanted to try and nail that down.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 30 Jul 2010 16:38:17 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Fri, Jul 30, 2010 at 3:50 PM, Vincenzo Romano\n<[email protected]> wrote:\n> 2010/7/30 Josh Berkus <[email protected]>:\n>>\n>>> Is there anyone who knows whether those algorithms are linear or not?\n>>\n>> Read the code?  It's really very accessible, and there's lots and lots\n>> of comments.  While the person who wrote the code is around, isn't it\n>> better to see the real implementation?\n>\n> If the programmer(s) who wrote that part is around, a simple hint would suffice.\n> Even an hint to where look into the code would be very appreciated: the query\n> planner is not as simple as the \"ls\" command (which is not that simple any\n> more, though).\n>\n> It looks like I need to go the hard way ...\n> Starting from postgresql-8.4.4/src/backend/optimizer\n\nI think you're approaching this in the wrong way. You've repeatedly\nsaid you don't want to do all the work of setting up a test, but\ntrying to search the code for algorithms that might not be linear is\nnot going to be easier. I've been reading this thread and I'm fairly\nfamiliar with this code, and I even understand the algorithms pretty\nwell, and I don't know whether they're going to be linear for what you\nwant to do or not. 
Certainly, the overall task of join planning is\nexponential-time in the number of *tables*, but if you're just doing\nSELECT statements on a single table, will that be linear? Tough to\nsay. Certainly, there are a lot of linked lists in there, so if we\nhave any place where we have two nested loops over the list of\nindices, it won't be linear. I can't think of a place where we do\nthat, but that doesn't mean there isn't one. And even if they are\nlinear, or n log n or something, the coefficients could still be\nlousy. Theoretical computer science is one of my favorite subjects,\nbut, it doesn't always tell you what you want to know about the real\nworld.\n\nIt doesn't seem like it should be very hard to figure this out\nempirically. Just create a big database full of random data. Maybe\nyou could populate one of the columns with something like (random() *\n1000)::int. Then you could create partial indices ON\n(some_other_column) WHERE that_column = <blat> for <blat> in 0..999.\nThen you could run some test queries and see how you make out.\n\nOr, I mean, you can read the source code. That's fine, too. It's\njust... I've already read the source code quite a few times, and I\nstill don't know the answer. Admittedly, I wasn't trying to answer\nthis specific question, but still - I don't think it's an easy\nquestion to answer that way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 30 Jul 2010 16:39:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Hi all.\nI laready posted this a couple of months ago on -hackers:\nhttp://archives.postgresql.org/pgsql-hackers/2010-07/msg01519.php\nI've also been directed to ask here for better and deeper details.\n\nWhat came out is that the management of both inheritance hierarchy and\npartial indexes doesn't scale well up as it'd have a linear algorithm\ndeep in its bowels.\n\nWhat's the \"real\" story, then?\n\nThanks in advance.\n\n2010/7/29 Vincenzo Romano <[email protected]>:\n> Hi all.\n> I'm wondering about PGSQL scalability.\n> In particular I have two main topics in my mind:\n>\n> 1. What'd be the behavior of the query planner in the case I have\n> a single huge table with hundreds or thousands of partial indexes\n> (just differing by the WHERE clause).\n> This is an idea of mine to make index-partitioning instead of\n> table-partitioning.\n>\n> 2. What'd be the behavior of the query planner in the case I have\n> hundreds or thousands of child tables, possibly in a multilevel hierarchy\n> (let's say, partitioning by year, month and company).\n>\n> I fear the presence of linear selection algorithms in these two cases that\n> would kill my design.\n>\n> Is there any insight about these two points?\n>\n> --\n> NotOrAnd Information Technologies\n> Vincenzo Romano\n> --\n> NON QVIETIS MARIBVS NAVTA PERITVS\n>\n\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 09:06:46 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Any feedbacks from TGL and Heikki, then?\n\n2010/7/29 Joshua D. Drake <[email protected]>:\n> On Thu, 2010-07-29 at 19:52 +0200, Vincenzo Romano wrote:\n>> 2010/7/29 Joshua D. 
Drake <[email protected]>:\n>> > On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:\n>> >\n>> >> I expect that a more complex schema will imply higher workloads\n>> >> on the query planner. What I don't know is how the increase in the\n>> >> workload will happen: linearly, sublinearly, polynomially or what?\n>>\n>> Do you think I should ask somewhere else?\n>> Any hint?\n>\n> The two people that would likely know the best are on vacation, TGL and\n> Heikki. You may have to wait a bit.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n>\n>\n\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\ncel +393398083886 fix +390823454163 fax +3902700506964\ngtalk. [email protected] skype. notorand.it\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 09:09:58 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On 07.10.2010 10:09, Vincenzo Romano wrote:\n> Any feedbacks from TGL and Heikki, then?\n\nI don't have anything to add to what others said already. Your best \nadvice is to test it yourself.\n\nI would expect the plan time to be linear relative to the number of \npartial indexes or child tables involved, except that constraint \nexclusion of CHECK constraints on the partitions is exponential. But I \nalso wouldn't be surprised if there's some other non-linear aspect there \nthat shows its head with thousands of partitions.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 07 Oct 2010 10:28:29 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Thu, 2010-10-07 at 10:28 +0300, Heikki Linnakangas wrote:\n\n> constraint exclusion of CHECK constraints on the partitions is\n> exponential\n\nConstraint exclusion is linear with respect to number of partitions.\nWhy do you say exponential?\n \n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Development, 24x7 Support, Training and Services\n\n", "msg_date": "Thu, 07 Oct 2010 08:41:30 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On 07.10.2010 10:41, Simon Riggs wrote:\n> On Thu, 2010-10-07 at 10:28 +0300, Heikki Linnakangas wrote:\n>\n>> constraint exclusion of CHECK constraints on the partitions is\n>> exponential\n>\n> Constraint exclusion is linear with respect to number of partitions.\n> Why do you say exponential?\n\nFor some reason I thought the planner needs to check the constraints of \nthe partitions against each other, but you're right, clearly that's not \nthe case. 
Linear it is.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 07 Oct 2010 10:51:09 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Heikki Linnakangas <[email protected]>:\n> On 07.10.2010 10:41, Simon Riggs wrote:\n>>\n>> On Thu, 2010-10-07 at 10:28 +0300, Heikki Linnakangas wrote:\n>>\n>>> constraint exclusion of CHECK constraints on the partitions is\n>>> exponential\n>>\n>> Constraint exclusion is linear with respect to number of partitions.\n>> Why do you say exponential?\n>\n> For some reason I thought the planner needs to check the constraints of the\n> partitions against each other, but you're right, clearly that's not the\n> case. Linear it is.\n>\n> --\n>  Heikki Linnakangas\n>  EnterpriseDB   http://www.enterprisedb.com\n>\n\nMaking these things sub-linear (whether not O(log n) or even O(1) ),\nprovided that there's way to, would make this RDBMS more appealing\nto enterprises.\nI mean also partial indexes (as an alternative to table partitioning).\nBeing able to effectively cope with \"a dozen child tables or so\" it's more\nlike an amateur feature.\nIf you really need partitioning (or just hierarchical stuff) I think you'll need\nfor quite more than a dozen items.\nIf you partition by just weeks, you'll need 50+ a year.\n\nIs there any precise direction to where look into the code for it?\n\nIs there a way to put this into a wish list?\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 14:10:11 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Thu, Oct 7, 2010 at 8:10 AM, Vincenzo Romano\n<[email protected]> wrote:\n> Making these things sub-linear (whether not O(log n) or even O(1) ),\n> provided that there's  way to, would make this RDBMS more appealing\n> to enterprises.\n> I mean also partial indexes (as an alternative to table partitioning).\n> Being able to effectively cope with \"a dozen child tables or so\" it's more\n> like an amateur feature.\n> If you really need partitioning (or just hierarchical stuff) I think you'll need\n> for quite more than a dozen items.\n> If you partition by just weeks, you'll need 50+ a year.\n>\n> Is there any precise direction to where look into the code for it?\n>\n> Is there a way to put this into a wish list?\n\nWell, you can't just arbitrarily turn a O(n) algorithm into an O(lg n)\nalgorithm. I think the most promising approach to scaling to large\nnumbers of partitions is the patch that Itagaki Takahiro was working\non back in July. Unfortunately, that patch still needs a lot of work\n- and some redesign - before it will really meet our needs. Right\nnow, the way to set up partitioning is to create a parent table and\nthen create a bunch of child tables that inherit from them and then\nput mutually exclusive CHECK constraints on all the children and make\nsure constraint_exclusion is on so that the planner can notice when\nnot all children need to be scanned. 
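In its minimal form (names made up, along the lines of the manual's example)
that recipe is just:

CREATE TABLE measurement (logdate date, value integer);

CREATE TABLE measurement_2010_01 (
    CHECK (logdate >= DATE '2010-01-01' AND logdate < DATE '2010-02-01')
) INHERITS (measurement);

CREATE TABLE measurement_2010_02 (
    CHECK (logdate >= DATE '2010-02-01' AND logdate < DATE '2010-03-01')
) INHERITS (measurement);

SET constraint_exclusion = on;
-- for each child the planner tries to prove its CHECK contradicts the
-- WHERE clause; children it can disprove are left out of the plan
EXPLAIN SELECT * FROM measurement WHERE logdate = DATE '2010-02-14';

That proof step runs once per child, which is where the linear cost comes from.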
As a totally general\narchitecture, this is probably hard to beat (or to make sublinear).\nHowever, if we have DDL that allows the user to say: this is a set of\nchild tables that are range partitions on this key column, with these\nboundaries, then you should be able to make the constraint exclusion\ncalculations much more efficient, because it won't have to infer so\nmuch from first principles. O(lg n) doesn't seem out of the question\ngiven that architecture.\n\nI think, though, that that is still some way off. If you're in a\nposition to help with (or fund) the coding, it can be made to happen\nfaster, of course.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 7 Oct 2010 09:30:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 07.10.2010 10:41, Simon Riggs wrote:\n>> Constraint exclusion is linear with respect to number of partitions.\n>> Why do you say exponential?\n\n> For some reason I thought the planner needs to check the constraints of \n> the partitions against each other, but you're right, clearly that's not \n> the case. Linear it is.\n\nWell, it's really more like O(mn) where m is the number of partitions\nand n is the number of clauses in the query --- and not only that, but\nthe O() notation is hiding a depressingly high constant factor. And\nthen there are practical problems like failing to exclude partitions as\nsoon as there are any parameters in the query.\n\nThere's basically no way that we're going to get decent performance for\nlarge numbers of partitions as long as we have to resort to\ntheorem-proving to lead us to the correct partition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Oct 2010 09:52:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability " }, { "msg_contents": "2010/10/7 Robert Haas <[email protected]>:\n> Well, you can't just arbitrarily turn a O(n) algorithm into an O(lg n)\n\nThat's trivially true. I was not asking for the recipe to do it.\n\n> algorithm.  I think the most promising approach to scaling to large\n> numbers of partitions is the patch that Itagaki Takahiro was working\n> on back in July.  Unfortunately, that patch still needs a lot of work\n> - and some redesign - before it will really meet our needs.  Right\n> now, the way to set up partitioning is to create a parent table and\n> then create a bunch of child tables that inherit from them and then\n> put mutually exclusive CHECK constraints on all the children and make\n> sure constraint_exclusion is on so that the planner can notice when\n> not all children need to be scanned.  As a totally general\n> architecture, this is probably hard to beat (or to make sublinear).\n\nThis is exactly what's described into the official documentation.\nEveryone I ask information about before going deeper in test I get\nthe same answer: don't try to use more than a dozen child tables.\n\n> However, if we have DDL that allows the user to say: this is a set of\n> child tables that are range partitions on this key column, with these\n> boundaries, then you should be able to make the constraint exclusion\n> calculations much more efficient, because it won't have to infer so\n> much from first principles.  
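Purely as a strawman (no such syntax exists in any released version), the
declarative form could read something like:

-- hypothetical DDL, shown only to make the idea concrete
CREATE TABLE measurement (logdate date, value integer)
    PARTITION BY RANGE (logdate);

CREATE TABLE measurement_2010_01 PARTITION OF measurement
    FOR VALUES FROM ('2010-01-01') TO ('2010-02-01');

with the bounds kept as structured catalog entries the planner can search
directly, rather than CHECK expressions it has to reason about.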
O(lg n) doesn't seem out of the question\n> given that architecture.\n\nI see the main problem in the way the planner \"understands\" which partition\nis useful and which one is not.\nHaving the DDL supporting the feature could just be syntactic sugar\nif the underlying mechanism is inadequate.\n\n> I think, though, that that is still some way off.  If you're in a\n> position to help with (or fund) the coding, it can be made to happen\n> faster, of course.\n\nThis is why I was asking for directions: brwosing the whole code to look for the\nrelevant stuff is quite time consuming.\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 15:57:04 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Tom Lane <[email protected]>:\n> Heikki Linnakangas <[email protected]> writes:\n>> On 07.10.2010 10:41, Simon Riggs wrote:\n>>> Constraint exclusion is linear with respect to number of partitions.\n>>> Why do you say exponential?\n>\n>> For some reason I thought the planner needs to check the constraints of\n>> the partitions against each other, but you're right, clearly that's not\n>> the case. Linear it is.\n>\n> Well, it's really more like O(mn) where m is the number of partitions\n> and n is the number of clauses in the query --- and not only that, but\n> the O() notation is hiding a depressingly high constant factor.  And\n> then there are practical problems like failing to exclude partitions as\n> soon as there are any parameters in the query.\n\n\nDoes the same considerations apply to partial indexes?\nI mean, I can replace table partitioning with index partitioning concept.\n(Well I know it's not really the same).\nWould then it be the same O(nm) to let the planner choose the right indexes\ngiven a certain query?\n\n> There's basically no way that we're going to get decent performance for\n> large numbers of partitions as long as we have to resort to\n> theorem-proving to lead us to the correct partition.\n>\n>                        regards, tom lane\n>\n\nI'm not sure about MySQL, but Oracle can handle large partitioning.\nSo I would say there's a way to achieve the same goal.\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\ncel +393398083886 fix +390823454163 fax +3902700506964\ngtalk. [email protected] skype. notorand.it\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 16:20:25 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "* Vincenzo Romano ([email protected]) wrote:\n> I see the main problem in the way the planner \"understands\" which partition\n> is useful and which one is not.\n> Having the DDL supporting the feature could just be syntactic sugar\n> if the underlying mechanism is inadequate.\n\nI'm pretty sure the point with the DDL would be to have a way for the\nuser to communicate to the planner more understanding about the\npartitioning, not just to be syntactic sugar. 
With that additional\ninformation, the planner can make a faster and better decision.\n\n\tStephen", "msg_date": "Thu, 7 Oct 2010 10:29:27 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Vincenzo Romano wrote:\n> I see the main problem in the way the planner \"understands\" which partition\n> is useful and which one is not.\n> Having the DDL supporting the feature could just be syntactic sugar\n> if the underlying mechanism is inadequate.\n> \n\nYou have the order of this backwards. In order to do better than the \nway the current scheme is implemented, the optimizer needs higher \nquality metadata about the structure of the partitions to work with. \nRight now, it's inferring them from the CHECK constraints, which \nrequires the whole theorem-proving bit Tom mentioned. That's never \ngoing to get any more algorithmically efficient than it already is.\n\nIf the DDL that created the partitions also made better quality metadata \navailable about the structure of the partitions, at that point it would \nbe possible to also do better in how the optimizer pruned partitions to \nconsider too. If the list it has was known to be in a particular \nstructured/sorted order, the optimizer could do a binary search to find \nrelevant partitions, rather than the linear scan required right now.\n\nUntil that work is done, any other improvement attempts are doomed to \nfail. That's the point Robert was trying to make to you. And the fact \nOracle does this is why it's able to scale to high partition counts \nbetter than PostgreSQL can.\n\nYou can read more about the work that was being done here at \nhttp://wiki.postgresql.org/wiki/Table_partitioning\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\n\n", "msg_date": "Thu, 07 Oct 2010 10:32:38 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Stephen Frost <[email protected]>:\n> * Vincenzo Romano ([email protected]) wrote:\n>> I see the main problem in the way the planner \"understands\" which partition\n>> is useful and which one is not.\n>> Having the DDL supporting the feature could just be syntactic sugar\n>> if the underlying mechanism is inadequate.\n>\n> I'm pretty sure the point with the DDL would be to have a way for the\n> user to communicate to the planner more understanding about the\n> partitioning, not just to be syntactic sugar.  
With that additional\n> information, the planner can make a faster and better decision.\n>\n>        Stephen\n\nWhich kind of information are you thinking about?\nI think that the stuff you put into the CHECK condition for the table\nwill say it all.\nInfact there you have not just the column names with relevant values, but the\nactual expression(s) to be checked,\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 16:33:13 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Greg Smith <[email protected]>:\n> Vincenzo Romano wrote:\n>>\n>> I see the main problem in the way the planner \"understands\" which\n>> partition\n>> is useful and which one is not.\n>> Having the DDL supporting the feature could just be syntactic sugar\n>> if the underlying mechanism is inadequate.\n>>\n>\n> You have the order of this backwards.  In order to do better than the way\n> the current scheme is implemented, the optimizer needs higher quality\n> metadata about the structure of the partitions to work with.  Right now,\n> it's inferring them from the CHECK constraints, which requires the whole\n> theorem-proving bit Tom mentioned.  That's never going to get any more\n> algorithmically efficient than it already is.\n> If the DDL that created the partitions also made better quality metadata\n> available about the structure of the partitions, at that point it would be\n> possible to also do better in how the optimizer pruned partitions to\n> consider too.  If the list it has was known to be in a particular\n> structured/sorted order, the optimizer could do a binary search to find\n> relevant partitions, rather than the linear scan required right now.\n\n\nDo you mean the check constraint is used as plain text to be (somehow) executed?\nIf this is the case, then you (all) are perfectly and obviously right\nand I'm just fishing\nfor bicycles in the sea.\n\nI would expect a parser to ... ehm ... parse the CHECK constraint\nexpression at \"CREATE TABLE \" time and\nextract all the needed \"high quality metadata\", like the list of\ncolumns involved and the type of\nchecks (range, value list, etc.).\nThe same would be useful for partial indexes, as well.\n\nBut maybe this is just wishful thinking.\n\n> Until that work is done, any other improvement attempts are doomed to fail.\n>  That's the point Robert was trying to make to you.  And the fact Oracle\n> does this is why it's able to scale to high partition counts better than\n> PostgreSQL can.\n>\n> You can read more about the work that was being done here at\n> http://wiki.postgresql.org/wiki/Table_partitioning\n\nDone. As well as the official documentation.\nThe point is that there are no hints on the topic.\nThere should be a \"caveat\" in the documentation saying that partitioning\nis not scalable. 
As well as partial indexing.\n\nThanks so far for the information.\n\n> --\n> Greg Smith, 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support  www.2ndQuadrant.us\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 16:44:34 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "* Vincenzo Romano ([email protected]) wrote:\n> Which kind of information are you thinking about?\n> I think that the stuff you put into the CHECK condition for the table\n> will say it all.\n\nThe problem is that CHECK conditions can contain just about anything,\nhence the planner needs to deal with that possibility.\n\n> Infact there you have not just the column names with relevant values, but the\n> actual expression(s) to be checked,\n\nYes, that would be the problem. Proving something based on expressions\nis alot more time consuming and complicated than being explicitly told\nwhat goes where.\n\n\tStephen", "msg_date": "Thu, 7 Oct 2010 10:52:19 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Stephen Frost <[email protected]>:\n> * Vincenzo Romano ([email protected]) wrote:\n>> Which kind of information are you thinking about?\n>> I think that the stuff you put into the CHECK condition for the table\n>> will say it all.\n>\n> The problem is that CHECK conditions can contain just about anything,\n> hence the planner needs to deal with that possibility.\n\nNot really. For partitioning there would be some constraints as you\nhave in the DEFAULT values.\n\n>> Infact there you have not just the column names with relevant values, but the\n>> actual expression(s) to be checked,\n>\n> Yes, that would be the problem.  Proving something based on expressions\n> is alot more time consuming and complicated than being explicitly told\n> what goes where.\n\nConsuming computing resources at DDL-time should be OK if that will\nlead to big savings at DML-time (run-time), my opinion. It'd be just like\ncompile time optimizations.\n\n>        Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.9 (GNU/Linux)\n>\n> iEYEARECAAYFAkyt3qMACgkQrzgMPqB3kiih3wCcCwLlvpDCjgG5LSgim/XGieEE\n> MsEAn0mHfAizDOpvepGXWTWlxHtJibA5\n> =Szx4\n> -----END PGP SIGNATURE-----\n>\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 17:03:40 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Excerpts from Vincenzo Romano's message of jue oct 07 10:44:34 -0400 2010:\n\n> Do you mean the check constraint is used as plain text to be (somehow) executed?\n> If this is the case, then you (all) are perfectly and obviously right\n> and I'm just fishing\n> for bicycles in the sea.\n\nYeah, hence this thread hasn't advanced things very much in any useful\ndirection. 
That we need to improve the partitioning implementation is\nalready known.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 07 Oct 2010 11:07:18 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Alvaro Herrera <[email protected]>:\n> Excerpts from Vincenzo Romano's message of jue oct 07 10:44:34 -0400 2010:\n>\n>> Do you mean the check constraint is used as plain text to be (somehow) executed?\n>> If this is the case, then you (all) are perfectly and obviously right\n>> and I'm just fishing\n>> for bicycles in the sea.\n>\n> Yeah, hence this thread hasn't advanced things very much in any useful\n> direction.  That we need to improve the partitioning implementation is\n> already known.\n\nMaybe I'm willing to help and possibly able to.\nBut I need to understand things that are already known but I didn't know yet.\n\n> --\n> Álvaro Herrera <[email protected]>\n> The PostgreSQL Company - Command Prompt, Inc.\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 17:10:40 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "* Vincenzo Romano ([email protected]) wrote:\n> I would expect a parser to ... ehm ... parse the CHECK constraint\n> expression at \"CREATE TABLE \" time and\n> extract all the needed \"high quality metadata\", like the list of\n> columns involved and the type of\n> checks (range, value list, etc.).\n\nCheck constraints can be added after the table is created. Inheiritance\ncan be added/changed independently of check constraints. Hacking all of\nthe inheiritance, check constraint creation, and any other possibly\ninvolved code paths to try to figure out if this particular table, check\nconstraint, inheiritance relationship, etc, is part of a partitioning\nsetup isn't exactly trivial, or the right approach.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 7 Oct 2010 11:12:44 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "* Vincenzo Romano ([email protected]) wrote:\n> 2010/10/7 Stephen Frost <[email protected]>:\n> > * Vincenzo Romano ([email protected]) wrote:\n> > The problem is that CHECK conditions can contain just about anything,\n> > hence the planner needs to deal with that possibility.\n> \n> Not really. For partitioning there would be some constraints as you\n> have in the DEFAULT values.\n\nHow do we know when it's partitioning and not a CHECK constraint being\nused for something else..? I'll tell you- through the user using\nspecific partitioning DDL statements.\n\n> Consuming computing resources at DDL-time should be OK if that will\n> lead to big savings at DML-time (run-time), my opinion. It'd be just like\n> compile time optimizations.\n\nCHECK constraints, inheiritance, etc, are general things which can be\nused for more than just partitioning. Abusing them to go through tons\nof extra gyrations to make the specific partitioning case faster at DML\ntime (if that's really even possible... 
I'm not convinced you could\nmake it bullet-proof) isn't a good approach.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 7 Oct 2010 11:15:44 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Stephen Frost <[email protected]>:\n> * Vincenzo Romano ([email protected]) wrote:\n>> 2010/10/7 Stephen Frost <[email protected]>:\n>> > * Vincenzo Romano (vincenzo.romano@notorand\n.it) wrote:\n>> > The problem is that CHECK conditions can contain just about anything,\n>> > hence the planner needs to deal with that possibility.\n>>\n>> Not really. For partitioning there would be some constraints as you\n>> have in the DEFAULT values.\n>\n> How do we know when it's partitioning and not a CHECK constraint being\n> used for something else..?\n\nWhy asking? You don't need to tell them apart.\n\"Just\" parse the expression, extract the metadata to be used when the expression\nneed to be evaluated. Being it a \"plain\" CHECK constraint or something\nfor the partition\nmanagement would then be irrelevant.\n\n> I'll tell you- through the user using\n> specific partitioning DDL statements.\n\nThat could be the next step, once the underlying stuff is already in place.\n\n>> Consuming computing resources at DDL-time should be OK if that will\n>> lead to big savings at DML-time (run-time), my opinion. It'd be just like\n>> compile time optimizations.\n>\n> CHECK constraints, inheiritance, etc, are general things which can be\n> used for more than just partitioning.  Abusing them to go through tons\n> of extra gyrations to make the specific partitioning case faster at DML\n> time (if that's really even possible...  I'm not convinced you could\n> make it bullet-proof) isn't a good approach.\n\nAt the moment I'm not interested in particular cases.\nI think that CHECK constraints (as well as partial indexes expressions) should\nbe handled in a more effective way. Better partitioning (both for\ntables and indexes) would\nbe a side effect.\n\nThanks for the insights.\n\n>        Thanks,\n>\n>                Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.9 (GNU/Linux)\n>\n> iEYEARECAAYFAkyt5CAACgkQrzgMPqB3kijAUACfd9QcB00Nic6mSwWmwoXABc4p\n> kBoAnAijF39ZTFOGjpk1CN/8/I3Tj9HI\n> =C8G/\n> -----END PGP SIGNATURE-----\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 17:23:54 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Stephen Frost <[email protected]>:\n> * Vincenzo Romano ([email protected]) wrote:\n>> I would expect a parser to ... ehm ... parse the CHECK constraint\n>> expression at \"CREATE TABLE \" time and\n>> extract all the needed \"high quality metadata\", like the list of\n>> columns involved and the type of\n>> checks (range, value list, etc.).\n>\n> Check constraints can be added after the table is created.  Inheiritance\n> can be added/changed independently of check constraints.  
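For example, nothing stops someone from doing (names invented):

ALTER TABLE foo_child NO INHERIT foo_parent;
-- a CHECK that has nothing to do with any partitioning scheme
ALTER TABLE foo_child ADD CONSTRAINT foo_child_region_ck
    CHECK (region IN ('EU', 'US'));
ALTER TABLE foo_child INHERIT foo_parent;

at any point in the table's life.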
Hacking all of\n> the inheiritance, check constraint creation, and any other possibly\n> involved code paths to try to figure out if this particular table, check\n> constraint, inheiritance relationship, etc, is part of a partitioning\n> setup isn't exactly trivial, or the right approach.\n>\n>        Thanks,\n>\n>                Stephen\n\nI think none will say things are trivial.\nSo, what'd be the right approach in your vision?\nI mean, if you think about partitioning a-la Oracle, then you'll have to\nparse those expressions anyway.\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\ncel +393398083886 fix +390823454163 fax +3902700506964\ngtalk. [email protected] skype. notorand.it\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 17:25:31 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Vincenzo Romano <[email protected]> wrote:\n> 2010/10/7 Stephen Frost <[email protected]>:\n \n>> Yes, that would be the problem. Proving something based on\n>> expressions is alot more time consuming and complicated than\n>> being explicitly told what goes where.\n> \n> Consuming computing resources at DDL-time should be OK if that\n> will lead to big savings at DML-time (run-time), my opinion. It'd\n> be just like compile time optimizations.\n \nI think something you haven't entirely grasped is how pluggable\nPostgreSQL is -- you can not only define your own functions in a\nwide variety of languages (including C), but your own data types,\noperators, casts, index strategies, etc. Determining, even at DDL\ntime that even a built-in datatype's expression is or isn't useful\nin partitioning could be quite painful in the absence of syntax\nspecifically geared toward partitioning. If there's a CHECK\nconstraint on a polygon column to ensure that it isn't a concave\npolygon, you might be facing a lot of work to know whether it's\ninvolved in partitioning. Now imagine that a CHECK constraint is on\na column with a user defined type and uses the @%!! 
operator and\nthat the user has changed some of the allowed implicit casts used in\nthe expression.\n \nWhile this flexibility is a great strength of PostgreSQL, it makes\nsome things more difficult to implement than they would be in more\nlimited database products.\n \n-Kevin\n", "msg_date": "Thu, 07 Oct 2010 10:35:29 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "* Vincenzo Romano ([email protected]) wrote:\n> So, what'd be the right approach in your vision?\n\nHave you read http://wiki.postgresql.org/wiki/Table_partitioning and the\nvarious places it links to..?\n\n> I mean, if you think about partitioning a-la Oracle, then you'll have to\n> parse those expressions anyway.\n\nOracle's approach is discussed there.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 7 Oct 2010 11:41:29 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Stephen Frost <[email protected]>:\n> * Vincenzo Romano ([email protected]) wrote:\n>> So, what'd be the right approach in your vision?\n>\n> Have you read http://wiki.postgresql.org/wiki/Table_partitioning and the\n> various places it links to..?\n>\n>> I mean, if you think about partitioning a-la Oracle, then you'll have to\n>> parse those expressions anyway.\n>\n> Oracle's approach is discussed there.\n\nI didn't meant the implementation, but the goals achieved.\n\n>\n>        Thanks,\n>\n>                Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.9 (GNU/Linux)\n>\n> iEYEARECAAYFAkyt6ikACgkQrzgMPqB3kih0HwCcD8rQQhD6oXao8ZnG/bMAvx2d\n> 4HkAnjrzox4XemzVyFkhKRXb3ZjS2nba\n> =6WlP\n> -----END PGP SIGNATURE-----\n>\n>\n\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Thu, 7 Oct 2010 19:08:24 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "Firstly I want to say I think this discussion is over-looking some\nbenefits of the current system in other use cases. I don't think we\nshould get rid of the current system even once we have \"proper\"\npartitioning. It solves use cases such as data warehouse queries that\nneed to do a full table scan of some subset of the data which happens\nto be located in a single sub-table quite well. In that case being\nable to do a sequential scan instead of an index range scan is a big\nbenefit and the overhead of the analysis is irrelevant for a data\nwarehouse query. And the constraint may or may not have anything to do\nwith the partitioning key. You cold have constraints like \"customer_id\nin (...)\" for last month's financial records so lookups for new\ncustomers don't need to check all the historical tables from before\nthey became customers.\n\nIn fact what I'm interested in doing is extending the support to use\nstats on children marked read-only. If we have a histogram for a table\nwhich has been marked read-only since the table was analyzed then we\ncould trust the upper and lower bounds or the most-frequent-list to\nexclude partitions. 
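Those stats are already visible today, e.g. (child table and column names here
are only illustrative):

SELECT most_common_vals, histogram_bounds
  FROM pg_stats
 WHERE tablename = 'invoices_2010q1'
   AND attname   = 'invoice_date';

the planner just can't rely on them for exclusion, because rows may have
changed since the last ANALYZE.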
That would really help for things like date-range\nlookups on tables where the partition key is \"financial quarter\" or\n\"invoice_id\" or some other nearly perfectly correlated column.\n\nNone of this replaces having a good partitioning story for OLTP\nqueries and management needs. But it extends the usefulness of that\nsetup to data warehouse queries on other related columns that haven't\nbeen explicitly declared as the partitioning key.\n\n\nOn Thu, Oct 7, 2010 at 8:35 AM, Kevin Grittner\n<[email protected]> wrote:\n> Vincenzo Romano <[email protected]> wrote:\n>> 2010/10/7 Stephen Frost <[email protected]>:\n>\n>>> Yes, that would be the problem.  Proving something based on\n>>> expressions is alot more time consuming and complicated than\n>>> being explicitly told what goes where.\n>>\n>> Consuming computing resources at DDL-time should be OK if that\n>> will lead to big savings at DML-time (run-time), my opinion. It'd\n>> be just like compile time optimizations.\n>\n> I think something you haven't entirely grasped is how pluggable\n> PostgreSQL is -- you can not only define your own functions in a\n> wide variety of languages (including C), but your own data types,\n> operators, casts, index strategies, etc.\n\nI suspect it's likely that a partitioning system would only work with\nbtree opclasses anyways. It might be interesting to think about what\nit would take to make the setups we've talked about in the past work\nwith arbitrary operator classes as long as those operator classes\nsupport some concept of \"mutually exclusive\". But nothing we've talked\nabout so far would be that flexible.\n\nPre-analyzing the check constraints to construct a partitioning data\nstructure might even be a plausible way to move forward -- I don't see\nany obvious show-stoppers. The system could look for a set of btree\nopclass based conditions that guarantee all the partitions are\nmutually exclusive ranges.\n\nMy instincts tell me it would be less useful though because there's\nless the system would be able to do with that structure to help the\nuser. That is, if it *can't* prove the constraints are mutually\nexclusive then the user is left with a bunch of check constraints and\nno useful feedback about what they've done wrong. And if it can prove\nit the user is happy but the next time he has to add a partition he\nhas to look at the existing partitions and carefully construct his\ncheck constraint instead of having the system help him out by\nsupplying one side of the bounds and providing a convenient syntax. It\nwould also be hard to specify how to automatically add partitions\nwhich I expect is a feature people will want eventually.\n\nThere are some plus sides as well -- allowing some optimizations for\ncheck constraints without requiring the user to promise to always use\nthat as their partitioning key in the future. 
But I think on the whole\nit would be a disadvantage.\n\n\n-- \ngreg\n", "msg_date": "Thu, 7 Oct 2010 12:35:10 -0700", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Thu, 2010-10-07 at 14:10 +0200, Vincenzo Romano wrote:\n\n> Making these things sub-linear (whether not O(log n) or even O(1) ),\n> provided that there's way to, would make this RDBMS more appealing\n> to enterprises.\n> I mean also partial indexes (as an alternative to table partitioning).\n> Being able to effectively cope with \"a dozen child tables or so\" it's more\n> like an amateur feature.\n> If you really need partitioning (or just hierarchical stuff) I think you'll need\n> for quite more than a dozen items.\n> If you partition by just weeks, you'll need 50+ a year.\n> \n> Is there any precise direction to where look into the code for it?\n> \n> Is there a way to put this into a wish list?\n\nIt's already on the wish list (\"TODO\") and has been for many years.\n\nWe've mostly lacked somebody with the experience and time/funding to\ncomplete that implementation work. I figure I'll be doing it for 9.2\nnow; it may be difficult to do this for next release.\n\nTheoretically, this can be O(n.log n) for range partitioning and O(1)\nfor exact value partitioning, though the latter isn't a frequent use\ncase.\n\nYour conclusion that the current partitioning only works with a dozen or\nso items doesn't match the experience of current users however.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Development, 24x7 Support, Training and Services\n\n", "msg_date": "Thu, 07 Oct 2010 22:42:59 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Simon Riggs <[email protected]>:\n> On Thu, 2010-10-07 at 14:10 +0200, Vincenzo Romano wrote:\n>\n>> Making these things sub-linear (whether not O(log n) or even O(1) ),\n>> provided that there's  way to, would make this RDBMS more appealing\n>> to enterprises.\n>> I mean also partial indexes (as an alternative to table partitioning).\n>> Being able to effectively cope with \"a dozen child tables or so\" it's more\n>> like an amateur feature.\n>> If you really need partitioning (or just hierarchical stuff) I think you'll need\n>> for quite more than a dozen items.\n>> If you partition by just weeks, you'll need 50+ a year.\n>>\n>> Is there any precise direction to where look into the code for it?\n>>\n>> Is there a way to put this into a wish list?\n>\n> It's already on the wish list (\"TODO\") and has been for many years.\n>\n> We've mostly lacked somebody with the experience and time/funding to\n> complete that implementation work. 
I figure I'll be doing it for 9.2\n> now; it may be difficult to do this for next release.\n>\n> Theoretically, this can be O(n.log n) for range partitioning and O(1)\n> for exact value partitioning, though the latter isn't a frequent use\n> case.\n\nO(n*log n) is what I would expect from a good algorithm.\n\n> Your conclusion that the current partitioning only works with a dozen or\n> so items doesn't match the experience of current users however.\n\nPeople on the mailing lists says so.\nI think I'm forced now to plan for tests on our side, despite this is\nnot what I'd\nlike to do with the \"most advanced open source database\".\nI'll publish the results on my blog, anyway.\n\n> --\n>  Simon Riggs           www.2ndQuadrant.com\n>  PostgreSQL Development, 24x7 Support, Training and Services\n>\n>\n\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\ncel +393398083886 fix +390823454163 fax +3902700506964\ngtalk. [email protected] skype. notorand.it\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Fri, 8 Oct 2010 08:34:18 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "2010/10/7 Simon Riggs <[email protected]>:\n> On Thu, 2010-10-07 at 14:10 +0200, Vincenzo Romano wrote:\n>\n>> Making these things sub-linear (whether not O(log n) or even O(1) ),\n>> provided that there's  way to, would make this RDBMS more appealing\n>> to enterprises.\n>> I mean also partial indexes (as an alternative to table partitioning).\n>> Being able to effectively cope with \"a dozen child tables or so\" it's more\n>> like an amateur feature.\n>> If you really need partitioning (or just hierarchical stuff) I think you'll need\n>> for quite more than a dozen items.\n>> If you partition by just weeks, you'll need 50+ a year.\n>>\n>> Is there any precise direction to where look into the code for it?\n>>\n>> Is there a way to put this into a wish list?\n>\n> It's already on the wish list (\"TODO\") and has been for many years.\n>\n> We've mostly lacked somebody with the experience and time/funding to\n> complete that implementation work. 
I figure I'll be doing it for 9.2\n> now; it may be difficult to do this for next release.\n>\n> Theoretically, this can be O(n.log n) for range partitioning and O(1)\n> for exact value partitioning, though the latter isn't a frequent use\n> case.\n>\n> Your conclusion that the current partitioning only works with a dozen or\n> so items doesn't match the experience of current users however.\n>\n> --\n>  Simon Riggs           www.2ndQuadrant.com\n>  PostgreSQL Development, 24x7 Support, Training and Services\n>\n\nDo the same conclusions apply to partial indexes?\nI mean, if I have a large number (n>=100 or n>=1000) of partial indexes\non a single very large table (m>=10**12), how good is the planner to choose the\nright indexes to plan a query?\nHas also this algorithm superlinear complexity?\n\n\n-- \nVincenzo Romano at NotOrAnd Information Technologies\nSoftware Hardware Networking Training Support Security\n--\nNON QVIETIS MARIBVS NAVTA PERITVS\n", "msg_date": "Fri, 8 Oct 2010 12:20:14 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Scalability" }, { "msg_contents": "On Fri, Oct 8, 2010 at 3:20 AM, Vincenzo Romano\n<[email protected]> wrote:\n> Do the same conclusions apply to partial indexes?\n> I mean, if I have a large number (n>=100 or n>=1000) of partial indexes\n> on a single very large table (m>=10**12), how good is the planner to choose the\n> right indexes to plan a query?\n> Has also this algorithm superlinear complexity?\n\nNo, it's also linear. It needs to look at every partial index and\ncheck to see whether it's a candidate for your query. Actually that's\ntrue for regular indexes as well but it has the extra step of proving\nthat the partial index includes all the rows your query needs which is\nnot a cheap step.\n\nThe size of the table isn't relevant though, except inasmuch as the\nsavings when actually running the query will be larger for larger\ntables so it may be worth spending more time planning queries on large\ntables.\n\n-- \ngreg\n", "msg_date": "Fri, 8 Oct 2010 09:10:07 -0700", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Scalability" } ]
[ { "msg_contents": "Dear postgresql list,\n\n\n\nI have some troubles generating data\nfor a analysis task at hand.\n\n\n\nI have a table (table A) containing 5\nmillion records and 28 number of attributes. This table is 461MB big\nif I copy it to a csv file.\n\n\n\nI want to create another table (table\nB) based on the contents of table A plus some 15 extra attributes (in\npl/pgsql written functions which produce those extra attributes)\n\n\n\nSo my statement looks like this:\n\n\n\ncreate tableB as (\nselect some attributes,\nfunction1(A.attribute1)as attributeX+1,\nfunction2(A.attribute1,A.Attribute2,A.attribute3,A.attribute4,A.attribute5)\nas attribute X+2......function15(A.attribute1,A.attribute9) as\nattributeX+15 from tableA as A)\n\n\n\nThis takes almost 60 hours to finish on\nmy database server running debian 5.0 with XFS as filesystem\ncontaining 4GB RAM. I'm using postgresql server version 8.3 (but am seeing the same phenomena on my FreeBSD 8.0 database server running postgresql 8.4 as well)\n\n\n\n\nI arrived at 15 functions because I had\n7 or 8 joins in the past and saw that my disk was getting hid and I\nhad heard someplace that RAM is faster so I rewrote those 7 or 8\njoins as functions in pl/pgsql. They were just simple lookups,\nalthough some of the functions are looking stuff up in tables\ncontaining 78000 records. However, I thought this wouldn't be a\nproblem because they are simple functions which look up the value of\none variable based on a parameter. 3 of the more special functions\nare shown here:\n\n\n\nCREATE OR REPLACE FUNCTION agenttype1(a\ncharacter)\n RETURNS integer AS\n$BODY$\nDECLARE \n\ni integer;\nt1_rij canxagents%ROWTYPE;\nBEGIN\nselect * into t1_rij from canxagents\nwhere agent = a;\nif NOT FOUND THEN i := 0;\nELSE\n\tif t1_rij.aantal >= 0 and\nt1_rij.aantal <=499 THEN i := 1;\n\tELSE\n\t\tif t1_rij.aantal > 500 and\nt1_rij.aantal <=1999 THEN i := 2;\n\t\tELSE\n\t\t\tif t1_rij.aantal >= 2000 THEN i\n:= 3;\n\t\t\tEND IF;\n\t\tEND IF;\n\tEND IF;\nEND IF;\nreturn i ;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\n\n\n\n\nCREATE OR REPLACE FUNCTION agenttype2(a\ncharacter)\n RETURNS integer AS\n$BODY$\nDECLARE \n\ni integer;\nt1_rij showagents%ROWTYPE;\nBEGIN\nselect * into t1_rij from showagents\nwhere agent = a;\nif NOT FOUND THEN i := 0;\nELSE\n\tif t1_rij.aantal >= 0 and\nt1_rij.aantal <=499 THEN i := 1;\n\tELSE\n\t\tif t1_rij.aantal > 500 and\nt1_rij.aantal <=999 THEN i := 2;\n\t\tELSE\n\t\t\tif t1_rij.aantal >= 1000 THEN i\n:= 3;\n\t\t\tEND IF;\n\t\tEND IF;\n\tEND IF;\nEND IF;\nreturn i ;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\n\n\n\n\nCREATE OR REPLACE FUNCTION agenttype3(a\ncharacter)\n RETURNS integer AS\n$BODY$\nDECLARE \n\ni integer;\nt1_rij noagents%ROWTYPE;\nBEGIN\nselect * into t1_rij from noagents\nwhere agent = a;\nif NOT FOUND THEN i := 0;\nELSE\n\tif t1_rij.aantal >= 0 and\nt1_rij.aantal <=299 THEN i := 1;\n\tELSE\n\t\tif t1_rij.aantal > 300 and\nt1_rij.aantal <=899 THEN i := 2;\n\t\tELSE\n\t\t\tif t1_rij.aantal >= 900 THEN i :=\n3;\n\t\t\tEND IF;\n\t\tEND IF;\n\tEND IF;\nEND IF;\nreturn i ;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\n\nThe interesting parts of my\npostgresql.conf file look like this:\n\n\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n\n\n# - Memory -\n\n\n\nshared_buffers = 512MB\t\t\t# min 128kB 
or\nmax_connections*16kB \n\n\t\t\t\t\t# (change requires restart)\ntemp_buffers = 8MB\t\t\t# min 800kB\n#max_prepared_transactions = 5\t\t# can\nbe 0 or more\n\t\t\t\t\t# (change requires restart)\n# Note: Increasing\nmax_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space\n(see max_locks_per_transaction).\nwork_mem = 50MB\t\t\t\t# min 64kB \n\nmaintenance_work_mem = 256MB\t\t# min 1MB\n\n#max_stack_depth = 2MB\t\t\t# min 100kB\n\n\n\n# - Free Space Map -\n\n\n\nmax_fsm_pages = 153600\t\t\t# min\nmax_fsm_relations*16, 6 bytes each\n\t\t\t\t\t# (change requires restart)\n#max_fsm_relations = 1000\t\t# min 100,\n~70 bytes each\n\t\t\t\t\t# (change requires restart)\n\n\n\n# - Kernel Resource Usage -\n\n\n\n#max_files_per_process = 1000\t\t# min 25\n\t\t\t\t\t# (change requires restart)\n#shared_preload_libraries = ''\t\t#\n(change requires restart)\n\n\n\n# - Cost-Based Vacuum Delay -\n\n\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000\nmilliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000\ncredits\n#vacuum_cost_page_miss = 10\t\t# 0-10000\ncredits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000\ncredits\n#vacuum_cost_limit = 200\t\t# 1-10000\ncredits\n\n\n\n# - Background Writer -\n\n\n\n#bgwriter_delay = 200ms\t\t\t# 10-10000ms\nbetween rounds\n#bgwriter_lru_maxpages = 100\t\t# 0-1000\nmax buffers written/round\n#bgwriter_lru_multiplier = 2.0\t\t#\n0-10.0 multipler on buffers scanned/round\n\n\n\n\n\n\n#------------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#------------------------------------------------------------------------------\n\n\n\n# - Settings -\n\n\n\n#fsync = on\t\t\t\t# turns forced\nsynchronization on or off\n#synchronous_commit = on\t\t# immediate\nfsync at commit\n#wal_sync_method = fsync\t\t# the default\nis the first option \n\n\t\t\t\t\t# supported by the operating\nsystem:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = on\t\t\t# recover from\npartial page writes\n#wal_buffers = 64kB\t\t\t# min 32kB\n\t\t\t\t\t# (change requires restart)\n#wal_writer_delay = 200ms\t\t# 1-10000\nmilliseconds\n\n\n\n#commit_delay = 0\t\t\t# range 0-100000,\nin microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n\n\n# - Checkpoints -\n\n\n\n#checkpoint_segments = 3\t\t# in logfile\nsegments, min 1, 16MB each\n#checkpoint_timeout = 5min\t\t# range\n30s-1h\n#checkpoint_completion_target = 0.5\t#\ncheckpoint target duration, 0.0 - 1.0\n#checkpoint_warning = 30s\t\t# 0 is off\n\n\n\n# - Archiving -\n\n\n\n#archive_mode = off\t\t# allows archiving\nto be done\n\t\t\t\t# (change requires restart)\n#archive_command = ''\t\t# command to use\nto archive a logfile segment\n#archive_timeout = 0\t\t# force a logfile\nsegment switch after this\n\t\t\t\t# time; 0 is off\n\n\n\n\n\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n\n\n# - Planner Method Configuration -\n\n\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n\n\n# - Planner Cost Constants -\n\n\n\n#seq_page_cost = 1.0\t\t\t# measured on an\narbitrary scale\n#random_page_cost = 4.0\t\t\t# same scale\nas above\n#cpu_tuple_cost = 0.01\t\t\t# same scale\nas 
above\n#cpu_index_tuple_cost = 0.005\t\t# same\nscale as above\n#cpu_operator_cost = 0.0025\t\t# same\nscale as above\neffective_cache_size = 256MB\t\t# was 128\n\n\n\n\n# - Genetic Query Optimizer -\n\n\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default\nbased on effort\n#geqo_generations = 0\t\t\t# selects\ndefault based on effort\n#geqo_selection_bias = 2.0\t\t# range\n1.5-2.0\n\n\n\n# - Other Planner Options -\n\n\n\n#default_statistics_target = 10\t\t#\nrange 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables\ncollapsing of explicit \n\n\t\t\t\t\t# JOIN clauses\n\n\n\n\n\n\n\n\n\nQuestions\n\n\n\nWhat can I do to let the creation\n\tof table B go faster?\n\tDo you think the use of indices\n\t(but where) would help me? I didn't go that route because in fact I\n\tdon't have a where clause in the create table B statement. I could\n\tput indices on the little tables I'm using in the functions. \n\t\n\tWhat about the functions? Should I\n\tcode them differently? \n\t\n\tWhat about my server\n\tconfiguration. What could be done over there?\n\n\n\n\nThanks in advanced\n\n\n\n \n\nDear postgresql list,\n\n\nI have some troubles generating data\nfor a analysis task at hand.\n\n\nI have a table (table A) containing 5\nmillion records and 28 number of attributes. This table is 461MB big\nif I copy it to a csv file.\n\n\nI want to create another table (table\nB) based on the contents of table A plus some 15 extra attributes (in\npl/pgsql written functions which produce those extra attributes)\n\n\nSo my statement looks like this:\n\n\ncreate tableB as (\nselect some attributes,\nfunction1(A.attribute1)as attributeX+1,\nfunction2(A.attribute1,A.Attribute2,A.attribute3,A.attribute4,A.attribute5)\nas attribute X+2......function15(A.attribute1,A.attribute9) as\nattributeX+15 from tableA as A)\n\n\nThis takes almost 60 hours to finish on\nmy database server running debian 5.0 with XFS as filesystem\ncontaining 4GB RAM. I'm using postgresql server version 8.3 (but am seeing the same phenomena on my FreeBSD 8.0 database server running postgresql 8.4 as well)\n\n\nI arrived at 15 functions because I had\n7 or 8 joins in the past and saw that my disk was getting hid and I\nhad heard someplace that RAM is faster so I rewrote those 7 or 8\njoins as functions in pl/pgsql. They were just simple lookups,\nalthough some of the functions are looking stuff up in tables\ncontaining 78000 records. However, I thought this wouldn't be a\nproblem because they are simple functions which look up the value of\none variable based on a parameter. 
3 of the more special functions\nare shown here:\n\n\nCREATE OR REPLACE FUNCTION agenttype1(a\ncharacter)\n RETURNS integer AS\n$BODY$\nDECLARE \n\ni integer;\nt1_rij canxagents%ROWTYPE;\nBEGIN\nselect * into t1_rij from canxagents\nwhere agent = a;\nif NOT FOUND THEN i := 0;\nELSE\n\tif t1_rij.aantal >= 0 and\nt1_rij.aantal <=499 THEN i := 1;\n\tELSE\n\t\tif t1_rij.aantal > 500 and\nt1_rij.aantal <=1999 THEN i := 2;\n\t\tELSE\n\t\t\tif t1_rij.aantal >= 2000 THEN i\n:= 3;\n\t\t\tEND IF;\n\t\tEND IF;\n\tEND IF;\nEND IF;\nreturn i ;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\n\n\nCREATE OR REPLACE FUNCTION agenttype2(a\ncharacter)\n RETURNS integer AS\n$BODY$\nDECLARE \n\ni integer;\nt1_rij showagents%ROWTYPE;\nBEGIN\nselect * into t1_rij from showagents\nwhere agent = a;\nif NOT FOUND THEN i := 0;\nELSE\n\tif t1_rij.aantal >= 0 and\nt1_rij.aantal <=499 THEN i := 1;\n\tELSE\n\t\tif t1_rij.aantal > 500 and\nt1_rij.aantal <=999 THEN i := 2;\n\t\tELSE\n\t\t\tif t1_rij.aantal >= 1000 THEN i\n:= 3;\n\t\t\tEND IF;\n\t\tEND IF;\n\tEND IF;\nEND IF;\nreturn i ;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\n\n\nCREATE OR REPLACE FUNCTION agenttype3(a\ncharacter)\n RETURNS integer AS\n$BODY$\nDECLARE \n\ni integer;\nt1_rij noagents%ROWTYPE;\nBEGIN\nselect * into t1_rij from noagents\nwhere agent = a;\nif NOT FOUND THEN i := 0;\nELSE\n\tif t1_rij.aantal >= 0 and\nt1_rij.aantal <=299 THEN i := 1;\n\tELSE\n\t\tif t1_rij.aantal > 300 and\nt1_rij.aantal <=899 THEN i := 2;\n\t\tELSE\n\t\t\tif t1_rij.aantal >= 900 THEN i :=\n3;\n\t\t\tEND IF;\n\t\tEND IF;\n\tEND IF;\nEND IF;\nreturn i ;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\nThe interesting parts of my\npostgresql.conf file look like this:\n\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n\n# - Memory -\n\n\nshared_buffers = 512MB\t\t\t# min 128kB or\nmax_connections*16kB \n\n\t\t\t\t\t# (change requires restart)\ntemp_buffers = 8MB\t\t\t# min 800kB\n#max_prepared_transactions = 5\t\t# can\nbe 0 or more\n\t\t\t\t\t# (change requires restart)\n# Note: Increasing\nmax_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space\n(see max_locks_per_transaction).\nwork_mem = 50MB\t\t\t\t# min 64kB \n\nmaintenance_work_mem = 256MB\t\t# min 1MB\n\n#max_stack_depth = 2MB\t\t\t# min 100kB\n\n\n# - Free Space Map -\n\n\nmax_fsm_pages = 153600\t\t\t# min\nmax_fsm_relations*16, 6 bytes each\n\t\t\t\t\t# (change requires restart)\n#max_fsm_relations = 1000\t\t# min 100,\n~70 bytes each\n\t\t\t\t\t# (change requires restart)\n\n\n# - Kernel Resource Usage -\n\n\n#max_files_per_process = 1000\t\t# min 25\n\t\t\t\t\t# (change requires restart)\n#shared_preload_libraries = ''\t\t#\n(change requires restart)\n\n\n# - Cost-Based Vacuum Delay -\n\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000\nmilliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000\ncredits\n#vacuum_cost_page_miss = 10\t\t# 0-10000\ncredits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000\ncredits\n#vacuum_cost_limit = 200\t\t# 1-10000\ncredits\n\n\n# - Background Writer -\n\n\n#bgwriter_delay = 200ms\t\t\t# 10-10000ms\nbetween rounds\n#bgwriter_lru_maxpages = 100\t\t# 0-1000\nmax buffers written/round\n#bgwriter_lru_multiplier = 2.0\t\t#\n0-10.0 multipler on buffers scanned/round\n\n\n\n\n#------------------------------------------------------------------------------\n# WRITE AHEAD 
LOG\n#------------------------------------------------------------------------------\n\n\n# - Settings -\n\n\n#fsync = on\t\t\t\t# turns forced\nsynchronization on or off\n#synchronous_commit = on\t\t# immediate\nfsync at commit\n#wal_sync_method = fsync\t\t# the default\nis the first option \n\n\t\t\t\t\t# supported by the operating\nsystem:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = on\t\t\t# recover from\npartial page writes\n#wal_buffers = 64kB\t\t\t# min 32kB\n\t\t\t\t\t# (change requires restart)\n#wal_writer_delay = 200ms\t\t# 1-10000\nmilliseconds\n\n\n#commit_delay = 0\t\t\t# range 0-100000,\nin microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n\n# - Checkpoints -\n\n\n#checkpoint_segments = 3\t\t# in logfile\nsegments, min 1, 16MB each\n#checkpoint_timeout = 5min\t\t# range\n30s-1h\n#checkpoint_completion_target = 0.5\t#\ncheckpoint target duration, 0.0 - 1.0\n#checkpoint_warning = 30s\t\t# 0 is off\n\n\n# - Archiving -\n\n\n#archive_mode = off\t\t# allows archiving\nto be done\n\t\t\t\t# (change requires restart)\n#archive_command = ''\t\t# command to use\nto archive a logfile segment\n#archive_timeout = 0\t\t# force a logfile\nsegment switch after this\n\t\t\t\t# time; 0 is off\n\n\n\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n\n# - Planner Method Configuration -\n\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n\n# - Planner Cost Constants -\n\n\n#seq_page_cost = 1.0\t\t\t# measured on an\narbitrary scale\n#random_page_cost = 4.0\t\t\t# same scale\nas above\n#cpu_tuple_cost = 0.01\t\t\t# same scale\nas above\n#cpu_index_tuple_cost = 0.005\t\t# same\nscale as above\n#cpu_operator_cost = 0.0025\t\t# same\nscale as above\neffective_cache_size = 256MB\t\t# was 128\n\n\n\n# - Genetic Query Optimizer -\n\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default\nbased on effort\n#geqo_generations = 0\t\t\t# selects\ndefault based on effort\n#geqo_selection_bias = 2.0\t\t# range\n1.5-2.0\n\n\n# - Other Planner Options -\n\n\n#default_statistics_target = 10\t\t#\nrange 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables\ncollapsing of explicit \n\n\t\t\t\t\t# JOIN clauses\n\n\n\n\n\n\nQuestions\n\n\nWhat can I do to let the creation\n\tof table B go faster?\nDo you think the use of indices\n\t(but where) would help me? I didn't go that route because in fact I\n\tdon't have a where clause in the create table B statement. I could\n\tput indices on the little tables I'm using in the functions. \n\t\nWhat about the functions? Should I\n\tcode them differently? \n\t\nWhat about my server\n\tconfiguration. 
What could be done over there?\n\n\n\nThanks in advanced", "msg_date": "Thu, 29 Jul 2010 14:58:56 -0700 (PDT)", "msg_from": "Dino Vliet <[email protected]>", "msg_from_op": true, "msg_subject": "How to improve: performance of query on postgresql 8.3 takes days" }, { "msg_contents": " On 07/29/10 2:58 PM, Dino Vliet wrote:\n>\n> Dear postgresql list,\n>\n>\n> I have some troubles generating data for a analysis task at hand.\n>\n>\n> I have a table (table A) containing 5 million records and 28 number of \n> attributes. This table is 461MB big if I copy it to a csv file.\n>\n>\n> I want to create another table (table B) based on the contents of \n> table A plus some 15 extra attributes (in pl/pgsql written functions \n> which produce those extra attributes)\n>\n>\n> So my statement looks like this:\n>\n>\n> create tableB as (\n>\n> select some attributes, function1(A.attribute1)as attributeX+1, \n> function2(A.attribute1,A.Attribute2,A.attribute3,A.attribute4,A.attribute5) \n> as attribute X+2......function15(A.attribute1,A.attribute9) as \n> attributeX+15 from tableA as A)\n>\n>\n> This takes almost 60 hours to finish on my database server running \n> debian 5.0 with XFS as filesystem containing 4GB RAM. I'm using \n> postgresql server version 8.3 (but am seeing the same phenomena on my \n> FreeBSD 8.0 database server running postgresql 8.4 as well)\n>\n>\n> I arrived at 15 functions because I had 7 or 8 joins in the past and \n> saw that my disk was getting hid and I had heard someplace that RAM is \n> faster so I rewrote those 7 or 8 joins as functions in pl/pgsql. They \n> were just simple lookups, although some of the functions are looking \n> stuff up in tables containing 78000 records. However, I thought this \n> wouldn't be a problem because they are simple functions which look up \n> the value of one variable based on a parameter. 3 of the more special \n> functions are shown here:\n>\n> ...\n>\n> 1.\n>\n> What can I do to let the creation of table B go faster?\n>\n> 2.\n>\n> Do you think the use of indices (but where) would help me? I\n> didn't go that route because in fact I don't have a where clause\n> in the create table B statement. I could put indices on the\n> little tables I'm using in the functions.\n>\n> 3.\n>\n> What about the functions? Should I code them differently?\n>\n> 4.\n>\n> What about my server configuration. What could be done over there?\n>\n>\n> Thanks in advanced\n>\n>\n\ncertainly your lookup tables should have a index on the key you're using \nto look up values. without said index, that 78000 row 'little' table \nwill have to be sequentially scanned for every one of your several \nmillion rows.\n\nwith said indexes, you may find that just doing JOINs when you actually \nuse this data rather than creating a new table will work quite nicely. \nyou could use a VIEW to do the joins transparently on the fly.\n\n\n", "msg_date": "Thu, 29 Jul 2010 15:17:40 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve: performance of query on postgresql 8.3 takes days" }, { "msg_contents": "In response to Dino Vliet :\n> I arrived at 15 functions because I had 7 or 8 joins in the past and saw that\n> my disk was getting hid and I had heard someplace that RAM is faster so I\n> rewrote those 7 or 8 joins as functions in pl/pgsql. They were just simple\n> lookups, although some of the functions are looking stuff up in tables\n> containing 78000 records. 
However, I thought this wouldn't be a problem because\n> they are simple functions which look up the value of one variable based on a\n> parameter. 3 of the more special functions are shown here:\n\nI disaagree with you. The database has to do the same job, wherever with\n7 or 8 joins or with functions, but functions (in this case) are slower.\n\nYou should run EXPLAIN <your statement with 7 or 8 joins> and show us\nthe result, i believe there are missing indexes.\n\n\n> # - Memory -\n> \n> \n> shared_buffers = 512MB # min 128kB or max_connections*16kB\n\nHow much RAM contains your server? You should set this to approx. 25% of RAM.\n\n\n> work_mem = 50MB # min 64kB\n\nThat's maybe too much, but it depends on your workload. If you have a\nlot of simultaneous and complex queries you run out of RAM, but if there\nonly one user (only one connection) it's okay.\n\n\n> effective_cache_size = 256MB # was 128\n\nThat's too tow, effective_cache_size = shared_buffers + OS-cache\n\n\n> Questions\n> \n> \n> 1. What can I do to let the creation of table B go faster?\n\nUse JOINs for table-joining, not functions.\n\n\n> \n> 2. Do you think the use of indices (but where) would help me? I didn't go that\n> route because in fact I don't have a where clause in the create table B\n> statement. I could put indices on the little tables I'm using in the\n> functions.\n\nYes! Create indexes on the joining columns.\n\n\n> \n> 3. What about the functions? Should I code them differently?\n\nDon't use functions for that kind of table-joining.\n\n\n> \n> 4. What about my server configuration. What could be done over there?\n\nsee above.\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Fri, 30 Jul 2010 08:11:29 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve: performance of query on postgresql 8.3 takes days" }, { "msg_contents": "On 29 Jul 2010, at 23:58, Dino Vliet wrote:\n\n> CREATE OR REPLACE FUNCTION agenttype1(a character)\n> RETURNS integer AS\n\n> LANGUAGE 'plpgsql' VOLATILE\n> COST 100;\n> \n> \n> CREATE OR REPLACE FUNCTION agenttype2(a character)\n> RETURNS integer AS\n\n> LANGUAGE 'plpgsql' VOLATILE\n> COST 100;\n> \n> \n> CREATE OR REPLACE FUNCTION agenttype3(a character)\n> RETURNS integer AS\n\n> LANGUAGE 'plpgsql' VOLATILE\n> COST 100;\n\nAs others have already said, using these functions will be less efficient than using joins. \n\nRegardless of that though, you should at least declare these functions as STABLE instead of VOLATILE, see:\n\nhttp://www.postgresql.org/docs/8.4/interactive/xfunc-volatility.html\n\nAlban Hertroys\n\n--\nScrewing up is an excellent way to attach something to the ceiling.\n\n\n!DSPAM:737,4c52ae01286211819977167!\n\n\n", "msg_date": "Fri, 30 Jul 2010 12:48:31 +0200", "msg_from": "Alban Hertroys <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve: performance of query on postgresql 8.3 takes days" } ]
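A minimal sketch of the advice given in the thread above (index the lookup keys, then express the lookups as joins rather than per-row plpgsql calls). The table and column names (canxagents.agent, canxagents.aantal, tableA.attribute1) are the ones the poster used; the index names, the SELECT list and the assumption that tableA.attribute1 is the lookup key are illustrative guesses, not the real schema.

-- Without an index, every agenttype*() call scans its ~78000-row lookup
-- table once per input row; with one, each probe is a cheap index lookup.
CREATE INDEX canxagents_agent_idx ON canxagents (agent);
CREATE INDEX showagents_agent_idx ON showagents (agent);
CREATE INDEX noagents_agent_idx   ON noagents (agent);

-- agenttype1() rewritten as a LEFT JOIN plus CASE, so the planner can run
-- one join for the whole statement instead of ~5 million hidden SELECTs.
-- The CASE mirrors the original branches, including the fact that
-- aantal = 500 matches no branch and therefore yields NULL (which looks
-- like an off-by-one in the original function).
CREATE TABLE tableB AS
SELECT a.*,
       CASE
         WHEN c.agent IS NULL                     THEN 0
         WHEN c.aantal BETWEEN 0 AND 499          THEN 1
         WHEN c.aantal > 500 AND c.aantal <= 1999 THEN 2
         WHEN c.aantal >= 2000                    THEN 3
       END AS agenttype1
FROM tableA a
LEFT JOIN canxagents c ON c.agent = a.attribute1;

If the plpgsql functions are kept, declaring them STABLE (as suggested above) is still worth doing, but it is the join form that removes the per-row round trip.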
[ { "msg_contents": "I'm wondering whether columns, in the select list of a view that is used \nin a join, which are not used either as join criteria or in the select \nlist of the overall query, effect the performance of the query.\n\nIn other words supposed I define a view something like\n\nCREATE view MyView AS SELECT a,b, c, d, e, f, g FROM (several tables \njoined together)\n\nAssume for the sake of simplicity there are no aggregates or such, we're \njust joining tables, and getting a bunch of columns back each of the \nvarious tables.\n\nI then then perform a query something like\n\nSELECT v.a, x.h, y.i FROM MyView as v JOIN otherTable as x on (x.m = \nv.a) JOIN yetAnotherTable as y on (y.n=v.a)\n\nDoes all the extra clutter of b,c,d,e,f in MyView affect the performance \nof the query by taking up extra space in memory during the joins or is \nthe optimizer smart enough to realize that they aren't needed and evoke \nthe query as if MyView were really defined as\n\nCREATE view MyView AS SELECT a FROM (several tables joined together)?\n\nThanks,\n\nEric\n", "msg_date": "Fri, 30 Jul 2010 10:55:32 -0400", "msg_from": "Eric Schwarzenbach <[email protected]>", "msg_from_op": true, "msg_subject": "view columns and performance" }, { "msg_contents": "Eric Schwarzenbach <[email protected]> writes:\n> I'm wondering whether columns, in the select list of a view that is used \n> in a join, which are not used either as join criteria or in the select \n> list of the overall query, effect the performance of the query.\n\nIf the view gets \"flattened\" into the calling query then unreferenced\ncolumns will be optimized away, otherwise probably not. You haven't\ngiven enough details about your intended view definition to be sure\nwhether it can be optimized, but if it's just a join it's fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Jul 2010 11:11:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view columns and performance " } ]
[ { "msg_contents": "We are running 8.3.10 64bit.\n\nCompare the plans below.\n\nThey all do the same thing and delete from a table named work_active (about 500rows), which is a subset of work_unit (about 50m rows).\n\nI want to introduce range-partitions on work_unit.id column (serial pk), and I want constraint exclusion to be used.\nStmt_3 is the plan currently in use.\n\nStmt_4 and stmt_5 compare explain plans of two variants of the stmt (no partitions yet):\n\n- Limit the sub-query using constants (derived from a prior query of min() and max() against work_active), (ref stmt_4 below) or\n\n- Try and do something cute and do a subquery using min() and max() (ref stmt_5 below).\n\n\nMy questions are:\n\n- What does the \"initplan\" operation do? ( I can take a guess, but could someone give me some details, cos the docn about it is pretty sparse).\n\n- Will this enable constraint exclusion on the work_unit table if we introduce partitioning?\n\n\n\nThanks in adv for any help you can give me.\nMr\n\n\n\n\n\n\n\ncaesius=# \\i stmt_3.sql\nexplain\nDELETE FROM work_active wa\nWHERE EXISTS (\n SELECT 1\n FROM work_unit wu\n , run r\n WHERE wu.id = wa.wu_id\n AND wu.run_id = r.id\n AND (( (wu.status not in (2,3)) OR (wu.stop_time is not null)) OR (r.status > 2) )\n LIMIT 1\n);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\nSeq Scan on work_active wa (cost=0.00..23078.82 rows=370 width=6)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.00..30.53 rows=1 width=0)\n -> Nested Loop (cost=0.00..30.53 rows=1 width=0)\n Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))\n -> Index Scan using tmp_work_unit_pkey on work_unit wu (cost=0.00..19.61 rows=1 width=16)\n Index Cond: (id = $0)\n -> Index Scan using run_pkey on run r (cost=0.00..10.91 rows=1 width=8)\n Index Cond: (r.id = wu.run_id)\n(10 rows)\n\n\n\n\n\ncaesius=# \\i stmt_4.sql\nexplain\nDELETE FROM work_active wa\nwhere exists (\n SELECT 1\n FROM work_unit wu\n , run r\n WHERE wu.id = wa.wu_id\n\n AND wu.id between 1000000 and 1100000\n AND wu.run_id = r.id\n AND (( (wu.status not in(2,3) ) OR (wu.stop_time is not null)) OR (r.status > 2) )\n LIMIT 1\n);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\nSeq Scan on work_active wa (cost=0.00..22624.37 rows=362 width=6)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.00..30.54 rows=1 width=0)\n -> Nested Loop (cost=0.00..30.54 rows=1 width=0)\n Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))\n -> Index Scan using tmp_work_unit_pkey on work_unit wu (cost=0.00..19.61 rows=1 width=16)\n Index Cond: ((id >= 1000000) AND (id <= 1100000) AND (id = $0))\n -> Index Scan using run_pkey on run r (cost=0.00..10.91 rows=1 width=8)\n Index Cond: (r.id = wu.run_id)\n(10 rows)\n\n\n\n\n\n\n\ncaesius=# \\i stmt_5.sql\nexplain\nDELETE FROM work_active wa\nwhere exists (\n SELECT 1\n FROM work_unit wu\n , run r\n WHERE wu.id = wa.wu_id\n AND wu.id between (select min(wu_id) from work_active limit 1) and (select max(wu_id) from work_active limit 1)\n AND wu.run_id = r.id\n AND (( (wu.status not in(2,3) ) OR (wu.stop_time is not null)) OR (r.status > 2) )\n LIMIT 1\n);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\nSeq Scan on work_active wa 
(cost=0.00..35071.47 rows=370 width=6)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=16.22..46.76 rows=1 width=0)\n InitPlan\n -> Limit (cost=8.10..8.11 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..8.10 rows=1 width=4)\n -> Index Scan using work_active_pkey on work_active (cost=0.00..5987.09 rows=739 width=4)\n Filter: (wu_id IS NOT NULL)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Limit (cost=8.10..8.11 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..8.10 rows=1 width=4)\n -> Index Scan Backward using work_active_pkey on work_active (cost=0.00..5987.09 rows=739 width=4)\n Filter: (wu_id IS NOT NULL)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Nested Loop (cost=0.00..30.54 rows=1 width=0)\n Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))\n -> Index Scan using tmp_work_unit_pkey on work_unit wu (cost=0.00..19.61 rows=1 width=16)\n Index Cond: ((id >= $1) AND (id <= $3) AND (id = $4))\n -> Index Scan using run_pkey on run r (cost=0.00..10.91 rows=1 width=8)\n Index Cond: (r.id = wu.run_id)\n(23 rows)\n", "msg_date": "Fri, 30 Jul 2010 19:27:30 -0700", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "what does \"initplan\" operation in explain output mean?" } ]
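The constraint-exclusion half of the question got no direct answer in this posting, so here is a hedged sketch of how it plays out on 8.3 with inheritance-based range partitions. The child-table layout below is only an assumed example; the behavioural point is in the comments.

-- Assumed 8.3-style range partitions on work_unit.id.
CREATE TABLE work_unit_p00 (CHECK (id <  10000000)) INHERITS (work_unit);
CREATE TABLE work_unit_p01 (CHECK (id >= 10000000 AND id < 20000000)) INHERITS (work_unit);
-- ... one child per range, each with its own index on id ...

SET constraint_exclusion = on;

-- Literal bounds (the stmt_4 shape): the planner can compare the constants
-- against each child's CHECK constraint at plan time and skip the children
-- that cannot match.
EXPLAIN
SELECT count(*) FROM work_unit WHERE id BETWEEN 1000000 AND 1100000;

-- Bounds coming from sub-selects (the stmt_5 shape) reach the executor as
-- InitPlan parameters ($1 and $3 in the posted plan).  On 8.3 constraint
-- exclusion is decided purely at plan time, so parameter bounds cannot
-- prune anything and every child gets scanned.  Fetching min()/max() first
-- and interpolating them as literals (for example from plpgsql with
-- EXECUTE) keeps the pruning.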
[ { "msg_contents": "We are running 8.3.10 64bit.\n\nThis message is a request for information about the \"initplan\" operation in explain plan.\nI want to know if I can take advantage of it, and use it to initialize query-bounds for the purpose of enforcing constraint exclusion on a table which has been range-partitioned on a serial-id column.\n\nCompare the plans below.\n\nThey all do the same thing and delete from a table named work_active (about 500rows), which is a subset of work_unit (about 50m rows).\n\nStmt_3 is the plan currently in use.\n\nStmt_4 and stmt_5 ilustrate explain plans of two variants of stmt_3 (no partitions yet):\n\n- Limit the sub-query using constants (derived from a prior query min() and max() against work_active), (ref stmt_4 below) or\n\n- Try and do something cute and do a subquery using min() and max() (ref stmt_5 below).\n\n\nMy questions are:\n\n- What does the \"initplan\" operation do? ( I can take a guess, but could someone give me some details, cos the docn about it is pretty sparse).\n\n- Will this enable constraint exclusion on the work_unit table if we introduce partitioning?\n\n\n\nThanks in adv for any help you can give me.\nMr\n\n\n\n\n\n\n\ncaesius=# \\i stmt_3.sql\nexplain\nDELETE FROM work_active wa\nWHERE EXISTS (\n SELECT 1\n FROM work_unit wu\n , run r\n WHERE wu.id = wa.wu_id\n AND wu.run_id = r.id\n AND (( (wu.status not in (2,3)) OR (wu.stop_time is not null)) OR (r.status > 2) )\n LIMIT 1\n);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\nSeq Scan on work_active wa (cost=0.00..23078.82 rows=370 width=6)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.00..30.53 rows=1 width=0)\n -> Nested Loop (cost=0.00..30.53 rows=1 width=0)\n Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))\n -> Index Scan using tmp_work_unit_pkey on work_unit wu (cost=0.00..19.61 rows=1 width=16)\n Index Cond: (id = $0)\n -> Index Scan using run_pkey on run r (cost=0.00..10.91 rows=1 width=8)\n Index Cond: (r.id = wu.run_id)\n(10 rows)\n\n\n\n\n\ncaesius=# \\i stmt_4.sql\nexplain\nDELETE FROM work_active wa\nwhere exists (\n SELECT 1\n FROM work_unit wu\n , run r\n WHERE wu.id = wa.wu_id\n\n AND wu.id between 1000000 and 1100000\n AND wu.run_id = r.id\n AND (( (wu.status not in(2,3) ) OR (wu.stop_time is not null)) OR (r.status > 2) )\n LIMIT 1\n);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\nSeq Scan on work_active wa (cost=0.00..22624.37 rows=362 width=6)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.00..30.54 rows=1 width=0)\n -> Nested Loop (cost=0.00..30.54 rows=1 width=0)\n Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))\n -> Index Scan using tmp_work_unit_pkey on work_unit wu (cost=0.00..19.61 rows=1 width=16)\n Index Cond: ((id >= 1000000) AND (id <= 1100000) AND (id = $0))\n -> Index Scan using run_pkey on run r (cost=0.00..10.91 rows=1 width=8)\n Index Cond: (r.id = wu.run_id)\n(10 rows)\n\n\n\n\n\n\n\ncaesius=# \\i stmt_5.sql\nexplain\nDELETE FROM work_active wa\nwhere exists (\n SELECT 1\n FROM work_unit wu\n , run r\n WHERE wu.id = wa.wu_id\n AND wu.id between (select min(wu_id) from work_active limit 1) and (select max(wu_id) from work_active limit 1)\n AND wu.run_id = r.id\n AND (( (wu.status not in(2,3) ) OR (wu.stop_time is not null)) OR (r.status > 2) )\n LIMIT 
1\n);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\nSeq Scan on work_active wa (cost=0.00..35071.47 rows=370 width=6)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=16.22..46.76 rows=1 width=0)\n InitPlan\n -> Limit (cost=8.10..8.11 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..8.10 rows=1 width=4)\n -> Index Scan using work_active_pkey on work_active (cost=0.00..5987.09 rows=739 width=4)\n Filter: (wu_id IS NOT NULL)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Limit (cost=8.10..8.11 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..8.10 rows=1 width=4)\n -> Index Scan Backward using work_active_pkey on work_active (cost=0.00..5987.09 rows=739 width=4)\n Filter: (wu_id IS NOT NULL)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Nested Loop (cost=0.00..30.54 rows=1 width=0)\n Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))\n -> Index Scan using tmp_work_unit_pkey on work_unit wu (cost=0.00..19.61 rows=1 width=16)\n Index Cond: ((id >= $1) AND (id <= $3) AND (id = $4))\n -> Index Scan using run_pkey on run r (cost=0.00..10.91 rows=1 width=8)\n Index Cond: (r.id = wu.run_id)\n(23 rows)\n\n\n\n We are running 8.3.10 64bit. This message is a request for information about the “initplan” operation in explain plan.I want to know if I can take advantage of it, and use it to initialize query-bounds for the purpose of enforcing constraint exclusion on a table which has been range-partitioned on a serial-id column. Compare the plans below. They all do the same thing and delete from a table named work_active (about 500rows), which is a subset of work_unit (about 50m rows). Stmt_3 is the plan currently in use. Stmt_4 and stmt_5 ilustrate explain plans of two variants of stmt_3 (no partitions yet):-          Limit the sub-query using constants (derived from a prior query  min() and max() against work_active), (ref stmt_4 below) or-          Try and do something cute and do a subquery using min() and max() (ref stmt_5 below).  My questions are:-          What does the “initplan” operation do? ( I can take a guess, but could someone give me some details, cos the docn about it is pretty sparse).-          Will this enable constraint exclusion on the work_unit table if we introduce partitioning?   
Thanks in adv for any help you can give me.Mr       caesius=# \\i stmt_3.sqlexplainDELETE FROM work_active waWHERE EXISTS (     SELECT 1     FROM   work_unit wu          , run r     WHERE  wu.id = wa.wu_id     AND    wu.run_id = r.id     AND    (( (wu.status not in (2,3)) OR (wu.stop_time is not null)) OR (r.status > 2) )     LIMIT 1);                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------Seq Scan on work_active wa  (cost=0.00..23078.82 rows=370 width=6)   Filter: (subplan)   SubPlan     ->  Limit  (cost=0.00..30.53 rows=1 width=0)           ->  Nested Loop  (cost=0.00..30.53 rows=1 width=0)                 Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))                 ->  Index Scan using tmp_work_unit_pkey on work_unit wu  (cost=0.00..19.61 rows=1 width=16)                       Index Cond: (id = $0)                 ->  Index Scan using run_pkey on run r  (cost=0.00..10.91 rows=1 width=8)                       Index Cond: (r.id = wu.run_id)(10 rows)     caesius=# \\i stmt_4.sqlexplainDELETE FROM work_active wawhere exists (     SELECT 1     FROM   work_unit wu          , run r     WHERE  wu.id = wa.wu_id      AND    wu.id between 1000000 and 1100000     AND    wu.run_id = r.id     AND    (( (wu.status not in(2,3) ) OR (wu.stop_time is not null)) OR (r.status > 2) )     LIMIT 1);                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------Seq Scan on work_active wa  (cost=0.00..22624.37 rows=362 width=6)   Filter: (subplan)   SubPlan     ->  Limit  (cost=0.00..30.54 rows=1 width=0)           ->  Nested Loop  (cost=0.00..30.54 rows=1 width=0)                 Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))                 ->  Index Scan using tmp_work_unit_pkey on work_unit wu  (cost=0.00..19.61 rows=1 width=16)                       Index Cond: ((id >= 1000000) AND (id <= 1100000) AND (id = $0))                 ->  Index Scan using run_pkey on run r  (cost=0.00..10.91 rows=1 width=8)                       Index Cond: (r.id = wu.run_id)(10 rows)       caesius=# \\i stmt_5.sqlexplainDELETE FROM work_active wawhere exists (     SELECT 1     FROM   work_unit wu          , run r     WHERE  wu.id = wa.wu_id     AND    wu.id between (select min(wu_id) from work_active limit 1) and (select max(wu_id) from work_active limit 1)     AND    wu.run_id = r.id     AND    (( (wu.status not in(2,3) ) OR (wu.stop_time is not null)) OR (r.status > 2) )     LIMIT 1);                                                           QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------Seq Scan on work_active wa  (cost=0.00..35071.47 rows=370 width=6)   Filter: (subplan)   SubPlan     ->  Limit  (cost=16.22..46.76 rows=1 width=0)           InitPlan             ->  Limit  (cost=8.10..8.11 rows=1 width=0)                   InitPlan                     ->  Limit  (cost=0.00..8.10 rows=1 width=4)                           ->  Index Scan using work_active_pkey on work_active  (cost=0.00..5987.09 rows=739 width=4)                                 Filter: (wu_id IS NOT NULL)                   ->  Result  (cost=0.00..0.01 rows=1 width=0)             ->  Limit  (cost=8.10..8.11 rows=1 
width=0)                   InitPlan                     ->  Limit  (cost=0.00..8.10 rows=1 width=4)                           ->  Index Scan Backward using work_active_pkey on work_active  (cost=0.00..5987.09 rows=739 width=4)                                 Filter: (wu_id IS NOT NULL)                   ->  Result  (cost=0.00..0.01 rows=1 width=0)           ->  Nested Loop  (cost=0.00..30.54 rows=1 width=0)                 Join Filter: ((wu.status <> ALL ('{2,3}'::integer[])) OR (wu.stop_time IS NOT NULL) OR (r.status > 2))                 ->  Index Scan using tmp_work_unit_pkey on work_unit wu  (cost=0.00..19.61 rows=1 width=16)                       Index Cond: ((id >= $1) AND (id <= $3) AND (id = $4))                 ->  Index Scan using run_pkey on run r  (cost=0.00..10.91 rows=1 width=8)                       Index Cond: (r.id = wu.run_id)(23 rows)", "msg_date": "Sat, 31 Jul 2010 23:36:37 -0700", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "what does \"initplan\" operation in explain output mean?" }, { "msg_contents": "Mark Rostron <[email protected]> writes:\n> This message is a request for information about the \"initplan\" operation in explain plan.\n\nAn initplan is a sub-SELECT that only needs to be executed once because it\nhas no dependency on the immediately surrounding query level. The cases\nyou show here are from sub-SELECTs like this:\n\n\t(select min(wu_id) from work_active limit 1)\n\nwhich yields a value that's independent of anything in the outer query.\nIf there were an outer reference in there, you'd get a SubPlan instead,\nbecause the subquery would need to be done over again for each row of\nthe outer query.\n\nBTW, adding LIMIT 1 to an aggregate query is pretty pointless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Aug 2010 10:08:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what does \"initplan\" operation in explain output mean? " }, { "msg_contents": "Thanks.\nSo am I right in assuming that the aggregate sub-query ( against work_active ) results will not assist with constraint exclusion in the sub-query against work_unit (if we introduce range partitions on this table)?\n Mr\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Sunday, August 01, 2010 7:08 AM\nTo: Mark Rostron\nCc: [email protected]\nSubject: Re: [PERFORM] what does \"initplan\" operation in explain output mean? \n\nMark Rostron <[email protected]> writes:\n> This message is a request for information about the \"initplan\" operation in explain plan.\n\nAn initplan is a sub-SELECT that only needs to be executed once because it has no dependency on the immediately surrounding query level. The cases you show here are from sub-SELECTs like this:\n\n\t(select min(wu_id) from work_active limit 1)\n\nwhich yields a value that's independent of anything in the outer query.\nIf there were an outer reference in there, you'd get a SubPlan instead, because the subquery would need to be done over again for each row of the outer query.\n\nBTW, adding LIMIT 1 to an aggregate query is pretty pointless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 2 Aug 2010 09:42:45 -0700", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what does \"initplan\" operation in explain output\n mean?" 
}, { "msg_contents": "Mark Rostron <[email protected]> writes:\n> So am I right in assuming that the aggregate sub-query ( against work_active ) results will not assist with constraint exclusion in the sub-query against work_unit (if we introduce range partitions on this table)?\n\nDunno. You didn't actually show what you were hoping would work, or\nwork differently as the case may be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Aug 2010 13:29:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what does \"initplan\" operation in explain output mean? " } ]
[ { "msg_contents": "Hi,\n\nI'm running PostgreSQL 8.3 and I have a query with a couple of NOT IN\nsubqueries:\n\nDELETE FROM foo WHERE type = 'o' AND b NOT IN (SELECT cqc.b FROM bar\ncqc) AND b NOT IN (SELECT car.b FROM foo car WHERE car.type != 'o');\n\nThe plan produced for this is:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using foo_type_index on foo (cost=17851.93..1271633830.75\nrows=66410 width=6)\n Index Cond: (type = 'o'::bpchar)\n Filter: ((NOT (subplan)) AND (NOT (subplan)))\n SubPlan\n -> Materialize (cost=6077.87..10238.57 rows=299170 width=8)\n -> Seq Scan on bar cqc (cost=0.00..4609.70 rows=299170 width=8)\n -> Materialize (cost=11774.06..15728.45 rows=284339 width=8)\n -> Seq Scan on foo car (cost=0.00..10378.73 rows=284339 width=8)\n Filter: (type <> 'o'::bpchar)\n(9 rows)\n\n\nUnfortunately, when these tables get large-ish, the materilzations get\nreally expensive to re-scan for every tuple (see cost above). At the\nmoment, I have ~500k rows in foo and ~300k rows in bar. The\nselectivity of type = 'o' is ~50%. I've tried to re-write the query as\nfollows:\n\nDELETE FROM foo WHERE b IN (SELECT candidate_run.type_o_run as b FROM\n(SELECT cqar1.b AS type_o_run, cqar2.b AS non_type_o_run FROM foo\ncqar1 LEFT OUTER JOIN foo cqar2 ON (cqar1.b = cqar2.b AND cqar2.type\n!= 'o') WHERE cqar1.type = 'o') candidate_run LEFT OUTER JOIN bar ON\n(candidate_run.type_o_run = bar.b) WHERE non_type_o_run IS NULL AND\nbar.b IS NULL);\n\nThis gives the more sensible plan:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=48999.81..71174.41 rows=66410 width=6)\n Hash Cond: (foo.b = cqar1.b)\n -> Seq Scan on foo (cost=0.00..9003.78 rows=549978 width=14)\n -> Hash (cost=47909.68..47909.68 rows=66410 width=8)\n -> Hash Left Join (cost=24562.29..47909.68 rows=66410 width=8)\n Hash Cond: (cqar1.b = bar.b)\n Filter: (bar.b IS NULL)\n -> Hash Left Join (cost=15043.96..33635.58 rows=132820 width=8)\n Hash Cond: (cqar1.b = cqar2.b)\n Filter: (cqar2.b IS NULL)\n -> Seq Scan on foo cqar1 (cost=0.00..10378.73\nrows=265639 width=8)\n Filter: (type = 'o'::bpchar)\n -> Hash (cost=10378.73..10378.73 rows=284339 width=8)\n -> Seq Scan on foo cqar2\n(cost=0.00..10378.73 rows=284339 width=8)\n Filter: (type <> 'o'::bpchar)\n -> Hash (cost=4609.70..4609.70 rows=299170 width=8)\n -> Seq Scan on bar (cost=0.00..4609.70\nrows=299170 width=8)\n(17 rows)\n\n\nAs far as I can tell, the results are identical.\n\nMy questions\n\n1. Is my rewrite valid?\n2. Any way to reliably achieve the second plan (or really, any plan\nthat doesn't rescan ~~500k tuples per each of ~250k tuples) by\ntweaking (per-session) planner constants? I've played with this a\nlittle, but without much success. 
As with any rewrite situation, I'd\nprefer to stick with the simpler, more explicit original query.\n\nHere is a SQL script to reproduce the problem:\n\n\\set ON_ERROR_STOP\n\ndrop schema if exists not_in_test cascade;\ncreate schema not_in_test;\n\nset search_path to not_in_test;\n\ncreate table foo (\n a oid not null,\n b bigint not null,\n type char not null,\n ts timestamp without time zone not null\n);\ncreate index \"foo_b_a_type_index\" on foo (b, a, type);\ncreate index \"foo_a_index\" on foo (a);\ncreate index \"foo_type_index\" on foo(type);\n\ncreate table bar (\n b bigint unique not null,\n c timestamp with time zone not null\n);\ncreate index \"bar_b_index\" on bar(b);\n\ninsert into foo select (random()*10)::integer,\ngenerate_series(1,550000), case when random() > 0.5 then 'o' else 'x'\nend, now();\ninsert into bar select val, now() from generate_series(1,1200000) as\nvals(val) where random() > 0.75;\n\nanalyze foo;\nanalyze bar;\n\nEXPLAIN DELETE FROM foo WHERE type = 'o' AND b NOT IN (SELECT cqc.b\nFROM bar cqc) AND b NOT IN (SELECT car.b FROM foo car WHERE car.type\n!= 'o');\nEXPLAIN DELETE FROM foo WHERE b IN (SELECT candidate_run.type_o_run as\nb FROM (SELECT cqar1.b AS type_o_run, cqar2.b AS non_type_o_run FROM\nfoo cqar1 LEFT OUTER JOIN foo cqar2 ON (cqar1.b = cqar2.b AND\ncqar2.type != 'o') WHERE cqar1.type = 'o') candidate_run LEFT OUTER\nJOIN bar ON (candidate_run.type_o_run = bar.b) WHERE non_type_o_run IS\nNULL AND bar.b IS NULL);\n\n\nThanks,\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Mon, 2 Aug 2010 12:12:51 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "Maciek Sakrejda <[email protected]> wrote:\n \n> DELETE FROM foo WHERE type = 'o' AND b NOT IN (SELECT cqc.b FROM\n> bar cqc) AND b NOT IN (SELECT car.b FROM foo car WHERE car.type !=\n> 'o');\n \nCan \"b\" be null in any of these tables? If not, then you can\nrewrite your query to us NOT EXISTS and have the same semantics. \nThat will often be much faster. Something like:\n \nDELETE FROM foo\n WHERE type = 'o'\n AND NOT EXISTS (SELECT * FROM bar cqc where cqc.b = foo.b)\n AND NOT EXISTS (SELECT * FROM foo car WHERE car.b = foo.b\n AND car.type <> 'o');\n \n-Kevin\n", "msg_date": "Mon, 02 Aug 2010 14:29:29 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "> Can \"b\" be null in any of these tables? If not, then you can\n> rewrite your query to us NOT EXISTS and have the same semantics.\n> That will often be much faster.\n\nThanks, Kevin.\n\nNo NULLs. It looks like it's a good deal slower than the LOJ version,\nbut a good deal faster than the original. Since the equivalence of\nsemantics is much easier to verify here, we may go with this (at least\nfor the moment).\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 230\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Mon, 2 Aug 2010 12:42:43 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "Maciek Sakrejda <[email protected]> wrote:\n \n> No NULLs. 
It looks like it's a good deal slower than the LOJ\n> version, but a good deal faster than the original.\n \nOn 8.4 and later the NOT EXISTS I suggested is a bit faster than\nyour fast version, since Tom did some very nice work in this area,\nimplementing semi join and anti join. If you've got much load with\nthis kind of query, it might be worth upgrading.\n \n-Kevin\n", "msg_date": "Mon, 02 Aug 2010 14:49:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 02, 2010 at 12:12:51PM -0700, Maciek Sakrejda wrote:\n> I'm running PostgreSQL 8.3 and I have a query with a couple of NOT IN\n> subqueries:\nWith 8.3 you will have to use manual antijoins (i.e LEFT JOIN\n... WHERE NULL). If you use 8.4 NOT EXISTS() will do that\nautomatically in many cases (contrary to NOT IN () which has strange\nNULL semantics).\n\nAnbdres\n", "msg_date": "Mon, 2 Aug 2010 21:52:57 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "All fields involved are declared NOT NULL, but thanks for the heads up.\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Mon, 2 Aug 2010 13:06:00 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "With Oracle, I've found an anti-union (MINUS in Oracle, EXCEPT in PGSQL) to\nbe often a bit better than an anti-join, which is in turn faster than NOT\nIN. Depends of course on row distribution and index layouts, and a bunch of\nother details.\n\nDepending on what you're returning, it can pay to make sure this computation\nis done with the shortest possible rows, if necessary using a subquery.\n\nCheers\nDave\n\nOn Mon, Aug 2, 2010 at 2:49 PM, Kevin Grittner\n<[email protected]>wrote:\n\n> Maciek Sakrejda <[email protected]> wrote:\n>\n> > No NULLs. It looks like it's a good deal slower than the LOJ\n> > version, but a good deal faster than the original.\n>\n> On 8.4 and later the NOT EXISTS I suggested is a bit faster than\n> your fast version, since Tom did some very nice work in this area,\n> implementing semi join and anti join. If you've got much load with\n> this kind of query, it might be worth upgrading.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWith Oracle, I've found an anti-union (MINUS in Oracle, EXCEPT in PGSQL) to be often a bit better than an anti-join, which is in turn faster than NOT IN. Depends of course on row distribution and index layouts, and a bunch of other details.\nDepending on what you're returning, it can pay to make sure this computation is done with the shortest possible rows, if necessary using a subquery.CheersDaveOn Mon, Aug 2, 2010 at 2:49 PM, Kevin Grittner <[email protected]> wrote:\nMaciek Sakrejda <[email protected]> wrote:\n\n> No NULLs. It looks like it's a good deal slower than the LOJ\n> version, but a good deal faster than the original.\n\nOn 8.4 and later the NOT EXISTS I suggested is a bit faster than\nyour fast version, since Tom did some very nice work in this area,\nimplementing semi join and anti join.  
If you've got much load with\nthis kind of query, it might be worth upgrading.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 2 Aug 2010 15:14:56 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "On Mon, Aug 02, 2010 at 01:06:00PM -0700, Maciek Sakrejda wrote:\n> All fields involved are declared NOT NULL, but thanks for the heads up.\nAfair the planner doesnt use that atm.\n\nAndres\n", "msg_date": "Mon, 2 Aug 2010 22:21:51 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": ">> All fields involved are declared NOT NULL, but thanks for the heads up.\n>Afair the planner doesnt use that atm.\n\nI was referring to not having to care about the strange NULL semantics\n(as per your original comment), since I have no NULLs. Given that, I\nthink the NOT EXISTS could be a good solution, even on 8.3 (we're\nplanning to upgrade, but it's not a feasible solution to this\nparticular problem), no?\n\nBasically, it seems like the main issue with the current plans is the\nper-tuple seq scans on the full materializations. Adding correlation\n(by rewriting NOT IN as NOT EXISTS) prevents materialization, hence\ngetting rid of the biggest performance problem.\n\nThanks,\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Mon, 2 Aug 2010 13:35:13 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "Dave Crooke <[email protected]> wrote:\n \n> With Oracle, I've found an anti-union (MINUS in Oracle, EXCEPT in\n> PGSQL) to be often a bit better than an anti-join, which is in\n> turn faster than NOT IN. Depends of course on row distribution and\n> index layouts, and a bunch of other details.\n \nI found that assertion intriguing, so I tested the \"fast\" query from\nthe original post against my suggestion and a version using EXCEPT. \n(This was against the development HEAD, not any release.)\n \nOP \"fast\": 32.9 seconds\nNOT EXISTS: 11.2 seconds\nEXCEPT: 7.7 seconds\n \nThat last was using this query, which just might work OK on 8.3:\n \nDELETE FROM foo\n where foo.b in (\n select b from foo WHERE type = 'o'\n except SELECT b FROM bar\n except SELECT b FROM foo where type <> 'o');\n \nI wonder whether this could make a reasonable alternative plan for\nthe optmizer to consider some day....\n \n-Kevin\n", "msg_date": "Mon, 02 Aug 2010 16:03:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> DELETE FROM foo\n> where foo.b in (\n> select b from foo WHERE type = 'o'\n> except SELECT b FROM bar\n> except SELECT b FROM foo where type <> 'o');\n \nOops. Maybe before I get excited I should try it with a query which\nis actually logically equivalent. 
:-(\n \n-Kevin\n", "msg_date": "Mon, 02 Aug 2010 16:23:55 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "Kevin Grittner <[email protected]> wrote:\n \n> Maybe before I get excited I should try it with a query which is\n> actually logically equivalent.\n \nFixed version:\n \nDELETE FROM foo\n where type = 'o' and foo.b in (\n select b from foo WHERE type = 'o'\n except SELECT b FROM bar\n except SELECT b FROM foo where type <> 'o');\n \nThe change didn't affect run time significantly; it still beats the\nothers.\n \n-Kevin\n\n", "msg_date": "Mon, 02 Aug 2010 16:38:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": "On Mon, Aug 02, 2010 at 01:35:13PM -0700, Maciek Sakrejda wrote:\n> >> All fields involved are declared NOT NULL, but thanks for the heads up.\n> >Afair the planner doesnt use that atm.\n>\n> I was referring to not having to care about the strange NULL semantics\n> (as per your original comment), since I have no NULLs. Given that, I\n> think the NOT EXISTS could be a good solution, even on 8.3 (we're\n> planning to upgrade, but it's not a feasible solution to this\n> particular problem), no?\nThe point is that only 8.4 will optimize that case properly. 8.3 will\ngenerate plans which are inefficient in many (or most) cases for both\nvariants. I would suggest to use manual antijoins...\n", "msg_date": "Tue, 3 Aug 2010 00:19:00 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" }, { "msg_contents": ">> Maybe before I get excited I should try it with a query which is\n>> actually logically equivalent.\n\nYes, the joys of manual rewrites...\n\n> Fixed version:\n>\n> DELETE FROM foo\n> where type = 'o' and foo.b in (\n> select b from foo WHERE type = 'o'\n> except SELECT b FROM bar\n> except SELECT b FROM foo where type <> 'o');\n>\n> The change didn't affect run time significantly; it still beats the\n> others.\n\nOn my 8.3, it still performs a little worse than your original\ncorrelated EXCEPT (which is actually on par with the antijoin in 8.3,\nbut significantly better in 8.4). In 8.4, this EXCEPT version does\nseem somewhat better.\n\nIt looks like according to Andres, though, I should not be depending\non these plans with 8.3, so I may want to stick with the manual\nantijoin.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Mon, 2 Aug 2010 15:37:21 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing NOT IN plans / verify rewrite" } ]
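Since the thread is as much about trusting the rewrite as about speed, one way to verify equivalence before running the destructive DELETE is to diff the candidate row sets. The sketch below compares the original NOT IN predicate with the NOT EXISTS form on the poster's test schema (all the b columns are NOT NULL, which is what makes the forms interchangeable); both checks should return zero rows. Run it on a quiesced copy or inside a single transaction.

-- Rows the NOT IN form would delete that the NOT EXISTS form would not:
SELECT b FROM foo
WHERE type = 'o'
  AND b NOT IN (SELECT b FROM bar)
  AND b NOT IN (SELECT b FROM foo WHERE type <> 'o')
EXCEPT
SELECT b FROM foo f
WHERE f.type = 'o'
  AND NOT EXISTS (SELECT 1 FROM bar c   WHERE c.b = f.b)
  AND NOT EXISTS (SELECT 1 FROM foo car WHERE car.b = f.b AND car.type <> 'o');

-- ...and the same thing with the two halves swapped, for the reverse
-- direction.  The identical check works for the LEFT JOIN and EXCEPT
-- based variants discussed above.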
[ { "msg_contents": "Hallo,\n\nIm running pg-8,pgpoolII on sol10-zone.\n\n\nAfter update sol10u7, queries on coltype timestamp are very slow.\nSystem: sparc, 2GB RAM\n\nThis DB is a greylist-DB to fight spam.\n500 connections should be easy.\nBut 16 connection consum 10sec/query.\nOn another system (sparc) only 1 sec.i\n\ns. details\n\nhowto diag?\n\nregards heiko\n\n\n- details\n\n $ /opt/csw/postgresql/bin/postmaster -V\n postgres (PostgreSQL) 8.3.1\n\n - numDS: 200k-800k\n - conn: 50-500\n - max age: 2d\n\n- testbed\n tab=blacklist\n filter=\"^ .*[0-9]|elaps\"\n cmd=\"/usr/bin/time psql -t -p 5432 -h 192.168.5.126 -U smtpuser smtp\"\n sql=\"select count(*) from $tab\";\n sql1=\"select count(*) from $tab where create_time > abstime(int4(timenow()) -30\n00)\";\n sql2=\"select count(*) from $tab where create_time < abstime(int4(timenow()) -30\n00)\";\n echo \"$sql\" | $cmd 2>&1 | egrep \"$filter\";\n echo \"$sql1\" | $cmd 2>&1 | egrep \"$filter\";\n echo \"$sql2\" | $cmd 2>&1 | egrep \"$filter\";\n\n- result:\n 152603\n 0.01user 0.01system 0:00.29elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k\n 0\n 0.02user 0.01system 0:00.05elapsed 58%CPU (0avgtext+0avgdata 0maxresident)k\n 152603\n 0.00user 0.02system 0:01.95elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k\n\n\n - select without where: 150000 / 0.3 s\n - select with where: 150000 / 2 s\n - time is depended on recv DS (linear)\n\nI simulate any parallel queries:\n$ ./sqltestcon.sh -v 1 -i 20 -min 1 -max 64 -sql \"$sql1; $sql2\" -cmd \"$cmd\"\n3081844 sum: 1 (5) * 13 = 13 -> 0 R/s, e=0 0:01.36elapsed 3%CPU\n3081901 sum: 2 (7) * 10 = 20 -> 1 R/s, e=0 0:01.53elapsed 3%CPU\n3081922 sum: 4 (11) * 8 = 32 -> 1 R/s, e=0 0:02.40elapsed 0%CPU\n3081947 sum: 8 (19) * 5 = 40 -> 2 R/s, e=0 0:04.69elapsed 0%CPU\n3082005 sum: 16 (36) * 2 = 32 -> 1 R/s, e=0 0:09.06elapsed 0%CPU\n3082043 sum: 32 (67) * 2 = 64 -> 3 R/s, e=0 0:17.83elapsed 0%CPU\n3082119 sum: 64 (130) * 1 = 64 -> 3 R/s, e=0 0:34.19elapsed 0%CPU\n\n -> 16 connections: only 2 passes -> aprox. 
10s /query\n\n$ connstat -cols \"_sy_load 5432 5433 _ps_pool _ps_postmaster\" 60\n## time load 5432 5433 pgpool postmaster\n08:18:34 026 0 1 2 553 6\n08:19:36 471 1 10 553 14\n08:20:40 1804 2 35 553 42\n08:21:51 5877 63 128 553 132\n\n -> 128 connections: Load=58!\n\n- following test are used:\n - reindex\n - drop, create index\n - drop, create database\n - zfs recordsize: 128,8 -> no differ\n - zfs load: 10% ops, 10% read\n - shmmax: (0x80000000)\n - shared_buffers\n - work_mem 64,256,1024\n - wal_buffers\n - effective_cache_size\n - #log_disconnections = off\n - autovacuum = on , off\n\nWhat can i do?\nHowto interpret explain output?\n\n$ echo \"EXPLAIN ANALYSE $sql2\" | $cmd 2>&1\n Aggregate (cost=5236.53..5236.54 rows=1 width=0) (actual time=2018.985..2018.9\n86 rows=1 loops=1)\n -> Seq Scan on blacklist (cost=0.00..4855.06 rows=152588 width=0) (actual t\nime=0.329..1883.275 rows=152603 loops=1)\n Filter: (create_time < ((((timenow())::integer - 3000))::abstime)::time\nstamp with time zone)\n Total runtime: 2019.371 ms\n\n0.00user 0.02system 0:02.07elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (0major+2353minor)pagefaults 0swaps\n\n\n------------------------------------------------------------------------\n- test 5 frische DB\n\ntab=testbl\nf=/tmp/test.sql\ncat > $f <<EOF\ncreate table $tab\n(\n relay_ip inet,\n create_time timestamp default now() NOT NULL\n);\n\ncreate index ${tab}_relay_ip_idx on ${tab}(relay_ip);\ncreate index ${tab}_create_time_idx on ${tab}(create_time);\nEOF\ncat $f | $cmd\n\n------------------------------------------------------------------------\n- memstat [10]\n\n $ echo \"::memstat\"|mdb -k\n Page Summary Pages MB %Tot\n ------------ ---------------- ---------------- ----\n Kernel 189712 1482 74%\n Anon 25308 197 10%\n Exec and libs 1991 15 1%\n Page cache 1849 14 1%\n Free (cachelist) 2949 23 1%\n Free (freelist) 35060 273 14%\n\n Total 256869 2006\n Physical 255327 1994\n\n\n- shm (org)\n $ echo \"shminfo_shmmax/E\" | mdb -k\n shminfo_shmmax:\n shminfo_shmmax: 8388608\n\n\n", "msg_date": "Wed, 4 Aug 2010 09:16:09 +0200 (CEST)", "msg_from": "\"Heiko L.\" <[email protected]>", "msg_from_op": true, "msg_subject": "performance sol10 zone (fup)" }, { "msg_contents": "On Wed, Aug 4, 2010 at 3:16 AM, Heiko L. <[email protected]> wrote:\n> $ echo \"EXPLAIN ANALYSE $sql2\" | $cmd 2>&1\n>  Aggregate  (cost=5236.53..5236.54 rows=1 width=0) (actual time=2018.985..2018.9\n> 86 rows=1 loops=1)\n>   ->  Seq Scan on blacklist  (cost=0.00..4855.06 rows=152588 width=0) (actual t\n> ime=0.329..1883.275 rows=152603 loops=1)\n>         Filter: (create_time < ((((timenow())::integer - 3000))::abstime)::time\n> stamp with time zone)\n>  Total runtime: 2019.371 ms\n\nMaybe you need an index on create_time.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 11 Aug 2010 23:16:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance sol10 zone (fup)" }, { "msg_contents": "On Wed, Aug 4, 2010 at 1:16 AM, Heiko L. <[email protected]> wrote:\n> Hallo,\n>\n> Im running pg-8,pgpoolII on sol10-zone.\n\nI noticed late you mention 8.3.1. Two points, you're missing > 1 year\nof updates, bug fixes, security patches etc. Assuming this version\nwas fast before, we'll assume it's not the cause of this problem,\nhowever, you're asking for trouble with a version that old. 
There are\nbugs that might not bite you today, but may well in the future.\nPlease upgrade to 8.3.11.\n\n> After update sol10u7, queries on coltype timestamp are very slow.\n> System: sparc, 2GB RAM\n\nIs it possible you had an index that was working that now isn't? Are\nthe queries you included the real ones or approximations?\n\nIt looks like you have a bunch of seq scans happening. If they're all\nhappening on the same table or small set of them, then a lot of\nqueries should be able to access them in any order together in 8.3\n\nAre sequential scans normal for this query when it runs fast?\n\nWhat does vmstat 10 and / or iostat -xd 10 have to say while this is running?\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Wed, 11 Aug 2010 22:12:08 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance sol10 zone (fup)" } ]
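A short sketch of the index suggestion made above, using the table and column names from the test script in the thread; the production blacklist definition is not shown there, so the exact column type and the interval form of the cutoff are assumptions.

-- Assumed DDL detail: create_time indexed directly, as in the testbl script.
CREATE INDEX blacklist_create_time_idx ON blacklist (create_time);

-- Expressing the cutoff against the timestamp column itself, rather than via
-- the int4/abstime casts, keeps the predicate in a form a btree index can serve.
EXPLAIN ANALYZE
SELECT count(*)
  FROM blacklist
 WHERE create_time < now() - interval '3000 seconds';

Worth noting: in the EXPLAIN ANALYZE quoted in the thread the filter matched essentially every row, and for that case a sequential scan is the expected plan; an index on create_time only pays off when the cutoff is selective.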
[ { "msg_contents": "We have been using the C locale for everything at our site, but\nthere is occasionally talk of supporting characters outside the\nASCII7 set. In playing around with indexing, to see what the impact\nof that would be, I stumbled across something which was mildly\nsurprising.\n\nIn the C locale, if you want to search for an exact value which\ndoesn't contain wildcard characters, it doesn't matter whether you\nuse the 'LIKE' operator or the '=' operator. With LATIN1 encoding,\nit made three orders of magnitude difference, both in the estimated\ncost and the actual run time. I'm not entirely clear on whether it\nwould be *incorrect* for PostgreSQL to automatically turn the second\nquery below into the first, or just too expensive an optimization to\ncheck for compared to how often it might help.\n \n\"SccaParty_SearchName\" btree (\"searchName\" varchar_pattern_ops)\n \nexplain analyze select \"searchName\" from \"SccaParty\"\n where \"searchName\" like 'SMITH,JOHNBRACEYJR';\n \n Index Scan using \"SccaParty_SearchName\" on \"SccaParty\"\n (cost=0.00..2.94 rows=22 width=18)\n (actual time=0.046..0.051 rows=2 loops=1)\n Index Cond: ((\"searchName\")::text ~=~ 'SMITH,JOHNBRACEYJR'::text)\n Filter: ((\"searchName\")::text ~~ 'SMITH,JOHNBRACEYJR'::text)\n Total runtime: 0.083 ms\n \nexplain analyze select \"searchName\" from \"SccaParty\"\n where \"searchName\" = 'SMITH,JOHNBRACEYJR';\n \n Seq Scan on \"SccaParty\"\n (cost=0.00..3014.49 rows=22 width=18)\n (actual time=2.395..54.228 rows=2 loops=1)\n Filter: ((\"searchName\")::text = 'SMITH,JOHNBRACEYJR'::text)\n Total runtime: 54.274 ms\n \nI don't have a problem, and am not suggesting any action; just\ntrying to understand this.\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 11:34:12 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE without wildcard different from =" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> We have been using the C locale for everything at our site, but\n> there is occasionally talk of supporting characters outside the\n> ASCII7 set. In playing around with indexing, to see what the impact\n> of that would be, I stumbled across something which was mildly\n> surprising.\n\n> In the C locale, if you want to search for an exact value which\n> doesn't contain wildcard characters, it doesn't matter whether you\n> use the 'LIKE' operator or the '=' operator. With LATIN1 encoding,\n> it made three orders of magnitude difference, both in the estimated\n> cost and the actual run time.\n\nWhat PG version are you testing? 8.4 and up should know that an\nexact-match pattern can be optimized regardless of the lc_collate\nsetting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 2010 12:41:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE without wildcard different from = " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> We have been using the C locale for everything at our site, but\n>> there is occasionally talk of supporting characters outside the\n>> ASCII7 set. In playing around with indexing, to see what the\n>> impact of that would be, I stumbled across something which was\n>> mildly surprising.\n> \n>> In the C locale, if you want to search for an exact value which\n>> doesn't contain wildcard characters, it doesn't matter whether\n>> you use the 'LIKE' operator or the '=' operator. 
With LATIN1\n>> encoding, it made three orders of magnitude difference, both in\n>> the estimated cost and the actual run time.\n> \n> What PG version are you testing? 8.4 and up should know that an\n> exact-match pattern can be optimized regardless of the lc_collate\n> setting.\n \nFor reasons not worth getting into, I had an 8.3.8 database sitting\naround in this locale, so I was testing things there. I'll take the\ntime to copy into an 8.4.4 database for further testing, and maybe\n9.0 beta, too. That'll take hours, though, so I can't immediately\ntest it.\n \nTo be clear, though, the problem isn't that it didn't turn a LIKE\nwith no wildcard characters into an equality test, it's that it\nwould have been three orders of magnitude faster (because of an\navailable index with an opclass specification) if it had treated an\nequality test as a LIKE.\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 11:52:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE without wildcard different from =" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> To be clear, though, the problem isn't that it didn't turn a LIKE\n> with no wildcard characters into an equality test, it's that it\n> would have been three orders of magnitude faster (because of an\n> available index with an opclass specification) if it had treated an\n> equality test as a LIKE.\n\nAh. Well, the real fix for that is also in 8.4: we got rid of the\nseparate ~=~ operator, so a text_pattern_ops index is now usable\nfor plain =.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Aug 2010 12:58:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE without wildcard different from = " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Ah. Well, the real fix for that is also in 8.4: we got rid of the\n> separate ~=~ operator, so a text_pattern_ops index is now usable\n> for plain =.\n \nNice!\n \nThanks,\n \n-Kevin\n", "msg_date": "Wed, 04 Aug 2010 12:00:09 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE without wildcard different from =" } ]
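For anyone stuck on 8.3 with this pattern, a small sketch of the workaround implied above; the second index name is invented here, and the table definition is assumed from the quoted EXPLAIN output rather than taken from real DDL.

-- The existing pattern_ops index from the thread:
CREATE INDEX "SccaParty_SearchName"
    ON "SccaParty" ("searchName" varchar_pattern_ops);

-- On 8.3 that index serves LIKE (with or without wildcards) but not plain =,
-- so a second index with the default opclass covers equality lookups:
CREATE INDEX "SccaParty_SearchName_eq" ON "SccaParty" ("searchName");

-- With both in place, each of these can use an index scan on 8.3:
SELECT "searchName" FROM "SccaParty" WHERE "searchName" LIKE 'SMITH,JOHNBRACEYJR';
SELECT "searchName" FROM "SccaParty" WHERE "searchName" = 'SMITH,JOHNBRACEYJR';

On 8.4 and later the pattern_ops index also serves plain equality, as noted above, so the duplicate index can be dropped after an upgrade.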
[ { "msg_contents": "Hi, I'm curious -- does \"vacuum analyze e.g. table1\" improve\nperformance on \"insert into table1 ...\". I understand the vacuum\nanalyze helps out the query -- select, etc., but just not quite sure\non insert.\n\nSpecifically, I'm doing the following.\n\n1, delete records ...\n2, insert records ...\n\nif I add \"vacuum analyze\" in-between this two steps, will it help on\nthe performance on the insert?\nMore importantly, If so, why?\n\nThanks,\nSean\n", "msg_date": "Thu, 5 Aug 2010 13:49:54 -0400", "msg_from": "Sean Chen <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum performance on insert" }, { "msg_contents": "Sean Chen <[email protected]> wrote:\n \n> 1, delete records ...\n> 2, insert records ...\n> \n> if I add \"vacuum analyze\" in-between this two steps, will it help\n> on the performance on the insert?\n \nAssuming there are no long-running transactions which would still be\nable to see the deleted rows, a VACUUM between those statements\nwould allow the INSERT to re-use the space previously occupied by\nthe deleted rows, rather than possibly needing to allocate new space\nfrom the OS.\n \n-Kevin\n", "msg_date": "Thu, 05 Aug 2010 12:56:52 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum performance on insert" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Sean Chen <[email protected]> wrote:\n>> 1, delete records ...\n>> 2, insert records ...\n>> \n>> if I add \"vacuum analyze\" in-between this two steps, will it help\n>> on the performance on the insert?\n \n> Assuming there are no long-running transactions which would still be\n> able to see the deleted rows, a VACUUM between those statements\n> would allow the INSERT to re-use the space previously occupied by\n> the deleted rows, rather than possibly needing to allocate new space\n> from the OS.\n\nBut on the other side of the coin, the ANALYZE step is probably not very\nhelpful there. Better to do that after you've loaded the new data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Aug 2010 14:11:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum performance on insert " }, { "msg_contents": "hi, thank you for the reply.\n\nI ran a number of tests to try to make sense of this.\n\nWhen I ran with or without vacuum, the number of disk io operations,\ncache operations etc. gathered from pg_stat table for the insertions\nare pretty much the same.\n\nSo I don't see vacuum reduce disk io operations.\n\nNow from what you mentioned below, do you know what's the cost of\npostgres requesting new disk space from OS?\n\nI'm seeing a big performance difference with vacuum, but I need a\nproof to show it's the requesting new space operation that was the\nproblem, not disk io, etc. 
since I would think disk could be expensive\nas well.\n\nThanks,\nSean\n\nOn Thu, Aug 5, 2010 at 2:11 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Sean Chen <[email protected]> wrote:\n>>> 1, delete records ...\n>>> 2, insert records ...\n>>>\n>>> if I add \"vacuum analyze\" in-between this two steps, will it help\n>>> on the performance on the insert?\n>\n>> Assuming there are no long-running transactions which would still be\n>> able to see the deleted rows, a VACUUM between those statements\n>> would allow the INSERT to re-use the space previously occupied by\n>> the deleted rows, rather than possibly needing to allocate new space\n>> from the OS.\n>\n> But on the other side of the coin, the ANALYZE step is probably not very\n> helpful there.  Better to do that after you've loaded the new data.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Sat, 7 Aug 2010 00:32:30 -0400", "msg_from": "Sean Chen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum performance on insert" }, { "msg_contents": "Sean Chen <[email protected]> wrote:\n \n> Now from what you mentioned below, do you know what's the cost of\n> postgres requesting new disk space from OS?\n \nDepending on your OS and its version, your file system, your mount\noptions, and your disk subsystem (and its firmware revision), there\ncould be various effects -- the one likely to be biting you is write\nbarriers. When you allocate additional space from the OS, and it\nextends a file or creates a new file, there might be a write barrier\nto ensure that the file system catalog entries are persisted. This\ncould cause all writes (and possibly even reads) to pause until the\ndata is written to disk.\n \nThat's just a guess, of course. If you have a profiler you can run\nyou might be able to pin it down with that.\n \n-Kevin\n", "msg_date": "Mon, 09 Aug 2010 09:03:37 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum performance on insert" } ]
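A minimal sketch of the ordering suggested in the thread above; target_table, staging_table and the batch_id filter are placeholders for illustration, not the poster's actual workload.

DELETE FROM target_table WHERE batch_id = 42;

-- Plain VACUUM (no ANALYZE) between the two steps lets the INSERT reuse the
-- space freed by the DELETE, assuming no long-running transaction can still
-- see the deleted rows. VACUUM cannot run inside a transaction block.
VACUUM target_table;

INSERT INTO target_table
SELECT * FROM staging_table WHERE batch_id = 42;

-- ANALYZE only after the new rows are loaded, per the advice above.
ANALYZE target_table;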
[ { "msg_contents": "I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM running \nCentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA 7500RPM \ndisks in RAID 6, and for the OLAP workload it feels* slow. I have 6 more \ndisks to add, and the RAID has to be rebuilt in any case, but first I \nwould like to solicit general advice. I know that's little data to go on, \nand I believe in the scientific method, but in this case I don't have the \ntime to make many iterations.\n\nMy questions are simple, but in my reading I have not been able to find \ndefinitive answers:\n\n1) Should I switch to RAID 10 for performance? I see things like \"RAID 5 \nis bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on \nRAID 6. RAID 6 was the original choice for more usable space with good \nredundancy. My current performance is 85MB/s write, 151 MB/s reads (using \ndd of 2xRAM per \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm).\n\n2) Should I configure the ext3 file system with noatime and/or \ndata=writeback or data=ordered? My controller has a battery, the logical \ndrive has write cache enabled (write-back), and the physical devices have \nwrite cache disabled (write-through).\n\n3) Do I just need to spend more time configuring postgresql? My \nnon-default settings were largely generated by pgtune-0.9.3:\n\n max_locks_per_transaction = 128 # manual; avoiding \"out of shared \nmemory\"\n default_statistics_target = 100\n maintenance_work_mem = 1GB\n constraint_exclusion = on\n checkpoint_completion_target = 0.9\n effective_cache_size = 16GB\n work_mem = 352MB\n wal_buffers = 32MB\n checkpoint_segments = 64\n shared_buffers = 2316MB\n max_connections = 32\n\nI am happy to take informed opinion. If you don't have the time to \nproperly cite all your sources but have suggestions, please send them.\n\nThanks in advance,\nKen\n\n* I know \"feels slow\" is not scientific. What I mean is that any single \nquery on a fact table, or any 'rm -rf' of a big directory sends disk \nutilization to 100% (measured with iostat -x 3).\n", "msg_date": "Thu, 05 Aug 2010 14:28:08 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Thursday, August 05, 2010, \"Kenneth Cox\" <[email protected]> wrote:\n> 1) Should I switch to RAID 10 for performance? I see things like \"RAID 5\n> is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little\n> on RAID 6. RAID 6 was the original choice for more usable space with\n> good redundancy. My current performance is 85MB/s write, 151 MB/s reads\n> (using dd of 2xRAM per\n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm).\n\nIf you can spare the drive space, go to RAID 10. RAID 5/6 usually look fine \non single-threaded sequential tests (unless your controller really sucks), \nbut in the real world with multiple processes doing random I/O RAID 10 will \ngo a lot further on the same drives. Plus your recovery time from disk \nfailures will be a lot faster.\n\nIf you can't spare the drive space ... you should buy more drives. \n\n> \n> 2) Should I configure the ext3 file system with noatime and/or\n> data=writeback or data=ordered? My controller has a battery, the logical\n> drive has write cache enabled (write-back), and the physical devices have\n> write cache disabled (write-through).\n\nnoatime is fine but really minor filesystem options rarely show much impact. 
\nMy best performance comes from XFS filesystems created with stripe options \nmatching the underlying RAID array. Anything else is just a bonus.\n\n> * I know \"feels slow\" is not scientific. What I mean is that any single\n> query on a fact table, or any 'rm -rf' of a big directory sends disk\n> utilization to 100% (measured with iostat -x 3).\n\n.. and it should. Any modern system can peg a small disk array without much \neffort. Disks are slow.\n\n-- \n\"No animals were harmed in the recording of this episode. We tried but that \ndamn monkey was just too fast.\"\n", "msg_date": "Thu, 5 Aug 2010 11:53:10 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Thu, Aug 5, 2010 at 12:28 PM, Kenneth Cox <[email protected]> wrote:\n> I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM running\n> CentOS 5.4 x86_64.  I have a ServeRAID 8k controller with 6 SATA 7500RPM\n> disks in RAID 6, and for the OLAP workload it feels* slow.  I have 6 more\n> disks to add, and the RAID has to be rebuilt in any case, but first I would\n> like to solicit general advice.  I know that's little data to go on, and I\n> believe in the scientific method, but in this case I don't have the time to\n> make many iterations.\n>\n> My questions are simple, but in my reading I have not been able to find\n> definitive answers:\n>\n> 1) Should I switch to RAID 10 for performance?  I see things like \"RAID 5 is\n> bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on RAID\n> 6.  RAID 6 was the original choice for more usable space with good\n> redundancy.  My current performance is 85MB/s write, 151 MB/s reads (using\n> dd of 2xRAM per\n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm).\n\nSequential read / write is not very useful for a database benchmark.\nIt does kind of give you a baseline for throughput, but most db access\nis mixed enough that random access becomes the important measurement.\n\nRAID6 is basically RAID5 with a hot spare already built into the\narray. This makes rebuild less of an issue since you can reduce the\nspare io used to rebuild the array to something really small.\nHowever, it's in the same performance ballpark as RAID 5 with the\naccompanying write performance penalty.\n\nRAID-10 is pretty much the only way to go for a DB, and if you need\nmore space, you need more or bigger drives, not RAID-5/6\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Thu, 5 Aug 2010 13:25:55 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Kenneth Cox wrote:\n> 1) Should I switch to RAID 10 for performance? I see things like \n> \"RAID 5 is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I \n> see little on RAID 6. RAID 6 was the original choice for more usable \n> space with good redundancy. My current performance is 85MB/s write, \n> 151 MB/s reads \n\nRAID6 is no better than RAID5 performance wise, it just has better fault \ntolerance. And the ServeRAID 8k is a pretty underpowered card as RAID \ncontrollers go, so it would not be impossible for it computing RAID \nparity and the like to be the bottleneck here. I'd expect a 6-disk \nRAID10 with 7200RPM drives to be closer to 120MB/s on writes, so you're \nnot getting ideal performance there. 
Your read figure is more \ncompetative, but that's usually the RAID5 pattern--decent on reads, \nslugging on writes.\n\n> 2) Should I configure the ext3 file system with noatime and/or \n> data=writeback or data=ordered? My controller has a battery, the \n> logical drive has write cache enabled (write-back), and the physical \n> devices have write cache disabled (write-through).\n\ndata=ordered is the ext3 default and usually a reasonable choice. Using \nwriteback instead can be dangerous, I wouldn't advise starting there. \nnoatime is certainly a good thing, but the speedup is pretty minor if \nyou have a battery-backed write cache.\n\n\n> 3) Do I just need to spend more time configuring postgresql? My \n> non-default settings were largely generated by pgtune-0.9.3\n\nThose look reasonable enough, except no reason to make wal_buffers \nbigger than 16MB. That work_mem figure might be high too, that's a \nknown concern with pgtune I need to knock out of it one day soon. When \nyou are hitting high I/O wait periods, is the system running out of RAM \nand swapping? That can cause really nasty I/O wait.\n\nYour basic hardware is off a bit, but not so badly that I'd start \nthere. Have you turned on slow query logging to see what is hammering \nthe system when the iowait climbs? Often tuning those by looking at the \nEXPLAIN ANALYZE output can be much more effective than hardware/server \nconfiguration tuning.\n\n> * I know \"feels slow\" is not scientific. What I mean is that any \n> single query on a fact table, or any 'rm -rf' of a big directory sends \n> disk utilization to 100% (measured with iostat -x 3).\n\n\"rm -rf\" is really slow on ext3 on just about any hardware. If your \nfact tables aren't in RAM and you run a query against them, paging them \nback in again will hammer the disks until it's done. That's not \nnecessarily indicative of a misconfiguration on its own.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 05 Aug 2010 15:40:11 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "\n> 1) Should I switch to RAID 10 for performance? I see things like \"RAID \n> 5 is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see \n> little on RAID 6.\n\nAs others said, RAID6 is RAID5 + a hot spare.\n\nBasically when you UPDATE a row, at some point postgres will write the \npage which contains that row.\n\nRAID10 : write the page to all mirrors.\nRAID5/6 : write the page to the relevant disk. Read the corresponding page \n from all disks (minus one), compute parity, write parity.\n\nAs you can see one small write will need to hog all drives in the array. \nRAID5/6 performance for small random writes is really, really bad.\n\nDatabases like RAID10 for reads too because when you need some random data \nyou can get it from any of the mirrors, so you get increased parallelism \non reads too.\n\n> with good redundancy. 
My current performance is 85MB/s write, 151 MB/s \n> reads\n\nFYI, I get 200 MB/s sequential out of the software RAID5 of 3 cheap \ndesktop consumer SATA drives in my home multimedia server...\n\n", "msg_date": "Fri, 06 Aug 2010 00:27:43 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On 8/5/10 11:28 AM, Kenneth Cox wrote:\n> I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM\n> running CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA\n> 7500RPM disks in RAID 6, and for the OLAP workload it feels* slow....\n> My current performance is 85MB/s write, 151 MB/s reads\n\nI get 193MB/sec write and 450MB/sec read on a RAID10 on 8 SATA 7200 RPM disks. RAID10 seems to scale linearly -- add disks, get more speed, to the limit of your controller.\n\nCraig\n", "msg_date": "Thu, 05 Aug 2010 15:52:26 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Thu, Aug 5, 2010 at 4:27 PM, Pierre C <[email protected]> wrote:\n>\n>> 1) Should I switch to RAID 10 for performance?  I see things like \"RAID 5\n>> is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on\n>> RAID 6.\n>\n> As others said, RAID6 is RAID5 + a hot spare.\n>\n> Basically when you UPDATE a row, at some point postgres will write the page\n> which contains that row.\n>\n> RAID10 : write the page to all mirrors.\n> RAID5/6 : write the page to the relevant disk. Read the corresponding page\n> from all disks (minus one), compute parity, write parity.\n\nActually it's not quite that bad. You only have to read from two\ndisks, the data disk and the parity disk, then compute new parity and\nwrite to both disks. Still 2 reads / 2 writes for every write.\n\n> As you can see one small write will need to hog all drives in the array.\n> RAID5/6 performance for small random writes is really, really bad.\n>\n> Databases like RAID10 for reads too because when you need some random data\n> you can get it from any of the mirrors, so you get increased parallelism on\n> reads too.\n\nAlso for sequential access RAID-10 can read both drives in a pair\ninterleaved so you get 50% of the data you need from each drive and\ndouble the read rate there. This is even true for linux software md\nRAID.\n\n>> with good redundancy.  My current performance is 85MB/s write, 151 MB/s\n>> reads\n>\n> FYI, I get 200 MB/s sequential out of the software RAID5 of 3 cheap desktop\n> consumer SATA drives in my home multimedia server...\n\nOn a machine NOT configured for max seq throughput (it's used for\nmostly OLTP stuff) I get 325M/s both read and write speed with a 26\ndisk RAID-10. OTOH, that setup gets ~6000 to 7000 transactions per\nsecond with multi-day runs of pgbench.\n", "msg_date": "Thu, 5 Aug 2010 17:09:32 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Definitely switch to RAID-10 .... it's not merely that it's a fair bit\nfaster on normal operations (less seek contention), it's **WAY** faster than\nany parity based RAID (RAID-2 through RAID-6) in degraded mode when you lose\na disk and have to rebuild it. 
This is something many people don't test for,\nand then get bitten badly when they lose a drive under production loads.\n\nUse higher capacity drives if necessary to make your data fit in the number\nof spindles your controller supports ... the difference in cost is modest\ncompared to an overall setup, especially with SATA. Make sure you still\nleave at least one hot spare!\n\nIn normal operation, RAID-5 has to read and write 2 drives for every write\n... not sure about RAID-6 but I suspect it needs to read the entire stripe\nto recalculate the Hamming parity, and it definitely has to write to 3\ndrives for each write, which means seeking all 3 of those drives to that\nposition. In degraded mode (a disk rebuilding) with either of those levels,\nALL the drives have to seek to that point for every write, and for any reads\nof the failed drive, so seek contention is horrendous.\n\nRAID-5 and RAID-6 are designed for optimum capacity, protection, and low\nwrite performance, which is fine for a general file server.\n\nParity RAID simply isn't suitable for database use .... anyone who claims\notherwise either (a) doesn't understand the failure modes of RAID, or (b) is\nrunning in a situation where performance simply doesn't matter.\n\nCheers\nDave\n\nOn Thu, Aug 5, 2010 at 1:28 PM, Kenneth Cox <[email protected]> wrote:\n\n> My questions are simple, but in my reading I have not been able to find\n> definitive answers:\n>\n> 1) Should I switch to RAID 10 for performance? I see things like \"RAID 5\n> is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on\n> RAID 6. RAID 6 was the original choice for more usable space with good\n> redundancy. My current performance is 85MB/s write, 151 MB/s reads (using\n> dd of 2xRAM per\n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm<http://www.westnet.com/%7Egsmith/content/postgresql/pg-disktesting.htm>\n> ).\n>\n\nDefinitely switch to RAID-10 .... it's not merely that it's a fair bit faster on normal operations (less seek contention), it's **WAY** faster than any parity based RAID (RAID-2 through RAID-6) in degraded mode when you lose a disk and have to rebuild it. This is something many people don't test for, and then get bitten badly when they lose a drive under production loads.\nUse higher capacity drives if necessary to make your data fit in the number of spindles your controller supports ... the difference in cost is modest compared to an overall setup, especially with SATA. Make sure you still leave at least one hot spare!\nIn normal operation, RAID-5 has to read and write 2 drives for every write ... not sure about RAID-6 but I suspect it needs to read the entire stripe to recalculate the Hamming parity, and it definitely has to write to 3 drives for each write, which means seeking all 3 of those drives to that position. In degraded mode (a disk rebuilding) with either of those levels, ALL the drives have to seek to that point for every write, and for any reads of the failed drive, so seek contention is horrendous.\nRAID-5 and RAID-6 are designed for optimum capacity, protection, and low write performance, which is fine for a general file server. Parity RAID simply isn't suitable for database use .... 
anyone who claims otherwise either (a) doesn't understand the failure modes of RAID, or (b) is running in a situation where performance simply doesn't matter.\nCheersDaveOn Thu, Aug 5, 2010 at 1:28 PM, Kenneth Cox <[email protected]> wrote:\nMy questions are simple, but in my reading I have not been able to find definitive answers:\n\n1) Should I switch to RAID 10 for performance?  I see things like \"RAID 5 is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on RAID 6.  RAID 6 was the original choice for more usable space with good redundancy.  My current performance is 85MB/s write, 151 MB/s reads (using dd of 2xRAM per http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm).", "msg_date": "Thu, 5 Aug 2010 18:13:05 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Thu, Aug 5, 2010 at 5:13 PM, Dave Crooke <[email protected]> wrote:\n> Definitely switch to RAID-10 .... it's not merely that it's a fair bit\n> faster on normal operations (less seek contention), it's **WAY** faster than\n> any parity based RAID (RAID-2 through RAID-6) in degraded mode when you lose\n> a disk and have to rebuild it. This is something many people don't test for,\n> and then get bitten badly when they lose a drive under production loads.\n\nHad a friend with a 600G x 5 disk RAID-5 and one drive died. It took\nnearly 48 hours to rebuild the array.\n\n> Use higher capacity drives if necessary to make your data fit in the number\n> of spindles your controller supports ... the difference in cost is modest\n> compared to an overall setup, especially with SATA. Make sure you still\n> leave at least one hot spare!\n\nYeah, a lot of chassis hold an even number of drives, and I wind up\nwith 2 hot spares because of it.\n\n> Parity RAID simply isn't suitable for database use .... anyone who claims\n> otherwise either (a) doesn't understand the failure modes of RAID, or (b) is\n> running in a situation where performance simply doesn't matter.\n\nThe only time it's acceptable is when you're running something like\nlow write volume report generation / batch processing, where you're\nmostly sequentially reading and writing 100s of gigabytes at a time in\none or maybe two threads.\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Thu, 5 Aug 2010 17:24:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On 06/08/10 06:28, Kenneth Cox wrote:\n> I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM \n> running CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 \n> SATA 7500RPM disks in RAID 6, and for the OLAP workload it feels* \n> slow. I have 6 more disks to add, and the RAID has to be rebuilt in \n> any case, but first I would like to solicit general advice. I know \n> that's little data to go on, and I believe in the scientific method, \n> but in this case I don't have the time to make many iterations.\n>\n> My questions are simple, but in my reading I have not been able to \n> find definitive answers:\n>\n> 1) Should I switch to RAID 10 for performance? I see things like \n> \"RAID 5 is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I \n> see little on RAID 6. RAID 6 was the original choice for more usable \n> space with good redundancy. 
My current performance is 85MB/s write, \n> 151 MB/s reads (using dd of 2xRAM per \n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm).\n>\n\nNormally I'd agree with the others and recommend RAID10 - but you say \nyou have an OLAP workload - if it is *heavily* read biased you may get \nbetter performance with RAID5 (more effective disks to read from). \nHaving said that, your sequential read performance right now is pretty \nlow (151 MB/s - should be double this), which may point to an issue \nwith this controller. Unfortunately this *may* be important for an OLAP \nworkload (seq scans of big tables).\n\n\n\n> 2) Should I configure the ext3 file system with noatime and/or \n> data=writeback or data=ordered? My controller has a battery, the \n> logical drive has write cache enabled (write-back), and the physical \n> devices have write cache disabled (write-through).\n>\n\nProbably wise to use noatime. If you have a heavy write workload (i.e so \nwhat I just wrote above does *not* apply), then you might find adjusting \nthe ext3 commit interval upwards from its default of 5 seconds can help \n(I'm doing some testing at the moment and commit=20 seemed to improve \nperformance by about 5-10%).\n\n> 3) Do I just need to spend more time configuring postgresql? My \n> non-default settings were largely generated by pgtune-0.9.3:\n>\n> max_locks_per_transaction = 128 # manual; avoiding \"out of shared \n> memory\"\n> default_statistics_target = 100\n> maintenance_work_mem = 1GB\n> constraint_exclusion = on\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 16GB\n> work_mem = 352MB\n> wal_buffers = 32MB\n> checkpoint_segments = 64\n> shared_buffers = 2316MB\n> max_connections = 32\n>\n\nPossibly higher checkpoint_segments and lower wal_buffers (I recall \nsomeone - maybe Greg suggesting that there was no benefit in having the \nlatter > 10MB). I wonder about setting shared_buffers higher - how large \nis the database?\n\nCheers\n\nMark\n\n", "msg_date": "Fri, 06 Aug 2010 11:32:28 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Thursday, August 05, 2010, Mark Kirkwood <[email protected]> \nwrote:\n> Normally I'd agree with the others and recommend RAID10 - but you say\n> you have an OLAP workload - if it is *heavily* read biased you may get\n> better performance with RAID5 (more effective disks to read from).\n> Having said that, your sequential read performance right now is pretty\n> low (151 MB/s - should be double this), which may point to an issue\n> with this controller. Unfortunately this *may* be important for an OLAP\n> workload (seq scans of big tables).\n\nProbably a low (default) readahead limitation. ext3 doesn't help but it can \nusually get up over 400MB/sec. Doubt it's the controller.\n\n-- \n\"No animals were harmed in the recording of this episode. 
We tried but that \ndamn monkey was just too fast.\"\n", "msg_date": "Thu, 5 Aug 2010 16:58:13 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On 06/08/10 11:58, Alan Hodgson wrote:\n> On Thursday, August 05, 2010, Mark Kirkwood<[email protected]>\n> wrote:\n> \n>> Normally I'd agree with the others and recommend RAID10 - but you say\n>> you have an OLAP workload - if it is *heavily* read biased you may get\n>> better performance with RAID5 (more effective disks to read from).\n>> Having said that, your sequential read performance right now is pretty\n>> low (151 MB/s - should be double this), which may point to an issue\n>> with this controller. Unfortunately this *may* be important for an OLAP\n>> workload (seq scans of big tables).\n>> \n> Probably a low (default) readahead limitation. ext3 doesn't help but it can\n> usually get up over 400MB/sec. Doubt it's the controller.\n>\n> \n\nYeah - good suggestion, so cranking up readahead (man blockdev) and \nretesting is recommended.\n\nCheers\n\nMark\n", "msg_date": "Fri, 06 Aug 2010 12:31:50 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On 06/08/10 12:31, Mark Kirkwood wrote:\n> On 06/08/10 11:58, Alan Hodgson wrote:\n>> On Thursday, August 05, 2010, Mark \n>> Kirkwood<[email protected]>\n>> wrote:\n>>> Normally I'd agree with the others and recommend RAID10 - but you say\n>>> you have an OLAP workload - if it is *heavily* read biased you may get\n>>> better performance with RAID5 (more effective disks to read from).\n>>> Having said that, your sequential read performance right now is pretty\n>>> low (151 MB/s - should be double this), which may point to an issue\n>>> with this controller. Unfortunately this *may* be important for an OLAP\n>>> workload (seq scans of big tables).\n>> Probably a low (default) readahead limitation. ext3 doesn't help but \n>> it can\n>> usually get up over 400MB/sec. Doubt it's the controller.\n>>\n>\n> Yeah - good suggestion, so cranking up readahead (man blockdev) and \n> retesting is recommended.\n>\n>\n\n... sorry, it just occurred to wonder about the stripe or chunk size \nused in the array, as making this too small can also severely hamper \nsequential performance.\n\nCheers\n\nMark\n", "msg_date": "Fri, 06 Aug 2010 12:35:44 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Thu, 5 Aug 2010, Scott Marlowe wrote:\n> RAID6 is basically RAID5 with a hot spare already built into the\n> array.\n\nOn Fri, 6 Aug 2010, Pierre C wrote:\n> As others said, RAID6 is RAID5 + a hot spare.\n\nNo. RAID6 is NOT RAID5 plus a hot spare.\n\nRAID5 uses a single parity datum (XOR) to ensure protection against data \nloss if one drive fails.\n\nRAID6 uses two different sets of parity (Reed-Solomon) to ensure \nprotection against data loss if two drives fail simultaneously.\n\nIf you have a RAID5 set with a hot spare, and you lose two drives, then \nyou have data loss. If the same happens to a RAID6 set, then there is no \ndata loss.\n\nMatthew\n\n-- \n And the lexer will say \"Oh look, there's a null string. Oooh, there's \n another. 
And another.\", and will fall over spectacularly when it realises\n there are actually rather a lot.\n - Computer Science Lecturer (edited)\n", "msg_date": "Fri, 6 Aug 2010 10:17:06 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Fri, Aug 6, 2010 at 3:17 AM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 5 Aug 2010, Scott Marlowe wrote:\n>>\n>> RAID6 is basically RAID5 with a hot spare already built into the\n>> array.\n>\n> On Fri, 6 Aug 2010, Pierre C wrote:\n>>\n>> As others said, RAID6 is RAID5 + a hot spare.\n>\n> No. RAID6 is NOT RAID5 plus a hot spare.\n\nThe original phrase was that RAID 6 was like RAID 5 with a hot spare\nALREADY BUILT IN.\n", "msg_date": "Fri, 6 Aug 2010 10:39:56 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": ">>> As others said, RAID6 is RAID5 + a hot spare.\n>>\n>> No. RAID6 is NOT RAID5 plus a hot spare.\n>\n> The original phrase was that RAID 6 was like RAID 5 with a hot spare\n> ALREADY BUILT IN.\n\nBuilt-in, or not - it is neither. It is more than that, actually. RAID\n6 is like RAID 5 in that it uses parity for redundancy and pays a\nwrite cost for maintaining those parity blocks, but will maintain data\nintegrity in the face of 2 simultaneous drive failures.\n\nIn terms of storage cost, it IS like paying for RAID5 + a hot spare,\nbut the protection is better.\n\nA RAID 5 with a hot spare built in could not survive 2 simultaneous\ndrive failures.\n", "msg_date": "Fri, 6 Aug 2010 13:32:18 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Fri, Aug 6, 2010 at 11:32 AM, Justin Pitts <[email protected]> wrote:\n>>>> As others said, RAID6 is RAID5 + a hot spare.\n>>>\n>>> No. RAID6 is NOT RAID5 plus a hot spare.\n>>\n>> The original phrase was that RAID 6 was like RAID 5 with a hot spare\n>> ALREADY BUILT IN.\n>\n> Built-in, or not - it is neither. It is more than that, actually. RAID\n> 6 is like RAID 5 in that it uses parity for redundancy and pays a\n> write cost for maintaining those parity blocks, but will maintain data\n> integrity in the face of 2 simultaneous drive failures.\n\nYes, I know that. I am very familiar with how RAID6 works. RAID5\nwith the hot spare already rebuilt / built in is a good enough answer\nfor management where big words like parity might scare some PHBs.\n\n> In terms of storage cost, it IS like paying for RAID5 + a hot spare,\n> but the protection is better.\n>\n> A RAID 5 with a hot spare built in could not survive 2 simultaneous\n> drive failures.\n\nExactly. Which is why I had said with the hot spare already built in\n/ rebuilt. Geeze, pedant much?\n\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Fri, 6 Aug 2010 11:59:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "> Yes, I know that.  I am very familiar with how RAID6 works.  
RAID5\n> with the hot spare already rebuilt / built in is a good enough answer\n> for management where big words like parity might scare some PHBs.\n>\n>> In terms of storage cost, it IS like paying for RAID5 + a hot spare,\n>> but the protection is better.\n>>\n>> A RAID 5 with a hot spare built in could not survive 2 simultaneous\n>> drive failures.\n>\n> Exactly.  Which is why I had said with the hot spare already built in\n> / rebuilt.\n\nMy apologies. The 'rebuilt' slant escaped me. Thats a fair way to cast it.\n\n> Geeze, pedant much?\n\nOf course!\n", "msg_date": "Fri, 6 Aug 2010 14:17:18 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "\nOn Aug 5, 2010, at 4:09 PM, Scott Marlowe wrote:\n\n> On Thu, Aug 5, 2010 at 4:27 PM, Pierre C <[email protected]> wrote:\n>> \n>>> 1) Should I switch to RAID 10 for performance? I see things like \"RAID 5\n>>> is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on\n>>> RAID 6.\n>> \n>> As others said, RAID6 is RAID5 + a hot spare.\n>> \n>> Basically when you UPDATE a row, at some point postgres will write the page\n>> which contains that row.\n>> \n>> RAID10 : write the page to all mirrors.\n>> RAID5/6 : write the page to the relevant disk. Read the corresponding page\n>> from all disks (minus one), compute parity, write parity.\n> \n> Actually it's not quite that bad. You only have to read from two\n> disks, the data disk and the parity disk, then compute new parity and\n> write to both disks. Still 2 reads / 2 writes for every write.\n> \n>> As you can see one small write will need to hog all drives in the array.\n>> RAID5/6 performance for small random writes is really, really bad.\n>> \n>> Databases like RAID10 for reads too because when you need some random data\n>> you can get it from any of the mirrors, so you get increased parallelism on\n>> reads too.\n> \n> Also for sequential access RAID-10 can read both drives in a pair\n> interleaved so you get 50% of the data you need from each drive and\n> double the read rate there. This is even true for linux software md\n> RAID.\n\n\nMy experience is that it is ONLY true for software RAID and ZFS. Most hardware raid controllers read both mirrors and validate that the data is equal, and thus writing is about as fast as read. Tested with Adaptec, 3Ware, Dell PERC 4/5/6, and LSI MegaRaid hardware wise. In all cases it was clear that the hardware raid was not using data from the two mirrors to improve read performance for sequential or random I/O.\n> \n>>> with good redundancy. My current performance is 85MB/s write, 151 MB/s\n>>> reads\n>> \n>> FYI, I get 200 MB/s sequential out of the software RAID5 of 3 cheap desktop\n>> consumer SATA drives in my home multimedia server...\n> \n> On a machine NOT configured for max seq throughput (it's used for\n> mostly OLTP stuff) I get 325M/s both read and write speed with a 26\n> disk RAID-10. 
OTOH, that setup gets ~6000 to 7000 transactions per\n> second with multi-day runs of pgbench.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sat, 7 Aug 2010 23:46:49 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Sun, Aug 8, 2010 at 12:46 AM, Scott Carey <[email protected]> wrote:\n>\n> On Aug 5, 2010, at 4:09 PM, Scott Marlowe wrote:\n>\n>> On Thu, Aug 5, 2010 at 4:27 PM, Pierre C <[email protected]> wrote:\n>>>\n>>>> 1) Should I switch to RAID 10 for performance?  I see things like \"RAID 5\n>>>> is bad for a DB\" and \"RAID 5 is slow with <= 6 drives\" but I see little on\n>>>> RAID 6.\n>>>\n>>> As others said, RAID6 is RAID5 + a hot spare.\n>>>\n>>> Basically when you UPDATE a row, at some point postgres will write the page\n>>> which contains that row.\n>>>\n>>> RAID10 : write the page to all mirrors.\n>>> RAID5/6 : write the page to the relevant disk. Read the corresponding page\n>>> from all disks (minus one), compute parity, write parity.\n>>\n>> Actually it's not quite that bad.  You only have to read from two\n>> disks, the data disk and the parity disk, then compute new parity and\n>> write to both disks.  Still 2 reads / 2 writes for every write.\n>>\n>>> As you can see one small write will need to hog all drives in the array.\n>>> RAID5/6 performance for small random writes is really, really bad.\n>>>\n>>> Databases like RAID10 for reads too because when you need some random data\n>>> you can get it from any of the mirrors, so you get increased parallelism on\n>>> reads too.\n>>\n>> Also for sequential access RAID-10 can read both drives in a pair\n>> interleaved so you get 50% of the data you need from each drive and\n>> double the read rate there.  This is even true for linux software md\n>> RAID.\n>\n>\n> My experience is that it is ONLY true for software RAID and ZFS.  Most hardware raid controllers read both mirrors and validate that the data is equal, and thus writing is about as fast as read.  Tested with Adaptec, 3Ware, Dell PERC 4/5/6, and LSI MegaRaid hardware wise.  In all cases it was clear that the hardware raid was not using data from the two mirrors to improve read performance for sequential or random I/O.\n\nInteresting. I'm using an Areca, I'll have to run some tests and see\nif a mirror is reading at > 100% read speed of a single drive or not.\n", "msg_date": "Sun, 8 Aug 2010 01:50:59 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Greg Smith wrote:\n> > 2) Should I configure the ext3 file system with noatime and/or \n> > data=writeback or data=ordered? My controller has a battery, the \n> > logical drive has write cache enabled (write-back), and the physical \n> > devices have write cache disabled (write-through).\n> \n> data=ordered is the ext3 default and usually a reasonable choice. Using \n> writeback instead can be dangerous, I wouldn't advise starting there. 
\n> noatime is certainly a good thing, but the speedup is pretty minor if \n> you have a battery-backed write cache.\n\nWe recomment 'data=writeback' for ext3 in our docs:\n\n\thttp://www.postgresql.org/docs/9.0/static/wal-intro.html\n\t\n\tTip: Because WAL restores database file contents after a crash,\n\tjournaled file systems are not necessary for reliable storage of the\n\tdata files or WAL files. In fact, journaling overhead can reduce\n\tperformance, especially if journaling causes file system data to be\n\tflushed to disk. Fortunately, data flushing during journaling can often\n\tbe disabled with a file system mount option, e.g. data=writeback on a\n\tLinux ext3 file system. Journaled file systems do improve boot speed\n\tafter a crash. \n\nShould this be changed?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Fri, 13 Aug 2010 14:17:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Bruce Momjian wrote:\n> We recomment 'data=writeback' for ext3 in our docs\n> \n\nOnly for the WAL though, which is fine, and I think spelled out clearly \nenough in the doc section you quoted. Ken's system has one big RAID \nvolume, which means he'd be mounting the data files with 'writeback' \ntoo; that's the thing to avoid.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 13 Aug 2010 14:41:29 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Don't ever have WAL and data on the same OS volume as ext3.\n\nIf data=writeback, performance will be fine, data integrity will be ok for WAL, but data integrity will not be sufficient for the data partition.\nIf data=ordered, performance will be very bad, but data integrity will be OK.\n\nThis is because an fsync on ext3 flushes _all dirty pages in the file system_ to disk, not just those for the file being fsync'd.\n\nOne partition for WAL, one for data. If using ext3 this is essentially a performance requirement no matter how your array is set up underneath. \n\nOn Aug 13, 2010, at 11:41 AM, Greg Smith wrote:\n\n> Bruce Momjian wrote:\n>> We recomment 'data=writeback' for ext3 in our docs\n>> \n> \n> Only for the WAL though, which is fine, and I think spelled out clearly \n> enough in the doc section you quoted. Ken's system has one big RAID \n> volume, which means he'd be mounting the data files with 'writeback' \n> too; that's the thing to avoid.\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 16 Aug 2010 09:28:52 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Scott Carey wrote:\n> This is because an fsync on ext3 flushes _all dirty pages in the file system_ to disk, not just those for the file being fsync'd.\n> One partition for WAL, one for data. 
If using ext3 this is essentially a performance requirement no matter how your array is set up underneath. \n> \n\nUnless you want the opposite of course. Some systems split out the WAL \nonto a second disk, only to discover checkpoint I/O spikes become a \nproblem all of the sudden after that. The fsync calls for the WAL \nwrites keep the write cache for the data writes from ever getting too \nbig. This slows things down on average, but makes the worst case less \nstressful. Free lunches are so hard to find nowadays...\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 16 Aug 2010 13:46:21 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Mon, Aug 16, 2010 at 01:46:21PM -0400, Greg Smith wrote:\n> Scott Carey wrote:\n> >This is because an fsync on ext3 flushes _all dirty pages in the file system_ to disk, not just those for the file being fsync'd.\n> >One partition for WAL, one for data. If using ext3 this is\n> >essentially a performance requirement no matter how your array is\n> >set up underneath.\n>\n> Unless you want the opposite of course. Some systems split out the\n> WAL onto a second disk, only to discover checkpoint I/O spikes\n> become a problem all of the sudden after that. The fsync calls for\n> the WAL writes keep the write cache for the data writes from ever\n> getting too big. This slows things down on average, but makes the\n> worst case less stressful. Free lunches are so hard to find\n> nowadays...\nOr use -o sync. Or configure a ridiciuosly low dirty_memory amount\n(which has a problem on large systems because 1% can still be too\nmuch. Argh.)...\n\nAndres\n", "msg_date": "Mon, 16 Aug 2010 22:02:27 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Andres Freund wrote:\n> Or use -o sync. Or configure a ridiciuosly low dirty_memory amount\n> (which has a problem on large systems because 1% can still be too\n> much. Argh.)...\n> \n\n-o sync completely trashes performance, and trying to set the \ndirty_ratio values to even 1% doesn't really work due to things like the \n\"congestion avoidance\" code in the kernel. If you sync a lot more \noften, which putting the WAL on the same disk as the database \naccidentally does for you, that works surprisingly well at avoiding this \nwhole class of problem on ext3. A really good solution is going to take \na full rewrite of the PostgreSQL checkpoint logic though, which will get \nsorted out during 9.1 development. (cue dramatic foreshadowing music here)\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 16 Aug 2010 16:13:22 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Mon, Aug 16, 2010 at 04:13:22PM -0400, Greg Smith wrote:\n> Andres Freund wrote:\n> >Or use -o sync. Or configure a ridiciuosly low dirty_memory amount\n> >(which has a problem on large systems because 1% can still be too\n> >much. 
Argh.)...\n>\n> -o sync completely trashes performance, and trying to set the\n> dirty_ratio values to even 1% doesn't really work due to things like\n> the \"congestion avoidance\" code in the kernel. If you sync a lot\n> more often, which putting the WAL on the same disk as the database\n> accidentally does for you, that works surprisingly well at avoiding\n> this whole class of problem on ext3. A really good solution is\n> going to take a full rewrite of the PostgreSQL checkpoint logic\n> though, which will get sorted out during 9.1 development. (cue\n> dramatic foreshadowing music here)\n-o sync works ok enough for the data partition (surely not the wal) if you make the\nbackground writer less aggressive.\n\nBut yes. A new checkpointing logic + a new syncing logic\n(prepare_fsync() earlier and then fsync() later) would be a nice\nthing. Do you plan to work on that?\n\nAndres\n", "msg_date": "Mon, 16 Aug 2010 22:18:38 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Andres Freund wrote:\n> A new checkpointing logic + a new syncing logic\n> (prepare_fsync() earlier and then fsync() later) would be a nice\n> thing. Do you plan to work on that?\n> \n\nThe background writer already caches fsync calls into a queue, so the \nprepare step you're thinking needs to be there is already. The problem \nis that the actual fsync calls happen in a tight loop. That we're busy \nfixing.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 16 Aug 2010 16:54:19 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Mon, Aug 16, 2010 at 04:54:19PM -0400, Greg Smith wrote:\n> Andres Freund wrote:\n> >A new checkpointing logic + a new syncing logic\n> >(prepare_fsync() earlier and then fsync() later) would be a nice\n> >thing. Do you plan to work on that?\n> The background writer already caches fsync calls into a queue, so\n> the prepare step you're thinking needs to be there is already. The\n> problem is that the actual fsync calls happen in a tight loop. That\n> we're busy fixing.\nThat doesn't help that much on many systems with a somewhat deep\nqueue. An fsync() equals a barrier so it has the effect of stopping\nreordering around it - especially on systems with larger multi-disk\narrays thats pretty expensive.\nYou can achieve surprising speedups, at least in my experience, by\nforcing the kernel to start writing out pages *without enforcing\nbarriers* first and then later enforce a barrier to be sure its\nactually written out. Which, in a simplified case, turns the earlier\nneeded multiple barriers into a single one (in practise you want to\ncall fsync() anyway, but thats not a big problem if its already\nwritten out).\n\nAndres\n", "msg_date": "Mon, 16 Aug 2010 23:05:52 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Scott Carey wrote:\n> Don't ever have WAL and data on the same OS volume as ext3.\n> \n> If data=writeback, performance will be fine, data integrity will be ok\n> for WAL, but data integrity will not be sufficient for the data\n> partition. 
If data=ordered, performance will be very bad, but data\n> integrity will be OK.\n> \n> This is because an fsync on ext3 flushes _all dirty pages in the file\n> system_ to disk, not just those for the file being fsync'd.\n> \n> One partition for WAL, one for data. If using ext3 this is essentially\n> a performance requirement no matter how your array is set up underneath.\n\nDo we need to document this?\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Mon, 16 Aug 2010 21:31:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Andres Freund wrote:\n> An fsync() equals a barrier so it has the effect of stopping\n> reordering around it - especially on systems with larger multi-disk\n> arrays thats pretty expensive.\n> You can achieve surprising speedups, at least in my experience, by\n> forcing the kernel to start writing out pages *without enforcing\n> barriers* first and then later enforce a barrier to be sure its\n> actually written out.\n\nStandard practice on high performance systems with good filesystems and \na battery-backed controller is to turn off barriers anyway. That's one \nof the first things to tune on XFS for example, when you have a reliable \ncontroller. I don't have enough data on ext4 to comment on tuning for \nit yet.\n\nThe sole purpose for the whole Linux write barrier implementation in my \nworld is to flush the drive's cache, when the database does writes onto \ncheap SATA drives that will otherwise cache dangerously. Barriers don't \nhave any place on a serious system that I can see. The battery-backed \nRAID controller you have to use to make fsync calls fast anyway can do \nsome simple write reordering, but the operating system doesn't ever have \nenough visibility into what it's doing to make intelligent decisions \nabout that anyway. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 17 Aug 2010 04:29:10 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "Bruce Momjian wrote:\n> Scott Carey wrote:\n> \n>> Don't ever have WAL and data on the same OS volume as ext3.\n>>\n>> ...\n>> One partition for WAL, one for data. If using ext3 this is essentially\n>> a performance requirement no matter how your array is set up underneath.\n>> \n>\n> Do we need to document this?\n> \n\nNot for 9.0. What Scott is suggesting is often the case, but not \nalways; I can produce a counter example at will now that I know exactly \nwhich closets have the skeletons in them. The underlying situation is \nmore complicated due to some limitations to the whole \"spread \ncheckpoint\" code that is turning really sour on newer hardware with \nlarge amounts of RAM. I have about 5 pages of written notes on this \nspecific issue so far, and that keeps growing every week. That's all \nleading toward a proposed 9.1 change to the specific fsync behavior. \nAnd I expect to dump a large stack of documentation to support that \npatch that will address this whole area. 
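To make the dirty_memory tuning Andres and Greg are going back and forth about a little more concrete: on Linux the relevant knobs are the vm.dirty_* sysctls, and on machines with a lot of RAM the percentage-based ones are too coarse, which is why the byte-based variants exist. This is a rough sketch only -- the sysctl names are standard Linux, but the numbers are placeholders, not recommendations:

	# Show the current writeback thresholds (percent of RAM)
	sysctl vm.dirty_ratio vm.dirty_background_ratio

	# On kernels that support the *_bytes variants, absolute limits avoid the
	# "1% of 64GB is still too much" problem; the values here are arbitrary.
	sysctl -w vm.dirty_background_bytes=268435456   # start background writeback at 256MB of dirty data
	sysctl -w vm.dirty_bytes=1073741824             # block writers once 1GB is dirty
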
I'll put the whole thing onto \nthe wiki as soon as my 9.0 related work settles down.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\nBruce Momjian wrote:\n\nScott Carey wrote:\n \n\nDon't ever have WAL and data on the same OS volume as ext3.\n\n...\nOne partition for WAL, one for data. If using ext3 this is essentially\na performance requirement no matter how your array is set up underneath.\n \n\n\nDo we need to document this?\n \n\n\nNot for 9.0.  What Scott is suggesting is often the case, but not\nalways; I can produce a counter example at will now that I know exactly\nwhich closets have the skeletons in them.  The underlying situation is\nmore complicated due to some limitations to the whole \"spread\ncheckpoint\" code that is turning really sour on newer hardware with\nlarge amounts of RAM.  I have about 5 pages of written notes on this\nspecific issue so far, and that keeps growing every week.  That's all\nleading toward a proposed 9.1 change to the specific fsync behavior. \nAnd I expect to dump a large stack of documentation to support that\npatch that will address this whole area.  I'll put the whole thing onto\nthe wiki as soon as my 9.0 related work settles down.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us", "msg_date": "Tue, 17 Aug 2010 04:42:25 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" }, { "msg_contents": "On Tuesday 17 August 2010 10:29:10 Greg Smith wrote:\n> Andres Freund wrote:\n> > An fsync() equals a barrier so it has the effect of stopping\n> > reordering around it - especially on systems with larger multi-disk\n> > arrays thats pretty expensive.\n> > You can achieve surprising speedups, at least in my experience, by\n> > forcing the kernel to start writing out pages *without enforcing\n> > barriers* first and then later enforce a barrier to be sure its\n> > actually written out.\n> \n> Standard practice on high performance systems with good filesystems and\n> a battery-backed controller is to turn off barriers anyway. That's one\n> of the first things to tune on XFS for example, when you have a reliable\n> controller. I don't have enough data on ext4 to comment on tuning for\n> it yet.\n> \n> The sole purpose for the whole Linux write barrier implementation in my\n> world is to flush the drive's cache, when the database does writes onto\n> cheap SATA drives that will otherwise cache dangerously. Barriers don't\n> have any place on a serious system that I can see. The battery-backed\n> RAID controller you have to use to make fsync calls fast anyway can do\n> some simple write reordering, but the operating system doesn't ever have\n> enough visibility into what it's doing to make intelligent decisions\n> about that anyway.\nEven if were not talking about a write barrier in an \"ensure its written out \nof the cache\" way it still stops the io-scheduler from reordering. I \nbenchmarked it (custom app) and it was very noticeable on a bunch of different \nsystems (with a good BBUed RAID).\n\nAndres\n", "msg_date": "Tue, 17 Aug 2010 11:37:38 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice configuring ServeRAID 8k for performance" } ]
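Pulling the thread's advice together: WAL and data go on separate ext3 partitions, with data=writeback confined to the WAL filesystem and the default data=ordered kept for the data files. A minimal sketch of that layout as mount commands (the same options would normally live in /etc/fstab); the device names and mount points are assumptions, not details from the thread:

	# Data files keep the default, safer data=ordered journaling.
	mount -o noatime,data=ordered   /dev/sdb1 /var/lib/pgsql/9.0/data
	# WAL on its own partition: journaling of file data is unnecessary for the
	# WAL, so data=writeback is safe here and avoids the journal flush overhead.
	mount -o noatime,data=writeback /dev/sdc1 /var/lib/pgsql/9.0/data/pg_xlog
	# With a trusted battery-backed write cache, barriers are often disabled as
	# well (barrier=0 on ext3, nobarrier on XFS) -- never on drives whose
	# volatile caches are the only thing doing the caching.
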
[ { "msg_contents": "If anyone is interested I just completed a series of benchmarks of stock\nPostgresql running on a normal HDD vs a SSD.\n\nIf you don't want to read the post, the summary is that SSDs are 5 to 7\ntimes faster than a 7200RPM HDD drive under a pgbench load.\n\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n\nIs this what everyone else is seeing?\n\nThanks!\n\n-- \[email protected]\n\nIf anyone is interested I just completed a series of benchmarks of stock Postgresql running on a normal HDD vs a SSD.  If you don't want to read the post, the summary is that SSDs are 5 to 7 times faster than a 7200RPM HDD drive under a pgbench load.\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\nIs this what everyone else is seeing?Thanks!--\[email protected]", "msg_date": "Sat, 7 Aug 2010 16:47:55 -0700", "msg_from": "Michael March <[email protected]>", "msg_from_op": true, "msg_subject": "Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD" }, { "msg_contents": "SSD's actually vary quite a bit with typical postgres benchmark workloads. Many of them also do not guarantee data that has been sync'd will not be lost if power fails (most hard drives with a sane OS and file system do).\n\n\nOn Aug 7, 2010, at 4:47 PM, Michael March wrote:\n\nIf anyone is interested I just completed a series of benchmarks of stock Postgresql running on a normal HDD vs a SSD.\n\nIf you don't want to read the post, the summary is that SSDs are 5 to 7 times faster than a 7200RPM HDD drive under a pgbench load.\n\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n\nIs this what everyone else is seeing?\n\nThanks!\n\n--\[email protected]<mailto:[email protected]>\n\n\nSSD's actually vary quite a bit with typical postgres benchmark workloads.  Many of them also do not guarantee data that has been sync'd will not be lost if power fails (most hard drives with a sane OS and file system do).On Aug 7, 2010, at 4:47 PM, Michael March wrote:If anyone is interested I just completed a series of benchmarks of stock Postgresql running on a normal HDD vs a SSD.  If you don't want to read the post, the summary is that SSDs are 5 to 7 times faster than a 7200RPM HDD drive under a pgbench load.\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\nIs this what everyone else is seeing?Thanks!--\[email protected]", "msg_date": "Sat, 7 Aug 2010 23:36:25 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "> SSD's actually vary quite a bit with typical postgres benchmark workloads.\n>\n\nYou mean various SSDs from different vendors? 
Or are you saying the same SSD\nmodel might vary in performance from drive to drive?\n\n\n> Many of them also do not guarantee data that has been sync'd will not be\n> lost if power fails (most hard drives with a sane OS and file system do).\n>\n\nWhat feature does an SSD need to have to ensure that sync'd data is indeed\nwritten to the SSD in the case of power loss?\n\n\n\n>\n>\n> On Aug 7, 2010, at 4:47 PM, Michael March wrote:\n>\n> If anyone is interested I just completed a series of benchmarks of stock\n> Postgresql running on a normal HDD vs a SSD.\n>\n> If you don't want to read the post, the summary is that SSDs are 5 to 7\n> times faster than a 7200RPM HDD drive under a pgbench load.\n>\n>\n> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n>\n> Is this what everyone else is seeing?\n>\n> Thanks!\n>\n", "msg_date": "Sat, 7 Aug 2010 23:49:38 -0700", "msg_from": "Michael March <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Sat, Aug 7, 2010 at 5:47 PM, Michael March <[email protected]> wrote:\n> If anyone is interested I just completed a series of benchmarks of stock\n> Postgresql running on a normal HDD vs a SSD.\n> If you don't want to read the post, the summary is that SSDs are 5 to 7\n> times faster than a 7200RPM HDD drive under a pgbench load.\n> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n>\n> Is this what everyone else is seeing?\n> Thanks!\n\nIt's a good first swing, but I'd be interested in seeing how it runs\nwith various numbers of clients, and how it runs as the number of\nclients goes past optimal. I.e. a nice smooth downward trend or a\nhorrible drop-off for whichever drives.\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Sun, 8 Aug 2010 00:55:09 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": ">\n> On Sat, Aug 7, 2010 at 5:47 PM, Michael March <[email protected]> wrote:\n> > If anyone is interested I just completed a series of benchmarks of stock\n> > Postgresql running on a normal HDD vs a SSD.\n> > If you don't want to read the post, the summary is that SSDs are 5 to 7\n> > times faster than a 7200RPM HDD drive under a pgbench load.\n> >\n> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n> >\n> > Is this what everyone else is seeing?\n> > Thanks!\n>\n> It's a good first swing, but I'd be interested in seeing how it runs\n> with various numbers of clients, and how it runs as the number of\n> clients goes past optimal. I.e. a nice smooth downward trend or a\n> horrible drop-off for whichever drives.\n>\n>\nYeah, I was thinking the same thing.\n\nI need to automate the tests so I can incrementally increase the scaling of\nthe seed tables and the number of simultaneous clients over time. Put\nanother way, I need to do A LOT more tests that will gently increment all the\ntestable factors one small step at a time.\n", "msg_date": "Sun, 8 Aug 2010 00:03:32 -0700", "msg_from": "Michael March <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Sun, Aug 8, 2010 at 12:49 AM, Michael March <[email protected]> wrote:\n>\n>> SSD's actually vary quite a bit with typical postgres benchmark workloads.\n>\n> You mean various SSDs from different vendors? Or are you saying the same SSD\n> model might vary in performance from drive to drive?\n>\n>>\n>> Many of them also do not guarantee data that has been sync'd will not be\n>> lost if power fails (most hard drives with a sane OS and file system do).\n>\n> What feature does an SSD need to have to ensure that sync'd data is indeed\n> written to the SSD in the case of power loss?\n\nA big freaking capacitor and the ability to notice power's been cut\nand start writing out the cache. There are a few that have it that\nare coming out right about now.
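Stepping back to Michael's plan a couple of messages up to automate the runs: a sweep over scale factors and client counts is easy to script around pgbench. A sketch under assumed names only -- the database name, scale factors, client counts and run length are arbitrary choices, not anything from this thread:

	#!/bin/sh
	DB=pgbench_test
	for scale in 10 100 1000; do                # pgbench -i -s N seeds N*100k accounts rows
	    dropdb "$DB" 2>/dev/null || true
	    createdb "$DB"
	    pgbench -i -s "$scale" "$DB"
	    for clients in 1 2 4 8 16 32 64; do     # deliberately walk past the expected sweet spot
	        echo "scale=$scale clients=$clients"
	        pgbench -c "$clients" -T 60 "$DB"   # fixed 60-second run per data point
	    done
	done
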
There was a post about one such drive\na few days ago, it was like 50G and $450 or so, so not cheap, but not\nthat bad compared to the $7000 drive bays with 16 15k6 drives I've\nused in to the past to get good performance (3 to 4k tps)\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Sun, 8 Aug 2010 01:30:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Michael March wrote:\n> If anyone is interested I just completed a series of benchmarks of \n> stock Postgresql running on a normal HDD vs a SSD. \n>\n> If you don't want to read the post, the summary is that SSDs are 5 to \n> 7 times faster than a 7200RPM HDD drive under a pgbench load.\n>\n> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n>\n> Is this what everyone else is seeing?\nI tested a SSD with a capacitor and posted conclusions here \nhttp://archives.postgresql.org/pgsql-performance/2010-07/msg00449.php\n\nregards,\nYeb Havinga\n\n", "msg_date": "Sun, 08 Aug 2010 12:12:07 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Aug 7, 2010, at 11:49 PM, Michael March wrote:\n\n\nSSD's actually vary quite a bit with typical postgres benchmark workloads.\n\nYou mean various SSDs from different vendors? Or are you saying the same SSD model might vary in performance from drive to drive?\n\nModel to model (more specifically, controller chip to controller chip -- i.e. most 'Indilinx Barefoot' controller based SSD's perform similar).\n\n\n Many of them also do not guarantee data that has been sync'd will not be lost if power fails (most hard drives with a sane OS and file system do).\n\nWhat feature does an SSD need to have to insure that sync'd data is indeed written to the SSD in the case of power loss?\n\n\n\nEither properly flush to storage when the OS / File sytem asks for it (most SSD's don't, most Hard Drives do), or have a supercapacitor to flush data on power loss.\n\nThe former can be achieved by turning off the write cache on some drives (such as Intel's X25-M and -E), but hurts performance.\n\nAlso, the amount of data at risk in a power loss varies between drives. For Intel's drives, its a small chunk of data ( < 256K). For some other drives, the cache can be over 30MB of outstanding writes.\nFor some workloads this is acceptable -- not every application is doing financial transactions. Not every part of the system needs to be on an SSD either -- the WAL, and various table spaces can all have different data integrity and performance requirements.\n\n\n\nOn Aug 7, 2010, at 4:47 PM, Michael March wrote:\n\nIf anyone is interested I just completed a series of benchmarks of stock Postgresql running on a normal HDD vs a SSD.\n\nIf you don't want to read the post, the summary is that SSDs are 5 to 7 times faster than a 7200RPM HDD drive under a pgbench load.\n\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n\nIs this what everyone else is seeing?\n\nThanks!\n\n\n\nOn Aug 7, 2010, at 11:49 PM, Michael March wrote:SSD's actually vary quite a bit with typical postgres benchmark workloads. \nYou mean various SSDs from different vendors? 
Or are you saying the same SSD model might vary in performance from drive to drive?Model to model (more specifically, controller chip to controller chip -- i.e. most 'Indilinx Barefoot' controller based SSD's perform similar).    \n Many of them also do not guarantee data that has been sync'd will not be lost if power fails (most hard drives with a sane OS and file system do).\nWhat feature does an SSD need to have to insure that sync'd data is indeed written to the SSD in the case of power loss? Either properly flush to storage when the OS / File sytem asks for it (most SSD's don't, most Hard Drives do), or have a supercapacitor to flush data on power loss.The former can be achieved by turning off the write cache on some drives (such as Intel's X25-M and -E), but hurts performance.Also, the amount of data at risk in a power loss varies between drives.  For Intel's drives, its a small chunk of data ( < 256K).  For some other drives, the cache can be over 30MB of outstanding writes.For some workloads this is acceptable -- not every application is doing financial transactions.   Not every part of the system needs to be on an SSD either -- the WAL, and various table spaces can all have different data integrity and performance requirements.\nOn Aug 7, 2010, at 4:47 PM, Michael March wrote:If anyone is interested I just completed a series of benchmarks of stock Postgresql running on a normal HDD vs a SSD.  \nIf you don't want to read the post, the summary is that SSDs are 5 to 7 times faster than a 7200RPM HDD drive under a pgbench load.\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n\nIs this what everyone else is seeing?Thanks!", "msg_date": "Mon, 9 Aug 2010 09:49:50 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Mon, 2010-08-09 at 09:49 -0700, Scott Carey wrote:\n> Also, the amount of data at risk in a power loss varies between\n> drives. For Intel's drives, its a small chunk of data ( < 256K). For\n> some other drives, the cache can be over 30MB of outstanding writes.\n> For some workloads this is acceptable -- not every application is\n> doing financial transactions. Not every part of the system needs to\n> be on an SSD either -- the WAL, and various table spaces can all have\n> different data integrity and performance requirements.\n\nI don't think it makes sense to speak about the data integrity of a\ndrive in terms of the amount of data at risk, especially with a DBMS.\nDepending on which 256K you lose, you might as well lose your entire\ndatabase.\n\nThat may be an exaggeration, but the point is that it's not as simple as\n\"this drive is only risking 256K data loss per outage\".\n\nRegards,\n\tJeff Davis\n\n\n", "msg_date": "Mon, 09 Aug 2010 15:41:59 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results:\n\tSSD vs desktop HDD" }, { "msg_contents": "Scott Carey wrote:\n> Also, the amount of data at risk in a power loss varies between \n> drives. For Intel's drives, its a small chunk of data ( < 256K). For \n> some other drives, the cache can be over 30MB of outstanding writes.\n> For some workloads this is acceptable\n\nNo, it isn't ever acceptable. You can expect the type of data loss you \nget when a cache fails to honor write flush calls results in \ncatastrophic database corruption. 
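One cheap way to spot the cache behaviour being described here, before trusting a drive with real data, is to time synchronous writes against what the hardware could honestly deliver. A rough sanity check only, not a proof, and the path is an assumption:

	# 1000 synchronous 8K writes; oflag=dsync forces each write to be flushed.
	dd if=/dev/zero of=/var/lib/pgsql/flushtest bs=8k count=1000 oflag=dsync
	rm -f /var/lib/pgsql/flushtest
	# A lone 7200RPM disk that really flushes manages on the order of 100-200 of
	# these per second; thousands per second without a battery-backed cache
	# usually means something is acknowledging writes it has not yet stored.
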
It's not \"I lost the last few \nseconds\"; it's \"the database is corrupted and won't start\" after a \ncrash. This is why we pound on this topic on this list. A SSD that \nfails to honor flush requests is completely worthless for anything other \nthan toy databases. You can expect significant work to recover any \nportion of your data after the first unexpected power loss under heavy \nwrite load in this environment, during which you're down. We do \ndatabase corruption recovery at 2ndQuadrant; while I can't talk about \nthe details of some recent incidents, I am not speaking theoretically \nwhen I warn about this.\n\nMichael, I would suggest you read \nhttp://www.postgresql.org/docs/current/static/wal-reliability.html and \nlink to it at the end of your article. You are recommending that people \nconsider a configuration that will result in their data being lost. \nThat can be acceptable, if for example your data is possible to recreate \nfrom backups or the like. But people should be extremely clear that \ntrade-off is happening, and your blog post is not doing that yet. Part \nof the reason for the bang per buck you're seeing here is that cheap \nSSDs are cheating.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 10 Aug 2010 12:21:20 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Jeff Davis wrote:\n> Depending on which 256K you lose, you might as well lose your entire\n> database.\n> \n\nLet's be nice and assume that you only lose one 8K block because of the \nSSD write cache; that's not so bad, right? Guess what--you could easily \nbe the next lucky person who discovers the block corrupted is actually \nin the middle of the pg_class system catalog, where the list of tables \nin the database is at! Enjoy getting your data back again with that \npiece missing. It's really a fun time, I'll tell you that.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 10 Aug 2010 12:27:20 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": " On 8/10/2010 12:21 PM, Greg Smith wrote:\n> Scott Carey wrote:\n>> Also, the amount of data at risk in a power loss varies between \n>> drives. For Intel's drives, its a small chunk of data ( < 256K). \n>> For some other drives, the cache can be over 30MB of outstanding writes.\n>> For some workloads this is acceptable\n>\n> No, it isn't ever acceptable. You can expect the type of data loss \n> you get when a cache fails to honor write flush calls results in \n> catastrophic database corruption. It's not \"I lost the last few \n> seconds\"; it's \"the database is corrupted and won't start\" after a \n> crash. This is why we pound on this topic on this list. A SSD that \n> fails to honor flush requests is completely worthless for anything \n> other than toy databases. You can expect significant work to recover \n> any portion of your data after the first unexpected power loss under \n> heavy write load in this environment, during which you're down. 
We do \n> database corruption recovery at 2ndQuadrant; while I can't talk about \n> the details of some recent incidents, I am not speaking theoretically \n> when I warn about this.\n>\n\nWhat about putting indexes on them? If the drive fails and drops writes \non those, they could be rebuilt - assuming your system can function \nwithout the index(es) temporarily.\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Tue, 10 Aug 2010 13:57:20 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Brad Nicholson wrote:\n> On 8/10/2010 12:21 PM, Greg Smith wrote:\n>> Scott Carey wrote:\n>>> Also, the amount of data at risk in a power loss varies between\n>>> drives. For Intel's drives, its a small chunk of data ( < 256K). \n>>> For some other drives, the cache can be over 30MB of outstanding\n>>> writes.\n>>> For some workloads this is acceptable\n>>\n>> No, it isn't ever acceptable. You can expect the type of data loss\n>> you get when a cache fails to honor write flush calls results in\n>> catastrophic database corruption. It's not \"I lost the last few\n>> seconds\"; it's \"the database is corrupted and won't start\" after a\n>> crash. This is why we pound on this topic on this list. A SSD that\n>> fails to honor flush requests is completely worthless for anything\n>> other than toy databases. You can expect significant work to recover\n>> any portion of your data after the first unexpected power loss under\n>> heavy write load in this environment, during which you're down. We\n>> do database corruption recovery at 2ndQuadrant; while I can't talk\n>> about the details of some recent incidents, I am not speaking\n>> theoretically when I warn about this.\n>\n> What about putting indexes on them? If the drive fails and drops\n> writes on those, they could be rebuilt - assuming your system can\n> function without the index(es) temporarily.\nYou could put indices on them but as noted by Scott, he's SPOT ON.\n\nANY disk that says \"write is complete\" when it really is not is entirely\nunsuitable for ANY real database use. It is simply a matter of time\nbefore you have a failure of some sort that results in catastrophic data\nloss. If you're LUCKY the database won't start and you know you're in\ntrouble. If you're UNLUCKY the database DOES start but there is\nundetected and unrecoverable data corruption somewhere inside the data\ntables, which you WILL discover at the most-inopportune moment (like\nwhen you desperately NEED that business record for some reason.) 
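For drives where the volatile cache is the problem being described, the usual mitigation short of replacing the hardware is to turn that cache off and let a battery-backed controller do the caching instead. A hedged example -- the device name is an assumption, and expect a large write-latency hit if there is no BBU in front:

	hdparm -W  /dev/sda    # show the drive's current write-cache setting
	hdparm -W0 /dev/sda    # disable the volatile write cache (-W1 re-enables it)
	# SAS/SCSI drives expose the same WCE bit through sdparm or vendor tools;
	# the exact syntax varies.
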
You\ncannot put either the tables or the logs on such a drive without running\nthe risk of a data corruption problem that WILL lose data and MAY be\ncatastrophic, depending on exactly what fails when.\n\nWhile it is possible to recover that which is not damaged from a\ndatabase that has corruption like this it simply is not possible to\nrecover data that never made it to the disk - no matter what you do -\nand the time and effort expended (not to mention money if you have to\nbring in someone with specialized skills you do not possess) that result\nfrom such \"decisions\" when things go wrong are extreme.\n\nDon't do it.\n\n-- Karl", "msg_date": "Tue, 10 Aug 2010 13:13:47 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n\n> ANY disk that says \"write is complete\" when it really is not is entirely\n> unsuitable for ANY real database use.  It is simply a matter of time\n\nWhat about read only slaves where there's a master with 100+spinning\nhard drives \"getting it right\" and you need a half dozen or so read\nslaves? I can imagine that being ok, as long as you don't restart a\nserver after a crash without checking on it.\n", "msg_date": "Tue, 10 Aug 2010 12:23:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Brad Nicholson wrote:\n> What about putting indexes on them? If the drive fails and drops \n> writes on those, they could be rebuilt - assuming your system can \n> function without the index(es) temporarily.\n\nDumping indexes on SSD is one of the better uses for them, presuming you \ncan survive what is likely to be an outage from a \"can the site handle \nfull load?\" perspective while they rebuild after a crash. As I'm sure \nBrad is painfully aware of already, index rebuilding in PostgreSQL can \ntake a while. To spin my broken record here again, the main thing to \nnote when you consider that--relocate indexes onto SSD--is that the ones \nyou are most concerned about the performance of were likely to be \nalready sitting in RAM anyway, meaning the SSD speedup doesn't help \nreads much. So the giant performance boost just isn't there in that case.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 10 Aug 2010 14:28:31 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n> \n>> ANY disk that says \"write is complete\" when it really is not is entirely\n>> unsuitable for ANY real database use. It is simply a matter of time\n>> \n>\n> What about read only slaves where there's a master with 100+spinning\n> hard drives \"getting it right\" and you need a half dozen or so read\n> slaves? 
I can imagine that being ok, as long as you don't restart a\n> server after a crash without checking on it.\n> \nA read-only slave isn't read-only, is it?\n\nI mean, c'mon - how does the data get there?\n\nIF you mean \"a server that only accepts SELECTs, does not accept UPDATEs\nor INSERTs, and on a crash **reloads the entire database from the\nmaster**\", then ok.\n\nMost people who will do this won't reload it after a crash. They'll\n\"inspect\" the database and say \"ok\", and put it back online. Bad Karma\nwill ensue in the future.\n\nIncidentally, that risk is not theoretical either (I know about this one\nfrom hard experience. Fortunately the master was still ok and I was\nable to force a full-table copy.... I didn't like it as the database was\na few hundred GB, but I had no choice.)\n\n-- Karl", "msg_date": "Tue, 10 Aug 2010 13:38:08 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": " On 8/10/2010 2:28 PM, Greg Smith wrote:\n> Brad Nicholson wrote:\n>> What about putting indexes on them? If the drive fails and drops \n>> writes on those, they could be rebuilt - assuming your system can \n>> function without the index(es) temporarily.\n>\n> Dumping indexes on SSD is one of the better uses for them, presuming \n> you can survive what is likely to be an outage from a \"can the site \n> handle full load?\" perspective while they rebuild after a crash. As \n> I'm sure Brad is painfully aware of already, index rebuilding in \n> PostgreSQL can take a while. To spin my broken record here again, the \n> main thing to note when you consider that--relocate indexes onto \n> SSD--is that the ones you are most concerned about the performance of \n> were likely to be already sitting in RAM anyway, meaning the SSD \n> speedup doesn't help reads much. So the giant performance boost just \n> isn't there in that case.\n>\nThe case where I'm thinking they may be of use is for indexes you can \nafford to lose. I'm thinking of ones that are needed by nightly batch \njobs, down stream systems or reporting - the sorts of things that you \ncan turn off during a rebuild, and where the data sets are not likely \nto be in cache.\n\nWe have a few such cases, but we don't need the speed of SSD's for them.\n\nPersonally, I wouldn't entertain any SSD with a capacitor backing it for \nanything, even indexes. Not worth the hassle to me.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Tue, 10 Aug 2010 14:47:16 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On 8/10/2010 2:38 PM, Karl Denninger wrote:\n> Scott Marlowe wrote:\n>> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger<[email protected]> wrote:\n>> \n>>> ANY disk that says \"write is complete\" when it really is not is entirely\n>>> unsuitable for ANY real database use. It is simply a matter of time\n>>> \n>>\n>> What about read only slaves where there's a master with 100+spinning\n>> hard drives \"getting it right\" and you need a half dozen or so read\n>> slaves? 
I can imagine that being ok, as long as you don't restart a\n>> server after a crash without checking on it.\n>> \n> A read-only slave isn't read-only, is it?\n>\n> I mean, c'mon - how does the data get there?\n>\n\nA valid case is a Slony replica if used for query offloading (not for \nDR). It's considered a read-only subscriber from the perspective of \nSlony as only Slony can modify the data (although you are technically \ncorrect, it is not read only - controlled write may be more accurate).\n\n In case of failure, a rebuild + resubscribe gets you back to the same \nconsistency. If you have high IO requirements, and don't have the \nbudget to rack up extra disk arrays to meet them, it could be an option.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n\n\n\n\n\n On 8/10/2010 2:38 PM, Karl Denninger wrote:\n \n\n Scott Marlowe wrote:\n \nOn Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n \n\nANY disk that says \"write is complete\" when it really is not is entirely\nunsuitable for ANY real database use.  It is simply a matter of time\n \n\n\nWhat about read only slaves where there's a master with 100+spinning\nhard drives \"getting it right\" and you need a half dozen or so read\nslaves? I can imagine that being ok, as long as you don't restart a\nserver after a crash without checking on it.\n \n\n A read-only slave isn't read-only, is it?\n\n I mean, c'mon - how does the data get there?\n\n\n\n A valid case is a Slony replica if used for query offloading (not\n for DR).  It's considered a read-only subscriber from the\n perspective of Slony as only Slony can modify the data  (although\n you are technically correct, it is not read only - controlled write\n may be more accurate).  \n\n  In case of failure, a rebuild + resubscribe gets you back to the\n same consistency.  If you have high IO requirements, and don't have\n the budget to rack up extra disk arrays to meet them, it could be an\n option.\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.", "msg_date": "Tue, 10 Aug 2010 14:54:12 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Brad Nicholson wrote:\n> On 8/10/2010 2:38 PM, Karl Denninger wrote:\n>> Scott Marlowe wrote:\n>>> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n>>> \n>>>> ANY disk that says \"write is complete\" when it really is not is entirely\n>>>> unsuitable for ANY real database use. It is simply a matter of time\n>>>> \n>>>\n>>> What about read only slaves where there's a master with 100+spinning\n>>> hard drives \"getting it right\" and you need a half dozen or so read\n>>> slaves? I can imagine that being ok, as long as you don't restart a\n>>> server after a crash without checking on it.\n>>> \n>> A read-only slave isn't read-only, is it?\n>>\n>> I mean, c'mon - how does the data get there?\n>>\n> A valid case is a Slony replica if used for query offloading (not for\n> DR). It's considered a read-only subscriber from the perspective of\n> Slony as only Slony can modify the data (although you are technically\n> correct, it is not read only - controlled write may be more accurate). \n>\n> In case of failure, a rebuild + resubscribe gets you back to the same\n> consistency. 
If you have high IO requirements, and don't have the\n> budget to rack up extra disk arrays to meet them, it could be an option.\nCAREFUL with that model and beliefs.\n\nSpecifically, the following will hose you without warning:\n\n1. SLONY gets a change on the master.\n2. SLONY commits it to the (read-only) slave.\n3. Confirmation comes back to the master that the change was propagated.\n4. Slave CRASHES without actually committing the changed data to stable\nstorage.\n\nWhen the slave restarts it will not know that the transaction was lost. \nNeither will the master, since it was told that it was committed. Slony\nwill happily go on its way and replicate forward, without any indication\nof a problem - except that on the slave, there are one or more\ntransactions that are **missing**.\n\nSome time later you issue an update that goes to the slave, but the\nchange previously lost causes the slave commit to violate referential\nintegrity. SLONY will fail to propagate that change and all behind it\n- it effectively locks at that point in time.\n\nYou can recover from this by dropping the slave from replication and\nre-inserting it, but that forces a full-table copy of everything in the\nreplication set. The bad news is that the queries to the slave in\nquestion may have been returning erroneous data for some unknown period\nof time prior to the lockup in replication (which hopefully you detect\nreasonably quickly - you ARE watching SLONY queue depth with some\nautomated process, right?)\n\nI can both cause this in the lab and have had it happen in the field. \nIt's a nasty little problem that bit me on a series of disks that\nclaimed to have write caching off, but in fact did not. I was very\nhappy that the data on the master was good at that point, as if I had\nneeded to failover to the slave (thinking it was a \"good\" copy) I would\nhave been in SERIOUS trouble.\n\n-- Karl", "msg_date": "Tue, 10 Aug 2010 14:28:18 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On 8/10/2010 3:28 PM, Karl Denninger wrote:\n> Brad Nicholson wrote:\n>> On 8/10/2010 2:38 PM, Karl Denninger wrote:\n>>> Scott Marlowe wrote:\n>>>> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger<[email protected]> wrote:\n>>>> \n>>>>> ANY disk that says \"write is complete\" when it really is not is entirely\n>>>>> unsuitable for ANY real database use. It is simply a matter of time\n>>>>> \n>>>>\n>>>> What about read only slaves where there's a master with 100+spinning\n>>>> hard drives \"getting it right\" and you need a half dozen or so read\n>>>> slaves? I can imagine that being ok, as long as you don't restart a\n>>>> server after a crash without checking on it.\n>>>> \n>>> A read-only slave isn't read-only, is it?\n>>>\n>>> I mean, c'mon - how does the data get there?\n>>>\n>> A valid case is a Slony replica if used for query offloading (not for \n>> DR). It's considered a read-only subscriber from the perspective of \n>> Slony as only Slony can modify the data (although you are \n>> technically correct, it is not read only - controlled write may be \n>> more accurate).\n>>\n>> In case of failure, a rebuild + resubscribe gets you back to the \n>> same consistency. 
If you have high IO requirements, and don't have \n>> the budget to rack up extra disk arrays to meet them, it could be an \n>> option.\n> CAREFUL with that model and beliefs.\n>\n> Specifically, the following will hose you without warning:\n>\n> 1. SLONY gets a change on the master.\n> 2. SLONY commits it to the (read-only) slave.\n> 3. Confirmation comes back to the master that the change was propagated.\n> 4. Slave CRASHES without actually committing the changed data to \n> stable storage.\n>\nWhat will hose you is assuming that your data will be okay in the case \nof a failure, which is a very bad assumption to make in the case on \nunreliable SSD's. You are assuming I am implying that these should be \ntreated like reliable media - I am not.\n\nIn case of failure, you need to assume data loss until proven \notherwise. If there is a problem, rebuild.\n\n> When the slave restarts it will not know that the transaction was \n> lost. Neither will the master, since it was told that it was \n> committed. Slony will happily go on its way and replicate forward, \n> without any indication of a problem - except that on the slave, there \n> are one or more transactions that are **missing**.\n>\n\nCorrect.\n> Some time later you issue an update that goes to the slave, but the \n> change previously lost causes the slave commit to violate referential \n> integrity. SLONY will fail to propagate that change and all behind \n> it - it effectively locks at that point in time.\n>\nIt will lock data flow to that subscriber, but not to others.\n\n> You can recover from this by dropping the slave from replication and \n> re-inserting it, but that forces a full-table copy of everything in \n> the replication set. The bad news is that the queries to the slave in \n> question may have been returning erroneous data for some unknown \n> period of time prior to the lockup in replication (which hopefully you \n> detect reasonably quickly - you ARE watching SLONY queue depth with \n> some automated process, right?)\n>\nThere are ways around that - run two subscribers and redirect your \nqueries on failure. Don't bring up the failed replica until it is \nverified or rebuilt.\n\n> I can both cause this in the lab and have had it happen in the field. \n> It's a nasty little problem that bit me on a series of disks that \n> claimed to have write caching off, but in fact did not. I was very \n> happy that the data on the master was good at that point, as if I had \n> needed to failover to the slave (thinking it was a \"good\" copy) I \n> would have been in SERIOUS trouble.\n\nIt's very easy to cause those sorts of problems.\n\nWhat I am saying is that the technology can have a use, if you are \naware of the sharp edges, and can both work around them and live with \nthem. Everything you are citing is correct, but is more implying that \nthey they are blindly thrown in without understanding the risks and \nmitigating them.\n\nI'm also not suggesting that this is a configuration I would endorse, \nbut it could potentially save a lot of money in certain use cases.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n\n\n\n\n\n On 8/10/2010 3:28 PM, Karl Denninger wrote:\n \n\n Brad Nicholson wrote:\n \n\n On 8/10/2010 2:38 PM, Karl Denninger wrote:\n \n\n Scott Marlowe wrote:\n \nOn Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n \n\nANY disk that says \"write is complete\" when it really is not is entirely\nunsuitable for ANY real database use.  
It is simply a matter of time\n \n\n\nWhat about read only slaves where there's a master with 100+spinning\nhard drives \"getting it right\" and you need a half dozen or so read\nslaves? I can imagine that being ok, as long as you don't restart a\nserver after a crash without checking on it.\n \n\n A read-only slave isn't read-only, is it?\n\n I mean, c'mon - how does the data get there?\n\n\n A valid case is a Slony replica if used for query offloading\n (not for\n DR).  It's considered a read-only subscriber from the\n perspective of\n Slony as only Slony can modify the data  (although you are\n technically\n correct, it is not read only - controlled write may be more\n accurate).  \n\n  In case of failure, a rebuild + resubscribe gets you back to\n the same\n consistency.  If you have high IO requirements, and don't have\n the\n budget to rack up extra disk arrays to meet them, it could be an\n option.\n CAREFUL with that model and beliefs.\n\n\n\n Specifically, the following will hose you without warning:\n\n 1. SLONY gets a change on the master.\n 2. SLONY commits it to the (read-only) slave.\n 3. Confirmation comes back to the master that the change was\n propagated.\n 4. Slave CRASHES without actually committing the changed data to\n stable\n storage.\n\n\n What will hose you is assuming that your data will be okay in the\n case of a failure, which is a very bad assumption to make in the\n case on unreliable SSD's.  You are assuming I am implying that these\n should be treated like reliable media - I am not.\n\n In case of failure, you need to assume data loss until proven\n otherwise.  If there is a problem, rebuild.\n\n\n When the slave restarts it will not know that the transaction was\n lost.  Neither will the master, since it was told that it was\n committed.  Slony will happily go on its way and replicate\n forward,\n without any indication of a problem - except that on the slave,\n there\n are one or more transactions that are **missing**.\n\n\n\n Correct.\n\n Some time later you issue an update that goes to the slave, but\n the\n change previously lost causes the slave commit to violate\n referential\n integrity.   SLONY will fail to propagate that change and all\n behind it\n - it effectively locks at that point in time.\n\n\n It will lock data flow to that subscriber, but not to others.\n\n\n You can recover from this by dropping the slave from replication\n and\n re-inserting it, but that forces a full-table copy of everything\n in the\n replication set.  The bad news is that the queries to the slave in\n question may have been returning erroneous data for some unknown\n period\n of time prior to the lockup in replication (which hopefully you\n detect\n reasonably quickly - you ARE watching SLONY queue depth with some\n automated process, right?)\n\n\n There are ways around that - run two subscribers and redirect your\n queries on failure.  Don't bring up the failed replica until it is\n verified or rebuilt.\n\n\n I can both cause this in the lab and have had it happen in the\n field. \n It's a nasty little problem that bit me on a series of disks that\n claimed to have write caching off, but in fact did not.  
I was\n very\n happy that the data on the master was good at that point, as if I\n had\n needed to failover to the slave (thinking it was a \"good\" copy) I\n would\n have been in SERIOUS trouble.\n\n\n It's very easy to cause those sorts of problems.\n\n What  I am saying is that the technology can have a use, if you are\n aware of the sharp edges, and can both work around them and live\n with them.  Everything you are citing is correct, but is more\n implying that they they are blindly thrown in without understanding\n the risks and mitigating them.\n\n I'm also not suggesting that this is a configuration I would\n endorse, but it could potentially save a lot of money in certain use\n cases.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.", "msg_date": "Tue, 10 Aug 2010 15:51:31 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Tue, Aug 10, 2010 at 12:38 PM, Karl Denninger <[email protected]> wrote:\n> Scott Marlowe wrote:\n>\n> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n>\n>\n> ANY disk that says \"write is complete\" when it really is not is entirely\n> unsuitable for ANY real database use.  It is simply a matter of time\n>\n>\n> What about read only slaves where there's a master with 100+spinning\n> hard drives \"getting it right\" and you need a half dozen or so read\n> slaves? I can imagine that being ok, as long as you don't restart a\n> server after a crash without checking on it.\n>\n>\n> A read-only slave isn't read-only, is it?\n>\n> I mean, c'mon - how does the data get there?\n\nWell, duh. However, what I'm looking at is having two big servers in\nfailover running on solid reliable hardware, and then a small army of\nread only slony slaves that are used for things like sending user rss\nfeeds and creating weekly reports and such. These 1U machines with 12\nto 24 cores and a single SSD drive are \"disposable\" in terms that if\nthey ever crash, there's a simple script to run that reinits the db\nand then subscribes them to the set.\n\nMy point being, no matter how terrible an idea a certain storage media\nis, there's always a use case for it. Even if it's very narrow.\n", "msg_date": "Tue, 10 Aug 2010 13:52:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Tue, Aug 10, 2010 at 3:52 PM, Scott Marlowe <[email protected]> wrote:\n> My point being, no matter how terrible an idea a certain storage media\n> is, there's always a use case for it.  
Even if it's very narrow.\n\nThe trouble is, if extra subscribers induce load on the \"master,\"\nwhich they presumably will, then that sliver of \"use case\" may very\nwell get obscured by the cost, such that the sliver should be treated\nas not existing :-(.\n-- \nhttp://linuxfinances.info/info/linuxdistributions.html\n", "msg_date": "Tue, 10 Aug 2010 16:00:24 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Tue, Aug 10, 2010 at 2:00 PM, Christopher Browne <[email protected]> wrote:\n> On Tue, Aug 10, 2010 at 3:52 PM, Scott Marlowe <[email protected]> wrote:\n>> My point being, no matter how terrible an idea a certain storage media\n>> is, there's always a use case for it.  Even if it's very narrow.\n>\n> The trouble is, if extra subscribers induce load on the \"master,\"\n> which they presumably will, then that sliver of \"use case\" may very\n> well get obscured by the cost, such that the sliver should be treated\n> as not existing :-(.\n\nOne master, one slave, master handles all writes, slave handles all of\nthe other subscribers. I've run a setup like this with as many as 8\nor so slaves at the bottom of the pile with no problems at all.\n", "msg_date": "Tue, 10 Aug 2010 14:06:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "\nOn Aug 10, 2010, at 9:21 AM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> Also, the amount of data at risk in a power loss varies between \n>> drives. For Intel's drives, its a small chunk of data ( < 256K). For \n>> some other drives, the cache can be over 30MB of outstanding writes.\n>> For some workloads this is acceptable\n> \n> No, it isn't ever acceptable. You can expect the type of data loss you \n> get when a cache fails to honor write flush calls results in \n> catastrophic database corruption. It's not \"I lost the last few \n> seconds\";\n\nI never said it was.\n\n> it's \"the database is corrupted and won't start\" after a \n> crash. \n\nWhich is sometimes acceptables. There is NO GUARANTEE that you won't lose data, ever. An increase in the likelihood is an acceptable tradeoff in some situations, especially when it is small. On ANY power loss event, with or without battery backed caches and such, you should do a consistency check on the system proactively. With less reliable hardware, that task becomes much more of a burden, and is much more likely to require restoring data from somewhere.\n\nWhat is the likelihood that your RAID card fails, or that the battery that reported 'good health' only lasts 5 minutes and you lose data before power is restored? What is the likelihood of human error?\nNot that far off from the likelihood of power failure in a datacenter with redundant power. One MUST have a DR plan. Never assume that your perfect hardware won't fail.\n\n> This is why we pound on this topic on this list. A SSD that \n> fails to honor flush requests is completely worthless for anything other \n> than toy databases. \n\nOverblown. Not every DB and use case is a financial application or business critical app. Many are not toys at all. Slave, read only DB's (or simply subset tablespaces) ...\n\nIndexes. (per application, schema)\nTables. 
(per application, schema)\nSystem tables / indexes.\nWAL.\n\nEach has different reliability requirement and consequences from losing recently written data. less than 8K can be fatal to the WAL, or table data. Corrupting some tablespaces is not a big deal. Corrupting others is catastrophic. The problem with the assertion that this hardware is worthless is that it implies that every user, every use case, is at the far end of the reliability requirement spectrum.\n\nYes, that can be a critical requirement for many, perhaps most, DB's. But there are many uses for slightly unsafe storage systems.\n\n> You can expect significant work to recover any \n> portion of your data after the first unexpected power loss under heavy \n> write load in this environment, during which you're down. We do \n> database corruption recovery at 2ndQuadrant; while I can't talk about \n> the details of some recent incidents, I am not speaking theoretically \n> when I warn about this.\n\nI've done the single-user mode recover system tables by hand thing myself at 4AM, on a system with battery backed RAID 10, redundant power, etc. Raid cards die, and 10TB recovery times from backup are long.\n\nIts a game of balancing your data loss tolerance with the likelihood of power failure. Both of these variables are highly variable, and not just with 'toy' dbs. If you know what you are doing, you can use 'fast but not completely safe' storage for many things safely. Chance of loss is NEVER zero, do not assume that 'good' hardware is flawless.\n\nImagine a common internet case where synchronous_commit=false is fine. Recovery from backups is a pain (but a daily snapshot is taken of the important tables, and weekly for easily recoverable other stuff). If you expect one power related failure every 2 years, it might be perfectly reasonable to use 'unsafe' SSD's in order to support high transaction load on the risk that that once every 2 year downtime is 12 hours long instead of 30 minutes, and includes losing up to a day's information. Applications like this exist all over the place.\n\n\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n", "msg_date": "Wed, 11 Aug 2010 16:05:35 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "\nOn Aug 10, 2010, at 11:28 AM, Greg Smith wrote:\n\n> Brad Nicholson wrote:\n>> What about putting indexes on them? If the drive fails and drops \n>> writes on those, they could be rebuilt - assuming your system can \n>> function without the index(es) temporarily.\n> \n> Dumping indexes on SSD is one of the better uses for them, presuming you \n> can survive what is likely to be an outage from a \"can the site handle \n> full load?\" perspective while they rebuild after a crash. As I'm sure \n> Brad is painfully aware of already, index rebuilding in PostgreSQL can \n> take a while. To spin my broken record here again, the main thing to \n> note when you consider that--relocate indexes onto SSD--is that the ones \n> you are most concerned about the performance of were likely to be \n> already sitting in RAM anyway, meaning the SSD speedup doesn't help \n> reads much. So the giant performance boost just isn't there in that case.\n> \n\nFor an OLTP type system, yeah. But for DW/OLAP and batch processing the gains are pretty big. 
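A concrete sketch of the synchronous_commit trade-off described a little further up, for anyone who wants to scope that risk to a single database or transaction rather than the whole cluster (the database and table names here are made up purely for illustration):

    -- relax commit durability for one low-value database only;
    -- a crash can lose the last few commits, but cannot corrupt the data
    ALTER DATABASE clickstream SET synchronous_commit = off;

    -- or narrow it further, to a single cheap transaction
    BEGIN;
    SET LOCAL synchronous_commit = off;
    INSERT INTO page_views (url, viewed_at) VALUES ('/home', now());
    COMMIT;

Note this is a different knob from fsync: synchronous_commit = off only risks losing recently committed transactions, while fsync = off (or a drive that ignores flush requests) risks the corruption being discussed elsewhere in this thread.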
Those indexes get kicked out of RAM and then pulled back in a lot. I'm talking about a server with 72GB of RAM that can't keep enough indexes in memory to avoid a lot of random access. Putting the indexes on an SSD has lowered the random I/O load on the other drives a lot, letting them get through sequential scans a lot faster.\n\nEstimated power failure, once every 18 months (mostly due to human error). Rebuild indexes offline for 40 minutes every 18 months? No problem.\n\n\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n", "msg_date": "Wed, 11 Aug 2010 16:10:29 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Aug 10, 2010, at 11:38 AM, Karl Denninger wrote:\n\nScott Marlowe wrote:\n\nOn Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]><mailto:[email protected]> wrote:\n\n\nANY disk that says \"write is complete\" when it really is not is entirely\nunsuitable for ANY real database use. It is simply a matter of time\n\n\n\nWhat about read only slaves where there's a master with 100+spinning\nhard drives \"getting it right\" and you need a half dozen or so read\nslaves? I can imagine that being ok, as long as you don't restart a\nserver after a crash without checking on it.\n\n\nA read-only slave isn't read-only, is it?\n\nI mean, c'mon - how does the data get there?\n\nIF you mean \"a server that only accepts SELECTs, does not accept UPDATEs or INSERTs, and on a crash **reloads the entire database from the master**\", then ok.\n\n\n\"ENTIRE database\"?\n\nDepends on your tablespace setup and schema usage pattern.\nIf:\n* 90% of your data tables are partitioned by date, and untouched a week after insert. Partitions are backed up incrementally.\n* The remaining 10% of it is backed up daily, and of that 9% can be re-generated from data elsewhere if data is lost.\n* System catalog and wal are on 'safest of safe' hardware.\n\nThen your 'bulk' data on a slave can be on less than flawless hardware. Simply restore the tables from the last week from the master or backup when the (rare) power failure occurs. The remaining data is safe, since it is not written to.\nSplit up your 10% of non-date partitioned data into what needs to be on safe hardware and what does not (maybe some indexes, etc).\n\nMost of the time, the incremental cost of getting a BBU is too small to not do it, so the above hardly applies. But if you have data that is known to be read-only, you can do many unconventional things with it safely.\n\n\nMost people who will do this won't reload it after a crash. They'll \"inspect\" the database and say \"ok\", and put it back online. Bad Karma will ensue in the future.\n\nAnyone going with something unconventional better know what they are doing and not just blindly plug it in and think everything will be OK. I'd never recommend unconventional setups for a user that wasn't an expert and understood the tradeoff.\n\n\nIncidentally, that risk is not theoretical either (I know about this one from hard experience. Fortunately the master was still ok and I was able to force a full-table copy.... I didn't like it as the database was a few hundred GB, but I had no choice.)\n\n\nBeen there with 10TB with hardware that should have been perfectly safe. 
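The tiered, mostly read-only layout described just above (bulk date partitions and rebuildable indexes on the riskier device, catalogs and WAL on fully trusted storage) can be sketched roughly like this; the paths, table and index names are hypothetical, and this is just the usual 8.x inheritance-style partitioning, not a recommendation for any particular hardware:

    -- bulk, rebuildable objects go on the cheaper/faster device
    CREATE TABLESPACE ssd_bulk LOCATION '/ssd/pg_bulk';

    -- indexes can live there too, since REINDEX can always recreate them
    ALTER INDEX events_lookup_idx SET TABLESPACE ssd_bulk;

    -- one month of date-partitioned fact data, written once and then read-only
    CREATE TABLE events_2010_08 (
        CHECK (event_time >= DATE '2010-08-01' AND event_time < DATE '2010-09-01')
    ) INHERITS (events) TABLESPACE ssd_bulk;

The parent table, system catalogs and WAL stay on the trustworthy storage; after a crash the partitions and indexes on the riskier device are restored or rebuilt rather than trusted.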
5 days of copying, and wishing that pg_dump supported lzo compression so that the dump portion had a chance at keeping up with the much faster restore portion with some level of compression on to save the copy bandwidth.\n\n-- Karl\n<karl.vcf>\n\n\nOn Aug 10, 2010, at 11:38 AM, Karl Denninger wrote:\n\nScott Marlowe wrote:\n\nOn Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <[email protected]> wrote:\n \n\nANY disk that says \"write is complete\" when it really is not is entirely\nunsuitable for ANY real database use.  It is simply a matter of time\n \n\n\nWhat about read only slaves where there's a master with 100+spinning\nhard drives \"getting it right\" and you need a half dozen or so read\nslaves? I can imagine that being ok, as long as you don't restart a\nserver after a crash without checking on it.\n \n\nA read-only slave isn't read-only, is it?\n\nI mean, c'mon - how does the data get there?\n\nIF you mean \"a server that only accepts SELECTs, does not accept\nUPDATEs or INSERTs, and on a crash **reloads the entire database from\nthe master**\", then ok.\"ENTIRE database\"?Depends on your tablespace setup and schema usage pattern.  If:* 90% of your data tables are partitioned by date, and untouched a week after insert.  Partitions are backed up incrementally.* The remaining 10% of it is backed up daily, and of that 9% can be re-generated from data elsewhere if data is lost.* System catalog and wal are on 'safest of safe' hardware.Then your 'bulk' data on a slave can be on less than flawless hardware.  Simply restore the tables from the last week from the master or backup when the (rare) power failure occurs.  The remaining data is safe, since it is not written to.Split up your 10% of non-date partitioned data into what needs to be on safe hardware and what does not (maybe some indexes, etc).Most of the time, the incremental cost of getting a BBU is too small to not do it, so the above hardly applies.  But if you have data that is known to be read-only, you can do many unconventional things with it safely.\n\nMost people who will do this won't reload it after a crash.  They'll\n\"inspect\" the database and say \"ok\", and put it back online.  Bad Karma\nwill ensue in the future.Anyone going with something unconventional better know what they are doing and not just blindly plug it in and think everything will be OK.  I'd never recommend unconventional setups for a user that wasn't an expert and understood the tradeoff.\n\nIncidentally, that risk is not theoretical either (I know about this\none from hard experience.  Fortunately the master was still ok and I\nwas able to force a full-table copy.... I didn't like it as the\ndatabase was a few hundred GB, but I had no choice.)\nBeen there with 10TB with hardware that should have been perfectly safe.  5 days of copying, and wishing that pg_dump supported lzo compression so that the dump portion had a chance at keeping up with the much faster restore portion with some level of compression on to save the copy bandwidth.\n-- Karl\n\n<karl.vcf>", "msg_date": "Wed, 11 Aug 2010 16:52:26 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Scott Carey wrote:\n>\n> On Aug 10, 2010, at 11:38 AM, Karl Denninger wrote:\n>\n> .....\n>>\n>> Most people who will do this won't reload it after a crash. They'll\n>> \"inspect\" the database and say \"ok\", and put it back online. 
Bad\n>> Karma will ensue in the future.\n>\n> Anyone going with something unconventional better know what they are\n> doing and not just blindly plug it in and think everything will be OK.\n> I'd never recommend unconventional setups for a user that wasn't an\n> expert and understood the tradeoff.\nTrue.\n>>\n>> Incidentally, that risk is not theoretical either (I know about this\n>> one from hard experience. Fortunately the master was still ok and I\n>> was able to force a full-table copy.... I didn't like it as the\n>> database was a few hundred GB, but I had no choice.)\n>\n> Been there with 10TB with hardware that should have been perfectly\n> safe. 5 days of copying, and wishing that pg_dump supported lzo\n> compression so that the dump portion had a chance at keeping up with\n> the much faster restore portion with some level of compression on to\n> save the copy bandwidth.\nPipe it through ssh -C\n\nPS: This works for SLONY and Bucardo too - set up a tunnel and then\nchange the port temporarily. This is especially useful when the DB\nbeing COPY'd across has big fat honking BYTEA fields in it, which\notherwise expand about 400% - or more - on the wire.\n\n-- Karl", "msg_date": "Wed, 11 Aug 2010 19:45:22 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "Scott Carey wrote:\n> What is the likelihood that your RAID card fails, or that the battery that reported 'good health' only lasts 5 minutes and you lose data before power is restored? What is the likelihood of human error?\n> \n\nThese are all things that happen sometimes, sure. The problem with the \ncheap SSDs is that they happen downright often if you actually test for \nit. If someone is aware of the risk and makes an informed decision, \nfine. But most of the time I see articles like the one that started \nthis thread that are oblivious to the issue, and that's really bad.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 12 Aug 2010 00:30:47 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "\nOn Aug 11, 2010, at 9:30 PM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> What is the likelihood that your RAID card fails, or that the battery that reported 'good health' only lasts 5 minutes and you lose data before power is restored? What is the likelihood of human error?\n>> \n> \n> These are all things that happen sometimes, sure. The problem with the \n> cheap SSDs is that they happen downright often if you actually test for \n> it. If someone is aware of the risk and makes an informed decision, \n> fine. But most of the time I see articles like the one that started \n> this thread that are oblivious to the issue, and that's really bad.\n> \n\nAgreed. There is a HUGE gap between \"ooh ssd's are fast, look!\" and engineering a solution that uses them properly with all their strengths and faults. 
And as 'gnuoytr' points out, there is a big difference between an Intel SSD and say, this thing: http://www.nimbusdata.com/products/s-class_overview.html \n\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n", "msg_date": "Thu, 12 Aug 2010 16:40:11 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On 13-8-2010 1:40 Scott Carey wrote:\n> Agreed. There is a HUGE gap between \"ooh ssd's are fast, look!\" and\n> engineering a solution that uses them properly with all their\n> strengths and faults. And as 'gnuoytr' points out, there is a big\n> difference between an Intel SSD and say, this thing:\n> http://www.nimbusdata.com/products/s-class_overview.html\n\n From the description it sounds as if its either FreeBSD or OpenSolaris \nwith ZFS with some webinterface-layer. That's not a bad thing per se, \nbut as the site suggests its 'only' $25k for the smallest (2.5TB?) \ndevice. That makes it very likely that it are \"off the shelf\" MLC flash \ndrives. Given the design of the device and the pricing it probably are \nyour average 2.5\"-drives with 100, 200 or 400GB capacity (maybe OCZ \nvertex 2 pro, which do have such a capacitor?), similar to the Intel SSD \nyou compared it to.\nAnd than we're basically back to square one, unless the devices have a \ncapacitor or ZFS works better with SSD-drives to begin with (it will at \nleast know silent data corruption did occur).\n\nThere are of course devices that are not built on top of normal disk \nform factor SSD-drives like the Ramsan devices or Sun's F5100.\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 13 Aug 2010 07:59:29 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "As a postscript to these tests.. I just tried the 500GB Monentus XT hybrid\nSSD/HDD drive. I had this fantasy that it would at least do better than the\n7200 rpm desktop drive.\n\nOh lord, my gut was wrong. The performance was inconsistent and never over\n2/3rds the performance of the slowest desktop drive.\n\nOn Sat, Aug 7, 2010 at 4:47 PM, Michael March <[email protected]> wrote:\n\n> If anyone is interested I just completed a series of benchmarks of stock\n> Postgresql running on a normal HDD vs a SSD.\n>\n> If you don't want to read the post, the summary is that SSDs are 5 to 7\n> times faster than a 7200RPM HDD drive under a pgbench load.\n>\n>\n> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n>\n> Is this what everyone else is seeing?\n>\n> Thanks!\n>\n> --\n> [email protected]\n>\n\n\n\n-- \n<admiral>\n\nMichael F. March ----- [email protected]\n\nAs a postscript to these tests.. I just tried the 500GB Monentus XT hybrid SSD/HDD drive. I had this fantasy that it would at least do better than the 7200 rpm desktop drive. Oh lord, my gut was wrong.  The performance was inconsistent and never over 2/3rds the performance of the slowest desktop drive. \nOn Sat, Aug 7, 2010 at 4:47 PM, Michael March <[email protected]> wrote:\nIf anyone is interested I just completed a series of benchmarks of stock Postgresql running on a normal HDD vs a SSD.  
If you don't want to read the post, the summary is that SSDs are 5 to 7 times faster than a 7200RPM HDD drive under a pgbench load.\nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html\n\nIs this what everyone else is seeing?Thanks!--\[email protected]\n-- <admiral>Michael F. March ----- [email protected]", "msg_date": "Thu, 12 Aug 2010 23:06:59 -0700", "msg_from": "Michael March <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD vs desktop\n\tHDD" } ]
[ { "msg_contents": "\nI'm trying to eke a little bit more performance out of an application, and \nI was wondering if there was a better way to do the following:\n\nI am trying to retrieve, for many sets of rows grouped on a couple of \nfields, the value of an ungrouped field where the row has the highest \nvalue in another ungrouped field. For instance, I have the following table \nsetup:\n\ngroup | whatever type\nvalue | whatever type\nnumber | int\nIndex: group\n\nI then have rows like this:\n\ngroup | value | number\n-------------------------------------\nFoo | foo | 1\nFoo | turnips | 2\nBar | albatross | 3\nBar | monkey | 4\n\nI want to receive results like this:\n\ngroup | value\n-----------------------\nFoo | turnips\nBar | monkey\n\nCurrently, I do this in my application by ordering by the number and only \nusing the last value. I imagine that this is something that can be done in \nthe new Postgres 9, with a sorted group by - something like this:\n\nSELECT group, LAST(value, ORDER BY number) FROM table GROUP BY group\n\nIs this something that is already built in, or would I have to write my \nown LAST aggregate function?\n\nMatthew\n\n-- \n The third years are wandering about all worried at the moment because they\n have to hand in their final projects. Please be sympathetic to them, say\n things like \"ha-ha-ha\", but in a sympathetic tone of voice \n -- Computer Science Lecturer\n", "msg_date": "Tue, 10 Aug 2010 16:40:16 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Sorted group by" }, { "msg_contents": "Matthew Wakeling wrote on 10.08.2010 17:40:\n> Currently, I do this in my application by ordering by the number and\n> only using the last value. I imagine that this is something that can be\n> done in the new Postgres 9, with a sorted group by - something like this:\n>\n> SELECT group, LAST(value, ORDER BY number) FROM table GROUP BY group\n>\n> Is this something that is already built in, or would I have to write my\n> own LAST aggregate function?\n\nNo. It's built in (8.4) and it's called Windowing functions:\nhttp://www.postgresql.org/docs/8.4/static/tutorial-window.html\nhttp://www.postgresql.org/docs/8.4/static/functions-window.html\n\nSELECT group, last_value(value) over(ORDER BY number)\nFROM table\n\nYou don't need the group by then (but you can apply e.g. an ORDER BY GROUP)\n\nThomas\n\n", "msg_date": "Tue, 10 Aug 2010 17:56:46 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "On Tue, 10 Aug 2010, Thomas Kellerer wrote:\n> No. It's built in (8.4) and it's called Windowing functions:\n> http://www.postgresql.org/docs/8.4/static/tutorial-window.html\n> http://www.postgresql.org/docs/8.4/static/functions-window.html\n>\n> SELECT group, last_value(value) over(ORDER BY number)\n> FROM table\n\nI may be mistaken, but as I understand it, a windowing function doesn't \nreduce the number of rows in the results?\n\nMatthew\n\n-- \n Don't worry about people stealing your ideas. If your ideas are any good,\n you'll have to ram them down people's throats. -- Howard Aiken\n", "msg_date": "Tue, 10 Aug 2010 17:03:40 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "On 10 August 2010 17:03, Matthew Wakeling <[email protected]> wrote:\n> On Tue, 10 Aug 2010, Thomas Kellerer wrote:\n>>\n>> No. 
It's built in (8.4) and it's called Windowing functions:\n>> http://www.postgresql.org/docs/8.4/static/tutorial-window.html\n>> http://www.postgresql.org/docs/8.4/static/functions-window.html\n>>\n>> SELECT group, last_value(value) over(ORDER BY number)\n>> FROM table\n>\n> I may be mistaken, but as I understand it, a windowing function doesn't\n> reduce the number of rows in the results?\n>\n\nI think you are mistaken. The last_value function is a window\nfunction aggregate. Give it a try.\n\n-- \nThom Brown\nRegistered Linux user: #516935\n", "msg_date": "Tue, 10 Aug 2010 17:06:16 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "On Tue, Aug 10, 2010 at 04:40:16PM +0100, Matthew Wakeling wrote:\n> \n> I'm trying to eke a little bit more performance out of an\n> application, and I was wondering if there was a better way to do the\n> following:\n> \n> I am trying to retrieve, for many sets of rows grouped on a couple\n> of fields, the value of an ungrouped field where the row has the\n> highest value in another ungrouped field. For instance, I have the\n> following table setup:\n> \n> group | whatever type\n> value | whatever type\n> number | int\n> Index: group\n> \n> I then have rows like this:\n> \n> group | value | number\n> -------------------------------------\n> Foo | foo | 1\n> Foo | turnips | 2\n> Bar | albatross | 3\n> Bar | monkey | 4\n> \n> I want to receive results like this:\n> \n> group | value\n> -----------------------\n> Foo | turnips\n> Bar | monkey\n> \n> Currently, I do this in my application by ordering by the number and\n> only using the last value. I imagine that this is something that can\n> be done in the new Postgres 9, with a sorted group by - something\n> like this:\n> \n> SELECT group, LAST(value, ORDER BY number) FROM table GROUP BY group\n> \n> Is this something that is already built in, or would I have to write\n> my own LAST aggregate function?\n\nthis is trivially done when usign 'distinct on':\nselect distinct on (group) *\nfrom table\norder by group desc, number desc;\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Tue, 10 Aug 2010 18:11:07 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "Matthew Wakeling wrote on 10.08.2010 18:03:\n> On Tue, 10 Aug 2010, Thomas Kellerer wrote:\n>> No. 
It's built in (8.4) and it's called Windowing functions:\n>> http://www.postgresql.org/docs/8.4/static/tutorial-window.html\n>> http://www.postgresql.org/docs/8.4/static/functions-window.html\n>>\n>> SELECT group, last_value(value) over(ORDER BY number)\n>> FROM table\n>\n> I may be mistaken, but as I understand it, a windowing function doesn't\n> reduce the number of rows in the results?\n\nYes you are right, a bit too quick on my side ;)\n\nBut this might be what you are after then:\n\nselect group_, value_\nfrom (\n select group_, value_, number_, row_number() over (partition by group_ order by value_ desc) as row_num\n from numbers\n) t\nwhere row_num = 1\norder by group_ desc\n\n", "msg_date": "Tue, 10 Aug 2010 18:22:27 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "On 10 August 2010 17:06, Thom Brown <[email protected]> wrote:\n> On 10 August 2010 17:03, Matthew Wakeling <[email protected]> wrote:\n>> On Tue, 10 Aug 2010, Thomas Kellerer wrote:\n>>>\n>>> No. It's built in (8.4) and it's called Windowing functions:\n>>> http://www.postgresql.org/docs/8.4/static/tutorial-window.html\n>>> http://www.postgresql.org/docs/8.4/static/functions-window.html\n>>>\n>>> SELECT group, last_value(value) over(ORDER BY number)\n>>> FROM table\n>>\n>> I may be mistaken, but as I understand it, a windowing function doesn't\n>> reduce the number of rows in the results?\n>>\n>\n> I think you are mistaken.  The last_value function is a window\n> function aggregate.  Give it a try.\n>\n\nD'oh, no, I'm mistaken. My brain has been malfunctioning today.\n\n-- \nThom Brown\nRegistered Linux user: #516935\n", "msg_date": "Tue, 10 Aug 2010 17:28:46 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote:\n \n> I'm trying to eke a little bit more performance out of an\n> application\n \nIn addition to the suggestion from Thomas Kellerer, it would be\ninteresting to try the following and see how performance compares\nusing real data.\n \nselect group, value from tbl x\n where not exists\n (select * from tbl y\n where y.group = x.group and y.number > x.number);\n \nWe have a lot of code using this general technique, and I'm curious\nwhether there are big gains to be had by moving to the windowing\nfunctions. 
(I suspect there are.)\n \n-Kevin\n", "msg_date": "Tue, 10 Aug 2010 11:37:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": " \nAnother couple of possible ways:\n\nSelect groupfield,value\n>From tbl x1\nWhere number = (select max(number) from tbl x2 where x2.groupfield=\nx1.groupfield)\n\n\n\nSelect groupfield,value\n>From tbl x1\nWhere (groupfield,number) in (select groupfield,max(number) from tbl group\nby groupfield)\n\nWhich is quickest?\nProbably best to try out and see.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Kevin Grittner\nSent: Tuesday, August 10, 2010 7:38 PM\nTo: Matthew Wakeling; [email protected]\nSubject: Re: [PERFORM] Sorted group by\n\nMatthew Wakeling <[email protected]> wrote:\n \n> I'm trying to eke a little bit more performance out of an application\n \nIn addition to the suggestion from Thomas Kellerer, it would be interesting\nto try the following and see how performance compares using real data.\n \nselect group, value from tbl x\n where not exists\n (select * from tbl y\n where y.group = x.group and y.number > x.number);\n \nWe have a lot of code using this general technique, and I'm curious whether\nthere are big gains to be had by moving to the windowing functions. (I\nsuspect there are.)\n \n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nNo virus found in this incoming message.\nChecked by AVG - www.avg.com\nVersion: 9.0.851 / Virus Database: 271.1.1/3061 - Release Date: 08/09/10\n21:35:00\n\n", "msg_date": "Tue, 10 Aug 2010 20:38:02 +0300", "msg_from": "Jonathan Blitz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "Original query:\n\nexplain analyse select * from tracker where objectid < 1200000;\n QUERY PLAN\n-----------------------------------------------------------------------\n Index Scan using tracker_objectid on tracker\n (cost=0.00..915152.62 rows=3684504 width=33)\n (actual time=0.061..5402.608 rows=3790872 loops=1)\n Index Cond: (objectid < 1200000)\n Total runtime: 9134.362 ms\n(3 rows)\n\nOn Tue, 10 Aug 2010, hubert depesz lubaczewski wrote:\n> select distinct on (group) *\n> from table\n> order by group desc, number desc;\n\nThis solution is rather obvious, and works on older versions of Postgres. \nThanks. 
However, the burden of sorting by two columns (actually, in our \napplication the group is two column, so sorting by three columns instead) \nmakes this significantly slower than just copying the whole data through \nour application (which effectively does a hash aggregation).\n\nexplain analyse select distinct on (objectid, fieldname) objectid, \nfieldname, sourcename, version from tracker where objectid < 1200000 order \nby objectid, fieldname, version desc;\n\n QUERY PLAN\n--------------------------------------------------------------------------\n Unique (cost=1330828.11..1357953.05 rows=361666 width=34)\n (actual time=12815.878..22452.737 rows=1782996 loops=1)\n -> Sort (cost=1330828.11..1339869.76 rows=3616658 width=34)\n (actual time=12815.873..16608.903 rows=3790872 loops=1)\n Sort Key: objectid, fieldname, version\n Sort Method: quicksort Memory: 420980kB\n -> Index Scan using tracker_objectid on tracker\n (cost=0.00..936861.47 rows=3616658 width=34)\n (actual time=0.061..5441.050 rows=3790872 loops=1)\n Index Cond: (objectid < 1200000)\n Total runtime: 24228.724 ms\n(7 rows)\n\nOn Tue, 10 Aug 2010, Thomas Kellerer wrote:\n> select group_, value_\n> from (\n> select group_, value_, number_, row_number() over (partition by group_ \n> order by value_ desc) as row_num\n> from numbers\n> ) t\n> where row_num = 1\n> order by group_ desc\n\nThis looks quite cute, however it is slightly slower than the DISTINCT ON \napproach.\n\nexplain analyse select objectid, fieldname, sourcename from (select \nobjectid, fieldname, sourcename, version, row_number() over (partition by \nobjectid, fieldname order by version desc) as row_num from tracker where \nobjectid < 1200000) as t where row_num = 1;\n QUERY PLAN\n-------------------------------------------------------------------------\n Subquery Scan t (cost=1330828.11..1457411.14 rows=18083 width=68)\n (actual time=12835.553..32220.075 rows=1782996 loops=1)\n Filter: (t.row_num = 1)\n -> WindowAgg (cost=1330828.11..1412202.92 rows=3616658 width=34)\n (actual time=12835.541..26471.802 rows=3790872 loops=1)\n -> Sort (cost=1330828.11..1339869.76 rows=3616658 width=34)\n (actual time=12822.560..16646.112 rows=3790872 loops=1)\n Sort Key: tracker.objectid, tracker.fieldname, tracker.version\n Sort Method: quicksort Memory: 420980kB\n -> Index Scan using tracker_objectid on tracker\n (cost=0.00..936861.47 rows=3616658 width=34)\n (actual time=0.067..5433.790 rows=3790872 loops=1)\n Index Cond: (objectid < 1200000)\n Total runtime: 34002.828 ms\n(9 rows)\n\nOn Tue, 10 Aug 2010, Kevin Grittner wrote:\n> select group, value from tbl x\n> where not exists\n> (select * from tbl y\n> where y.group = x.group and y.number > x.number);\n\nThis is a join, which is quite a bit slower:\n\nexplain analyse select objectid, fieldname, sourcename from tracker as a \nwhere not exists(select * from tracker as b where a.objectid = b.objectid \nand a.fieldname = b.fieldname and a.version < b.version and b.objectid < \n1200000) and a.objectid < 1200000;\n\n QUERY PLAN\n---------------------------------------------------------------------------\n Merge Anti Join (cost=2981427.73..3042564.32 rows=2411105 width=30)\n (actual time=24834.372..53939.131 rows=1802376 loops=1)\n Merge Cond: ((a.objectid = b.objectid) AND (a.fieldname = b.fieldname))\n Join Filter: (a.version < b.version)\n -> Sort (cost=1490713.86..1499755.51 rows=3616658 width=34)\n (actual time=12122.478..15944.255 rows=3790872 loops=1)\n Sort Key: a.objectid, a.fieldname\n Sort Method: quicksort Memory: 
420980kB\n -> Index Scan using tracker_objectid on tracker a\n (cost=0.00..1096747.23 rows=3616658 width=34)\n (actual time=0.070..5403.235 rows=3790872 loops=1)\n Index Cond: (objectid < 1200000)\n -> Sort (cost=1490713.86..1499755.51 rows=3616658 width=17)\n (actual time=12710.564..20952.841 rows=8344994 loops=1)\n Sort Key: b.objectid, b.fieldname\n Sort Method: quicksort Memory: 336455kB\n -> Index Scan using tracker_objectid on tracker b\n (cost=0.00..1096747.23 rows=3616658 width=17)\n (actual time=0.084..5781.844 rows=3790872 loops=1)\n Index Cond: (objectid < 1200000)\n Total runtime: 55756.482 ms\n(14 rows)\n\nOn Tue, 10 Aug 2010, Jonathan Blitz wrote:\n> Select groupfield,value\n> From tbl x1\n> Where number = (select max(number) from tbl x2 where x2.groupfield=\n> x1.groupfield)\n\nThis one effectively forces a nested loop join:\n\nexplain analyse select objectid, fieldname, sourcename from tracker as a \nwhere version = (select max(version) from tracker as b where a.objectid = \nb.objectid and a.fieldname = b.fieldname) and a.objectid < 1200000;\n\n QUERY PLAN\n-------------------------------------------------------------------------\n Index Scan using tracker_objectid on tracker a\n (cost=0.00..58443381.75 rows=18083 width=34)\n (actual time=6.482..59803.225 rows=1802376 loops=1)\n Index Cond: (objectid < 1200000)\n Filter: (version = (SubPlan 2))\n SubPlan 2\n -> Result (cost=15.89..15.90 rows=1 width=0)\n (actual time=0.011..0.012 rows=1 loops=3790872)\n InitPlan 1 (returns $2)\n -> Limit (cost=0.00..15.89 rows=1 width=4)\n (actual time=0.007..0.008 rows=1 loops=3790872)\n -> Index Scan Backward using tracker_all on tracker b\n (cost=0.00..31.78 rows=2 width=4)\n (actual time=0.005..0.005 rows=1 loops=3790872)\n Index Cond: (($0 = objectid) AND ($1 = fieldname))\n Filter: (version IS NOT NULL)\n Total runtime: 61649.116 ms\n(11 rows)\n\nOn Tue, 10 Aug 2010, Jonathan Blitz wrote:\n> Select groupfield,value\n> From tbl x1\n> Where (groupfield,number) in (select groupfield,max(number) from tbl group\n> by groupfield)\n\nThis is another join.\n\nexplain analyse select objectid, fieldname, sourcename from tracker where \n(objectid, fieldname, version) in (select objectid, fieldname, \nmax(version) from tracker group by objectid, fieldname);\n\nI terminated this query after about an hour. Here is the EXPLAIN:\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=55310973.80..72060974.32 rows=55323 width=30)\n Hash Cond: ((public.tracker.objectid = public.tracker.objectid)\n AND (public.tracker.fieldname = public.tracker.fieldname)\n AND (public.tracker.version = (max(public.tracker.version))))\n -> Seq Scan on tracker (cost=0.00..5310600.80 rows=293972480 width=34)\n -> Hash (cost=54566855.96..54566855.96 rows=29397248 width=40)\n -> GroupAggregate\n (cost=50965693.08..54272883.48 rows=29397248 width=17)\n -> Sort\n (cost=50965693.08..51700624.28 rows=293972480 width=17)\n Sort Key: public.tracker.objectid, public.tracker.fieldname\n -> Seq Scan on tracker\n (cost=0.00..5310600.80 rows=293972480 width=17)\n(8 rows)\n\nMatthew\n\n-- \n I quite understand I'm doing algebra on the blackboard and the usual response\n is to throw objects... If you're going to freak out... 
wait until party time\n and invite me along -- Computer Science Lecturer\n", "msg_date": "Wed, 11 Aug 2010 14:46:36 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorted group by" }, { "msg_contents": "Excerpts from Matthew Wakeling's message of mar ago 10 11:40:16 -0400 2010:\n\n> I am trying to retrieve, for many sets of rows grouped on a couple of \n> fields, the value of an ungrouped field where the row has the highest \n> value in another ungrouped field.\n\nI think this does what you want (schema is from the tenk1 table in the\nregression database):\n\nselect string4 as group,\n (array_agg(stringu1 order by unique1 desc))[1] as value\nfrom tenk1\ngroup by 1 ;\n\nPlease let me know how it performs with your data. The plan is rather simple:\n\nregression=# explain analyze select string4 as group, (array_agg(stringu1 order by unique1 desc))[1] as value from tenk1 group by 1 ;\n QUERY PLAN \n───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n GroupAggregate (cost=0.00..1685.16 rows=4 width=132) (actual time=22.825..88.922 rows=4 loops=1)\n -> Index Scan using ts4 on tenk1 (cost=0.00..1635.11 rows=10000 width=132) (actual time=0.135..33.188 rows=10000 loops=1)\n Total runtime: 89.348 ms\n(3 filas)\n\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 11 Aug 2010 11:54:17 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorted group by" } ]
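For completeness on the original question in this thread ("would I have to write my own LAST aggregate function?"): there is no built-in last() aggregate, but a minimal one is only a few lines, and on 9.0 (where ORDER BY is allowed inside an aggregate call) it gives essentially the syntax from the first message. A rough sketch, with table and column names just following the examples used earlier in the thread:

    CREATE FUNCTION last_agg(anyelement, anyelement)
    RETURNS anyelement
    LANGUAGE sql IMMUTABLE
    AS 'SELECT $2';

    CREATE AGGREGATE last(anyelement) (
        SFUNC = last_agg,
        STYPE = anyelement
    );

    -- 9.0 and later: ORDER BY inside the aggregate call
    SELECT group_, last(value_ ORDER BY number_) AS value_
    FROM tbl
    GROUP BY group_;

On 8.4 and earlier the aggregate itself still works, but the ordering has to come from an already-sorted subquery instead of an ORDER BY inside the call, at which point DISTINCT ON is usually the simpler spelling.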
[ { "msg_contents": "With a 16 CPU, 32 GB Solaris Sparc server, is there any conceivable\nreason to use a 32 bit build rather than a 64 bit build? Apparently the\nSun PostgreSQL package includes a README that indicates you might want\nto think twice about using 64 bit because it is slower -- this seems\nlike outdated advice, but I was looking for confirmation one way or the\nother.\n\nAlso semi-related question: when building from source, using gcc,\nenabling debug (but *not* assert) is normally not much of a performance\nhit. Is the same true if you build with the Sun CC?\n\nThanks in advance for any thoughts/experiences.\n\nJoe\n\n\n\n-- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & 24x7 Support\n\n", "msg_date": "Wed, 11 Aug 2010 08:17:43 -0700", "msg_from": "Joseph Conway <[email protected]>", "msg_from_op": true, "msg_subject": "32 vs 64 bit build on Solaris Sparc" }, { "msg_contents": "Joseph Conway <[email protected]> writes:\n> Also semi-related question: when building from source, using gcc,\n> enabling debug (but *not* assert) is normally not much of a performance\n> hit. Is the same true if you build with the Sun CC?\n\nMost non-gcc compilers disable optimization altogether if you enable\ndebug :-(. Perhaps that isn't true of Sun's, but I'd check its\ndocumentation before considering --enable-debug for a production build.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Aug 2010 11:23:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 32 vs 64 bit build on Solaris Sparc " }, { "msg_contents": "Hi Joe,\n\nthe general rule on Solaris SPARC is:\n - if you need to address a big size of memory (over 4G): compile in 64bit\n - otherwise: compile in 32bit ;-)\n\nIt's true that 32bit code will run faster comparing to 64bit ont the\n64bit SPARC - you'll operate with 2 times shorter addresses, and in\nsome cases SPARC will be able to execute 2 operations in parallel on\n32bit code, while it'll be still one operation on 64bit code.. - But\nit's all about the code, because once you start to do I/O requests all\nkind of optimization on the instructions will be lost due I/O latency\n;-))\n\nSo, as usual, a real answer in each case may be obtained only by a real test..\nJust test both versions and you'll see yourself what is a valid in\nyour case :-))\n\nSame problem regarding compilers: in some cases GCC4 will give a\nbetter result, in some cases Sun Studio will be better (there are many\nposts in blogs about optimal compiler options to use).. - don't\nhesitate to try and don't forget to share here with others :-))\n\nRgds,\n-Dimitri\n\n\nOn 8/11/10, Joseph Conway <[email protected]> wrote:\n> With a 16 CPU, 32 GB Solaris Sparc server, is there any conceivable\n> reason to use a 32 bit build rather than a 64 bit build? Apparently the\n> Sun PostgreSQL package includes a README that indicates you might want\n> to think twice about using 64 bit because it is slower -- this seems\n> like outdated advice, but I was looking for confirmation one way or the\n> other.\n>\n> Also semi-related question: when building from source, using gcc,\n> enabling debug (but *not* assert) is normally not much of a performance\n> hit. 
Is the same true if you build with the Sun CC?\n>\n> Thanks in advance for any thoughts/experiences.\n>\n> Joe\n>\n>\n>\n> --\n> Joe Conway\n> credativ LLC: http://www.credativ.us\n> Linux, PostgreSQL, and general Open Source\n> Training, Service, Consulting, & 24x7 Support\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 12 Aug 2010 09:32:39 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 32 vs 64 bit build on Solaris Sparc" } ]
[ { "msg_contents": "A number of amusing aspects to this discussion.\n\n- I've carried out similar tests using the Intel X-25M with both PG and DB2 (both on linux). While it is a simple matter to build parallel databases on DB2, on HDD and SSD, with buffers and tablespaces and logging and on and on set to recreate as many scenarios as one wishes using a single engine instance, not so for PG. While PG is the \"best\" OS database, from a tuning and admin point of view there's rather a long way to go. No one should think that retail SSD should be used to support an enterprise database. People have gotten lulled into thinking otherwise as a result of the blurring of the two use cases in the HDD world where the difference is generally just QA.\n\n- All flash SSD munge the byte stream, some (SandForce controlled in particular) more than others. Industrial strength flash SSD can have 64 internal channels, written in parallel; they don't run on commodity controllers. Treating SSD as just a faster HDD is a trip on the road to perdition. Industrial strength (DRAM) SSDs have been used by serious database folks for a couple of decades, but not the storefront semi-professionals who pervade the web start up world. \n\n- The value of SSD in the database world is not as A Faster HDD(tm). Never was, despite the naive' who assert otherwise. The value of SSD is to enable BCNF datastores. Period. If you're not going to do that, don't bother. Silicon storage will never reach equivalent volumetric density, ever. SSD will never be useful in the byte bloat world of xml and other flat file datastores (resident in databases or not). Industrial strength SSD will always be more expensive/GB, and likely by a lot. (Re)factoring to high normalization strips out an order of magnitude of byte bloat, increases native data integrity by as much, reduces much of the redundant code, and puts the ACID where it belongs. All good things, but not effortless.\n\nYou're arguing about the wrong problem. Sufficiently bulletproof flash SSD exist and have for years, but their names are not well known (no one on this thread has named any), but neither the Intel parts nor any of their retail cousins have any place in the mix except development machines. Real SSD have MTBFs measured in decades; OEMs have qualified such parts, but you won't find them on the shelf at Best Buy. You need to concentrate on understanding what can be done with such drives that can't be done with vanilla HDD that cost 1/50 the dollars. Just being faster won't be the answer. Removing the difference between sequential file processing and true random access is what makes SSD worth the bother; makes true relational datastores second nature rather than rocket science.\n\nRobert\n", "msg_date": "Wed, 11 Aug 2010 20:53:56 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Completely un-tuned Postgresql\n\tbenchmark results: SSD \tvs desktop HDD" }, { "msg_contents": "[email protected] wrote:\n> Sufficiently bulletproof flash SSD exist and have for years, but their names are not well known (no one on this thread has named any)\n\nThe models perceived as bulletproof are the really dangerous ones to \ndeploy. First, people let their guard down and stop being as paranoid \nas they should be when they use them. Second, it becomes much more \ndifficult for them to justify buying more than one of the uber-SSD. 
\nThat combination makes it easier to go back to having a single copy of \ntheir data, and there's a really bad road to wander down.\n\nThe whole idea that kicked off this thread was to enable building \nsystems cheap enough to allow making more inexpensive copies of the \ndata. My systems at home for example follow this model to some degree. \nThere's not a single drive more expensive than $100 to be found here, \nbut everything important to me is sitting on four of them in two systems \nwithin seconds after I save it. However, even here I've found it worth \ndropping enough money for a real battery-backed write cache, to reduce \nthe odds of write corruption on the more important of the servers. Not \ndoing so would be a dangerously cheap decision. That's similar to how I \nfeel about SSDs right now too. You need them to be expensive enough \nthat corruption is unusual rather than expected after a crash--it's \nridiculous to not spend enough to get something that's not completely \nbroken by design--while not spending so much that you can't afford to \ndeploy many of them. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 12 Aug 2010 00:49:06 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On 12-8-2010 2:53 [email protected] wrote:\n> - The value of SSD in the database world is not as A Faster HDD(tm).\n> Never was, despite the naive' who assert otherwise. The value of SSD\n> is to enable BCNF datastores. Period. If you're not going to do\n> that, don't bother. Silicon storage will never reach equivalent\n> volumetric density, ever. SSD will never be useful in the byte bloat\n> world of xml and other flat file datastores (resident in databases or\n> not). Industrial strength SSD will always be more expensive/GB, and\n> likely by a lot. (Re)factoring to high normalization strips out an\n> order of magnitude of byte bloat, increases native data integrity by\n> as much, reduces much of the redundant code, and puts the ACID where\n> it belongs. All good things, but not effortless.\n\nIt is actually quite common to under-utilize (short stroke) hard drives \nin the enterprise world. Simply because 'they' need more IOps per amount \nof data than a completely utilized disk can offer.\nAs such the expense/GB can be much higher than simply dividing the \ncapacity by its price (and if you're looking at fiber channel disks, \nthat price is quite high already). And than it is relatively easy to \nfind enterprise SSD's with better pricing for the whole system as soon \nas the IOps are more important than the capacity.\n\nSo in the current market, you may already be better off, price-wise, \nwith (expensive) SSD if you need IOps rather than huge amounts of \nstorage. 
And while you're in both cases not comparing separate disks to \nSSD, you're replacing a 'disk based storage system' with a '(flash) \nmemory based storage system' and it basically becomes 'A Faster HDD' ;)\nBut you're right, that for data-heavy applications, completely replacing \nHDD's with some form of SSD is not going to happen soon, maybe never.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 12 Aug 2010 09:22:19 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": " On 10-08-12 03:22 AM, Arjen van der Meijden wrote:\n> On 12-8-2010 2:53 [email protected] wrote:\n>> - The value of SSD in the database world is not as A Faster HDD(tm).\n>> Never was, despite the naive' who assert otherwise. The value of SSD\n>> is to enable BCNF datastores. Period. If you're not going to do\n>> that, don't bother. Silicon storage will never reach equivalent\n>> volumetric density, ever. SSD will never be useful in the byte bloat\n>> world of xml and other flat file datastores (resident in databases or\n>> not). Industrial strength SSD will always be more expensive/GB, and\n>> likely by a lot. (Re)factoring to high normalization strips out an\n>> order of magnitude of byte bloat, increases native data integrity by\n>> as much, reduces much of the redundant code, and puts the ACID where\n>> it belongs. All good things, but not effortless.\n>\n> It is actually quite common to under-utilize (short stroke) hard \n> drives in the enterprise world. Simply because 'they' need more IOps \n> per amount of data than a completely utilized disk can offer.\n> As such the expense/GB can be much higher than simply dividing the \n> capacity by its price (and if you're looking at fiber channel disks, \n> that price is quite high already). And than it is relatively easy to \n> find enterprise SSD's with better pricing for the whole system as soon \n> as the IOps are more important than the capacity.\n\nAnd when you compare the ongoing operational costs of rack space, \npowering and cooling for big arrays full of spinning disks to flash \nbased solutions the price comparison evens itself out even more.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Thu, 12 Aug 2010 08:35:14 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "On Thu, 12 Aug 2010, Brad Nicholson wrote:\n\n> On 10-08-12 03:22 AM, Arjen van der Meijden wrote:\n>> On 12-8-2010 2:53 [email protected] wrote:\n>>> - The value of SSD in the database world is not as A Faster HDD(tm).\n>>> Never was, despite the naive' who assert otherwise. The value of SSD\n>>> is to enable BCNF datastores. Period. If you're not going to do\n>>> that, don't bother. Silicon storage will never reach equivalent\n>>> volumetric density, ever. SSD will never be useful in the byte bloat\n>>> world of xml and other flat file datastores (resident in databases or\n>>> not). Industrial strength SSD will always be more expensive/GB, and\n>>> likely by a lot. (Re)factoring to high normalization strips out an\n>>> order of magnitude of byte bloat, increases native data integrity by\n>>> as much, reduces much of the redundant code, and puts the ACID where\n>>> it belongs. 
All good things, but not effortless.\n>> \n>> It is actually quite common to under-utilize (short stroke) hard drives in \n>> the enterprise world. Simply because 'they' need more IOps per amount of \n>> data than a completely utilized disk can offer.\n>> As such the expense/GB can be much higher than simply dividing the capacity \n>> by its price (and if you're looking at fiber channel disks, that price is \n>> quite high already). And than it is relatively easy to find enterprise \n>> SSD's with better pricing for the whole system as soon as the IOps are more \n>> important than the capacity.\n>\n> And when you compare the ongoing operational costs of rack space, powering \n> and cooling for big arrays full of spinning disks to flash based solutions \n> the price comparison evens itself out even more.\n\ncheck your SSD specs, some of the high performance ones draw quite a bit \nof power.\n\nDavid Lang\n\n", "msg_date": "Tue, 17 Aug 2010 23:37:26 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Completely un-tuned Postgresql benchmark results: SSD\n\tvs desktop HDD" }, { "msg_contents": "If you can cite a specific device that draws more than 10% of the equivalently performing (e.g., short stroked) array, I would be very interested. There may be a DRAM SSD that draws more than a flash SSD, but I'd be really surprised to find a flash SSD that draws the same as any HDD, even at gross capacity.\n\nRobert\n\n---- Original message ----\n>Date: Tue, 17 Aug 2010 23:37:26 -0700 (PDT)\n>From: [email protected] (on behalf of [email protected])\n>Subject: Re: [PERFORM] Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD \n>To: Brad Nicholson <[email protected]>\n>Cc: [email protected]\n>\n>On Thu, 12 Aug 2010, Brad Nicholson wrote:\n>\n>> On 10-08-12 03:22 AM, Arjen van der Meijden wrote:\n>>> On 12-8-2010 2:53 [email protected] wrote:\n>>>> - The value of SSD in the database world is not as A Faster HDD(tm).\n>>>> Never was, despite the naive' who assert otherwise. The value of SSD\n>>>> is to enable BCNF datastores. Period. If you're not going to do\n>>>> that, don't bother. Silicon storage will never reach equivalent\n>>>> volumetric density, ever. SSD will never be useful in the byte bloat\n>>>> world of xml and other flat file datastores (resident in databases or\n>>>> not). Industrial strength SSD will always be more expensive/GB, and\n>>>> likely by a lot. (Re)factoring to high normalization strips out an\n>>>> order of magnitude of byte bloat, increases native data integrity by\n>>>> as much, reduces much of the redundant code, and puts the ACID where\n>>>> it belongs. All good things, but not effortless.\n>>> \n>>> It is actually quite common to under-utilize (short stroke) hard drives in \n>>> the enterprise world. Simply because 'they' need more IOps per amount of \n>>> data than a completely utilized disk can offer.\n>>> As such the expense/GB can be much higher than simply dividing the capacity \n>>> by its price (and if you're looking at fiber channel disks, that price is \n>>> quite high already). 
And than it is relatively easy to find enterprise \n>>> SSD's with better pricing for the whole system as soon as the IOps are more \n>>> important than the capacity.\n>>\n>> And when you compare the ongoing operational costs of rack space, powering \n>> and cooling for big arrays full of spinning disks to flash based solutions \n>> the price comparison evens itself out even more.\n>\n>check your SSD specs, some of the high performance ones draw quite a bit \n>of power.\n>\n>David Lang\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 18 Aug 2010 07:49:19 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Completely un-tuned Postgresql\n\tbenchmark results: SSD \tvs desktop HDD" } ]
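As a small footnote to the flush discussion in this thread: before benchmarking any drive, SSD or otherwise, it is worth confirming from SQL that the server is actually asking for the flushes whose honesty is being debated (these are just the stock settings, nothing vendor-specific):

    SHOW fsync;                -- must be on for the crash-safety guarantees to apply
    SHOW synchronous_commit;   -- off trades a few seconds of commits for speed, not integrity
    SHOW wal_sync_method;      -- how the flush request is issued to the OS and drive

None of this detects a drive that acknowledges flushes it has not performed - that still needs an external pull-the-plug style test - but it rules out the case where an impressive benchmark number simply means fsync was turned off.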
[ { "msg_contents": "Ref these two queries against a view:\n\n-- QUERY 1, executes < 0.5 secs\nSELECT *\nFROM mdx_core.vw_provider AS p\nWHERE provider_id IN (13083101)\n\n-- QUERY 2, executes > 13.5 secs\nSELECT *\nFROM mdx_core.vw_provider AS p\nWHERE provider_id IN (SELECT 13083101)\n\nI am using the simple IN (SELECT n) in QUERY 2 to simplify the problem. I \nnoticed the oddity of the behaviour when I used a proper \"IN (SELECT myId \nFROM myTable)\" but the planner shows the same behaviour even if not \nselecting from a table - just the SELECT keyword is enough.\n\nPlans are below. The view has an internal UNION.\nAny explanation as to why this happens?\n\nThe actualt view is listed at the very bottom, if relevant.\n\nCarlo\n\n\n\nQUERY 1 PLAN\n\"Unique (cost=25.48..25.69 rows=2 width=417) (actual time=0.180..0.190\nrows=2 loops=1)\"\n\" -> Sort (cost=25.48..25.48 rows=2 width=417) (actual time=0.179..0.180\nrows=2 loops=1)\"\n\" Sort Key: \"*SELECT* 1\".provider_id, (NULL::integer), \"*SELECT*\n1\".master_id, \"*SELECT* 1\".client_ids, \"*SELECT* 1\".upin, \"*SELECT*\n1\".medical_education_number, \"*SELECT* 1\".abmsuid, \"*SELECT* 1\".npi,\n\"*SELECT* 1\".npi_status_code, \"*SELECT* 1\".cc_id, \"*SELECT* 1\".aoa_id,\n\"*SELECT* 1\".last_name, \"*SELECT* 1\".first_name, \"*SELECT* 1\".middle_name,\n\"*SELECT* 1\".suffix, \"*SELECT* 1\".display_name, \"*SELECT* 1\".display_title,\n\"*SELECT* 1\".nickname, \"*SELECT* 1\".familiar_name, \"*SELECT* 1\".pubmed_name,\n\"*SELECT* 1\".master_name, \"*SELECT* 1\".display_name_orig, (NULL::text),\n\"*SELECT* 1\".gender, \"*SELECT* 1\".birth_year, \"*SELECT* 1\".birth_month,\n\"*SELECT* 1\".birth_day, \"*SELECT* 1\".clinical_interest, \"*SELECT*\n1\".research_interest, \"*SELECT* 1\".summary, \"*SELECT* 1\".comments, \"*SELECT*\n1\".degree_types, \"*SELECT* 1\".provider_type_ids, \"*SELECT*\n1\".provider_status_code, \"*SELECT* 1\".provider_status_year, \"*SELECT*\n1\".created, \"*SELECT* 1\".unique_flag, \"*SELECT* 1\".is_locked, \"*SELECT*\n1\".provider_standing_code, \"*SELECT* 1\".impt_source_date, \"*SELECT*\n1\".input_resource_id, \"*SELECT* 1\".input_source_ids\"\n\" Sort Method: quicksort Memory: 27kB\"\n\" -> Append (cost=0.00..25.47 rows=2 width=417) (actual\ntime=0.078..0.143 rows=2 loops=1)\"\n\" -> Subquery Scan \"*SELECT* 1\" (cost=0.00..8.59 rows=1\nwidth=408) (actual time=0.078..0.079 rows=1 loops=1)\"\n\" -> Index Scan using provider_provider_id_idx on\nprovider p (cost=0.00..8.58 rows=1 width=408) (actual time=0.076..0.077\nrows=1 loops=1)\"\n\" Index Cond: (provider_id = 13083101)\"\n\" -> Subquery Scan \"*SELECT* 2\" (cost=0.00..16.87 rows=1\nwidth=417) (actual time=0.061..0.062 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..16.86 rows=1 width=417)\n(actual time=0.055..0.056 rows=1 loops=1)\"\n\" -> Index Scan using\nprovider_name_pid_rec_stat_idx on provider_alias pa (cost=0.00..8.27 rows=1\nwidth=32) (actual time=0.047..0.047 rows=1 loops=1)\"\n\" Index Cond: (provider_id = 13083101)\"\n\" -> Index Scan using provider_provider_id_idx on\nprovider p (cost=0.00..8.58 rows=1 width=389) (actual time=0.005..0.006\nrows=1 loops=1)\"\n\" Index Cond: (p.provider_id = 13083101)\"\n\"Total runtime: 0.371 ms\"\n\nQUERY 2 PLAN\n\"Merge IN Join (cost=2421241.80..3142039.99 rows=30011 width=2032) (actual\ntime=13778.400..13778.411 rows=2 loops=1)\"\n\" Merge Cond: (\"*SELECT* 1\".provider_id = (13083101))\"\n\" -> Unique (cost=2421241.77..3066486.33 rows=6002275 width=417) (actual\ntime=13778.119..13778.372 rows=110 loops=1)\"\n\" -> 
Sort (cost=2421241.77..2436247.46 rows=6002275 width=417)\n(actual time=13778.118..13778.163 rows=110 loops=1)\"\n\" Sort Key: \"*SELECT* 1\".provider_id, (NULL::integer),\n\"*SELECT* 1\".master_id, \"*SELECT* 1\".client_ids, \"*SELECT* 1\".upin,\n\"*SELECT* 1\".medical_education_number, \"*SELECT* 1\".abmsuid, \"*SELECT*\n1\".npi, \"*SELECT* 1\".npi_status_code, \"*SELECT* 1\".cc_id, \"*SELECT*\n1\".aoa_id, \"*SELECT* 1\".last_name, \"*SELECT* 1\".first_name, \"*SELECT*\n1\".middle_name, \"*SELECT* 1\".suffix, \"*SELECT* 1\".display_name, \"*SELECT*\n1\".display_title, \"*SELECT* 1\".nickname, \"*SELECT* 1\".familiar_name,\n\"*SELECT* 1\".pubmed_name, \"*SELECT* 1\".master_name, \"*SELECT*\n1\".display_name_orig, (NULL::text), \"*SELECT* 1\".gender, \"*SELECT*\n1\".birth_year, \"*SELECT* 1\".birth_month, \"*SELECT* 1\".birth_day, \"*SELECT*\n1\".clinical_interest, \"*SELECT* 1\".research_interest, \"*SELECT* 1\".summary,\n\"*SELECT* 1\".comments, \"*SELECT* 1\".degree_types, \"*SELECT*\n1\".provider_type_ids, \"*SELECT* 1\".provider_status_code, \"*SELECT*\n1\".provider_status_year, \"*SELECT* 1\".created, \"*SELECT* 1\".unique_flag,\n\"*SELECT* 1\".is_locked, \"*SELECT* 1\".provider_standing_code, \"*SELECT*\n1\".impt_source_date, \"*SELECT* 1\".input_resource_id, \"*SELECT*\n1\".input_source_ids\"\n\" Sort Method: external merge Disk: 423352kB\"\n\" -> Append (cost=0.00..596598.30 rows=6002275 width=417)\n(actual time=0.039..7879.715 rows=1312637 loops=1)\"\n\" -> Subquery Scan \"*SELECT* 1\" (cost=0.00..543238.96\nrows=5994998 width=408) (actual time=0.039..7473.664 rows=1305360 loops=1)\"\n\" -> Seq Scan on provider p (cost=0.00..483288.98\nrows=5994998 width=408) (actual time=0.037..6215.112 rows=1305360 loops=1)\"\n\" -> Subquery Scan \"*SELECT* 2\" (cost=0.00..53359.34\nrows=7277 width=417) (actual time=0.049..186.643 rows=7277 loops=1)\"\n\" -> Nested Loop (cost=0.00..53286.57 rows=7277\nwidth=417) (actual time=0.043..176.134 rows=7277 loops=1)\"\n\" -> Seq Scan on provider_alias pa\n(cost=0.00..157.77 rows=7277 width=32) (actual time=0.018..3.134 rows=7277\nloops=1)\"\n\" -> Index Scan using\nprovider_provider_id_idx on provider p (cost=0.00..7.29 rows=1 width=389)\n(actual time=0.021..0.021 rows=1 loops=7277)\"\n\" Index Cond: (p.provider_id =\npa.provider_id)\"\n\" -> Sort (cost=0.03..0.04 rows=1 width=4) (actual time=0.014..0.014\nrows=1 loops=1)\"\n\" Sort Key: (13083101)\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.001..0.001 rows=1 loops=1)\"\n\"Total runtime: 13959.905 ms\"\n\n\nREATE OR REPLACE VIEW mdx_core.vw_provider AS\nSELECT\n p.provider_id,\n NULL AS provider_alias_id,\n p.master_id,\n p.client_ids,\n p.upin,\n p.medical_education_number,\n p.abmsuid,\n p.npi,\n p.npi_status_code,\n p.cc_id,\n p.aoa_id,\n p.last_name,\n p.first_name,\n p.middle_name,\n p.suffix,\n p.display_name,\n p.display_title,\n p.nickname,\n p.familiar_name,\n p.pubmed_name,\n p.master_name,\n p.display_name_orig,\n NULL::text AS is_primary,\n p.gender,\n p.birth_year,\n p.birth_month,\n p.birth_day,\n p.clinical_interest,\n p.research_interest,\n p.summary,\n p.comments,\n p.degree_types,\n p.provider_type_ids,\n p.provider_status_code,\n p.provider_status_year,\n p.created,\n p.unique_flag,\n p.is_locked,\n p.provider_standing_code,\n p.impt_source_date,\n p.input_resource_id,\n p.input_source_ids\nFROM mdx_core.provider AS p\n\nUNION SELECT\n p.provider_id,\n pa.provider_alias_id,\n p.master_id,\n p.client_ids,\n p.upin,\n 
p.medical_education_number,\n p.abmsuid,\n p.npi,\n p.npi_status_code,\n p.cc_id,\n p.aoa_id,\n pa.last_name,\n pa.first_name,\n pa.middle_name,\n pa.suffix,\n p.display_name,\n p.display_title,\n p.nickname,\n p.familiar_name,\n p.pubmed_name,\n p.master_name,\n p.display_name_orig,\n pa.is_primary,\n p.gender,\n p.birth_year,\n p.birth_month,\n p.birth_day,\n p.clinical_interest,\n p.research_interest,\n p.summary,\n p.comments,\n p.degree_types,\n p.provider_type_ids,\n p.provider_status_code,\n p.provider_status_year,\n p.created,\n p.unique_flag,\n p.is_locked,\n p.provider_standing_code,\n p.impt_source_date,\n p.input_resource_id,\n p.input_source_ids\nFROM mdx_core.provider_alias AS pa\nJOIN mdx_core.provider AS p USING (provider_id);\n\n", "msg_date": "Thu, 12 Aug 2010 17:47:30 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very bad plan when using VIEW and IN (SELECT...*)" }, { "msg_contents": "\"Carlo Stonebanks\" <[email protected]> wrote:\n \n> SELECT *\n> FROM mdx_core.vw_provider AS p\n> WHERE provider_id IN (SELECT 13083101)\n> \n> I am using the simple IN (SELECT n) in QUERY 2 to simplify the\n> problem. I noticed the oddity of the behaviour when I used a\n> proper \"IN (SELECT myId FROM myTable)\"\n \nDid you try?:\n \nSELECT *\nFROM mdx_core.vw_provider AS p\nWHERE EXISTS (SELECT * FROM myTable WHERE myId = provider_id)\n \nFor any follow-up you should probably mention what version of\nPostgreSQL this is and how it's configured.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Fri, 13 Aug 2010 08:28:47 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very bad plan when using VIEW and IN\n\t (SELECT...*)" }, { "msg_contents": "Unfortunately I had double-posted this - originally in General.\n\nTom Lane pointed out (in PG-GENERAL) that the planner will take any IN\n(SELECT...) statement and do a JOIN, which is what is causing the planner\nproblem - even though the SELECT was just returning a constant. Obviously,\nthe real query this was testing was something more real-world.\n\nSO, I took my original query and turned it to this:\n\nSELECT *\nFROM mdx_core.vw_provider AS p\nWHERE provider_id = ANY array(\n SELECT provider_id\n FROM mdx_core.provider_alias\n)\n\nBLISTERINGLY fast!\n\nPG version is 8.3 - as for configuration, I didn't want to throw too much\ninfo as my concern was actually whether views were as klunky as other DB\nplatforms.\n\nCarlo\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: August 13, 2010 9:29 AM\nTo: [email protected]; Carlo Stonebanks\nSubject: Re: [PERFORM] Very bad plan when using VIEW and IN (SELECT...*)\n\n\"Carlo Stonebanks\" <[email protected]> wrote:\n \n> SELECT *\n> FROM mdx_core.vw_provider AS p\n> WHERE provider_id IN (SELECT 13083101)\n> \n> I am using the simple IN (SELECT n) in QUERY 2 to simplify the\n> problem. 
I noticed the oddity of the behaviour when I used a\n> proper \"IN (SELECT myId FROM myTable)\"\n \nDid you try?:\n \nSELECT *\nFROM mdx_core.vw_provider AS p\nWHERE EXISTS (SELECT * FROM myTable WHERE myId = provider_id)\n \nFor any follow-up you should probably mention what version of\nPostgreSQL this is and how it's configured.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n\n", "msg_date": "Fri, 13 Aug 2010 10:51:00 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very bad plan when using VIEW and IN (SELECT...*)" } ]
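To make the working rewrite from this thread easier to compare with the original form: Tom Lane's point is that the planner turns IN (SELECT ...) into a join against the UNION view, so the whole view is expanded and sorted (an external merge sort spilling roughly 423 MB to disk in the plan above) before the provider_id restriction can help. Carlo's ANY/ARRAY form evaluates the subquery first and probes the view with the resulting list. A minimal sketch using the thread's own objects; this restates the two queries side by side rather than proposing anything new:

-- Slow on 8.3: IN (SELECT ...) is planned as a join against the UNION
-- view, forcing the full append/sort/unique before any filtering.
SELECT *
FROM mdx_core.vw_provider AS p
WHERE p.provider_id IN (SELECT provider_id FROM mdx_core.provider_alias);

-- Fast on 8.3: the subquery runs once, and each resulting id is probed
-- through the provider_id indexes underneath the view.
SELECT *
FROM mdx_core.vw_provider AS p
WHERE p.provider_id = ANY (ARRAY(SELECT provider_id FROM mdx_core.provider_alias));

The ARRAY() form pulls the whole id list into memory, which is harmless at the ~7,000 rows provider_alias holds here but worth keeping in mind for much larger sets; Kevin's EXISTS variant is the other workaround worth timing on the same data.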
[ { "msg_contents": "List,\n\nI see benefits to using the 8.4 WINDOW clause in some cases but I'm having\ntrouble seeing if I could morph the following query using it.\n\nwxd0812=# EXPLAIN ANALYZE\nwxd0812-# SELECT * FROM\nwxd0812-# (SELECT DISTINCT ON (key1_id,key2_id) * FROM sid120.data ORDER BY\nkey1_id,key2_id,time_id DESC) x\nwxd0812-# WHERE NOT deleted;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan x (cost=739953.35..787617.03 rows=280375 width=52) (actual\ntime=45520.385..55412.327 rows=391931 loops=1)\n Filter: (NOT deleted)\n -> Unique (cost=739953.35..782009.53 rows=560750 width=52) (actual\ntime=45520.378..54780.824 rows=591037 loops=1)\n -> Sort (cost=739953.35..753972.08 rows=5607490 width=52) (actual\ntime=45520.374..50520.177 rows=5607490 loops=1)\n Sort Key: key1_id, key2_id, time_id\n -> Seq Scan on data (cost=0.00..111383.90 rows=5607490\nwidth=52) (actual time=0.074..6579.367 rows=5607490 loops=1) Total runtime:\n55721.241 ms\n(7 rows)\n\n\nThe purpose of this query is to identify the most recent versions of key1_id\n& key2_id pairs according to time_id which increases over time.\n\nTIA,\nGreg\n\nList,I see benefits to using the 8.4 WINDOW clause in some cases but I'm having trouble seeing if I could morph the following query using it.wxd0812=# EXPLAIN ANALYZEwxd0812-# SELECT * FROMwxd0812-#  (SELECT DISTINCT ON (key1_id,key2_id) * FROM sid120.data ORDER BY key1_id,key2_id,time_id DESC) x\nwxd0812-#  WHERE NOT deleted;                                                                QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan x  (cost=739953.35..787617.03 rows=280375 width=52) (actual time=45520.385..55412.327 rows=391931 loops=1)   Filter: (NOT deleted)   ->  Unique  (cost=739953.35..782009.53 rows=560750 width=52) (actual time=45520.378..54780.824 rows=591037 loops=1)\n         ->  Sort  (cost=739953.35..753972.08 rows=5607490 width=52) (actual time=45520.374..50520.177 rows=5607490 loops=1)               Sort Key: key1_id, key2_id, time_id               ->  Seq Scan on data  (cost=0.00..111383.90 rows=5607490 width=52) (actual time=0.074..6579.367 rows=5607490 loops=1) Total runtime: 55721.241 ms\n(7 rows)The purpose of this query is to identify the most recent versions of key1_id & key2_id pairs according to time_id which increases over time.  TIA,Greg", "msg_date": "Fri, 13 Aug 2010 10:43:58 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Can WINDOW be used?" }, { "msg_contents": "\n> wxd0812=# EXPLAIN ANALYZE\n> wxd0812-# SELECT * FROM\n> wxd0812-# (SELECT DISTINCT ON (key1_id,key2_id) * FROM sid120.data\n> ORDER BY key1_id,key2_id,time_id DESC) x\n> wxd0812-# WHERE NOT deleted;\n\nSELECT * FROM (\n SELECT data.*,\n rank() as rank over\n ( partition by key1_id, key2_id order by time_id DESC )\n FROM data\n) as rankings\nWHERE rank = 1 AND NOT deleted;\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 13 Aug 2010 14:17:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can WINDOW be used?" } ]
[ { "msg_contents": "Hi,\n\nI'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad. \n\nThe query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.\n\nPostgres Version 8.25\nCentOs 5.2\n16 Gig RAM\n192MB work_mem (increasing to 400MB didn't change the outcome)\nvery light use on this server, it ais a slave to a slony replicated master/slave setup.\n\nAgain, apologies if the formatting got munged, the attached text file has the same info.\n\nThanking you in advance for any help and suggestions.\n\nAaron\n\nexplain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where nswerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n\nLOG: duration: 5076038.709 ms statement: explain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where answerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)\n -> Sort (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)\n Sort Key: member.id\n -> Nested Loop IN Join (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)\n -> Seq Scan on member (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)\n -> Index Scan using asi_memberid_idx on answerselectinstance (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)\n Index Cond: (member.id = answerselectinstance.memberid)\n Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448))\n Total runtime: 5076034.203 ms\n(9 rows)\n\n Column | Type | Modifiers \n----------------+-----------------------------+------------------------------------------------------------\n memberid | integer | not null\n answerid | integer | not null\n taskinstanceid | integer | not null default 0\n created | timestamp without time zone | default \"timestamp\"('now'::text)\n id | integer | not null default nextval(('\"asi_id_seq\"'::text)::regclass)\nIndexes:\n \"asi_pkey\" PRIMARY KEY, btree (id)\n \"asi_answerid_idx\" btree (answerid)\n \"asi_memberid_idx\" btree (memberid)\n \"asi_taskinstanceid_idx\" btree (taskinstanceid)\nTriggers:\n _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE _bzzprod_cluster.denyaccess('_bzzprod_cluster')", "msg_date": "Mon, 16 Aug 2010 21:06:30 -0400", "msg_from": "\"Aaron Burnett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very poor performance" }, { "msg_contents": "This is weird - is there a particular combination of memberid/answered in answerselectindex that has a very high rowcount?\n\nFirst change I would suggest looking into would be to try changing sub-query logic 
to check existence and limit the result set of the sub-query to a single row\n\nSelect distinct(m.id)\n>From member m\nWhere exists (\n Select 1\n From answerselectinstance a\n Where a.member_id = m.id\n And a.answerid between 127443 and 127448\n Limit 1\n)\n\n\nIf member.id is a primary key, you can eliminate the \"distinct\" i.e. the sort.\n\n\nSecond would be to build a partial index on answersselectindex to index only the memberid's you are interested in:\n\n\"Create index <new_index_name> on answersselectindex(memberid) where answerid between 127443 and 127448\"\n\n\n\nMr\n\n\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Aaron Burnett\nSent: Monday, August 16, 2010 6:07 PM\nTo: [email protected]\nSubject: [PERFORM] Very poor performance\n\n\n\nHi,\n\nI'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad.\n\nThe query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.\n\nPostgres Version 8.25\nCentOs 5.2\n16 Gig RAM\n192MB work_mem (increasing to 400MB didn't change the outcome)\nvery light use on this server, it ais a slave to a slony replicated master/slave setup.\n\nAgain, apologies if the formatting got munged, the attached text file has the same info.\n\nThanking you in advance for any help and suggestions.\n\nAaron\n\nexplain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where nswerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n\nLOG: duration: 5076038.709 ms statement: explain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where answerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)\n -> Sort (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)\n Sort Key: member.id\n -> Nested Loop IN Join (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)\n -> Seq Scan on member (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)\n -> Index Scan using asi_memberid_idx on answerselectinstance (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)\n Index Cond: (member.id = answerselectinstance.memberid)\n Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448))\n Total runtime: 5076034.203 ms\n(9 rows)\n\n Column | Type | Modifiers\n----------------+-----------------------------+------------------------------------------------------------\n memberid | integer | not null\n answerid | integer | not null\n taskinstanceid | integer | not null default 0\n created | timestamp without time zone | default \"timestamp\"('now'::text)\n id | integer | not null default 
nextval(('\"asi_id_seq\"'::text)::regclass)\nIndexes:\n \"asi_pkey\" PRIMARY KEY, btree (id)\n \"asi_answerid_idx\" btree (answerid)\n \"asi_memberid_idx\" btree (memberid)\n \"asi_taskinstanceid_idx\" btree (taskinstanceid)\nTriggers:\n _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE _bzzprod_cluster.denyaccess('_bzzprod_cluster')\n\n\nVery poor performanceThis is weird – is there a particular combination of memberid/answered in answerselectindex that has a very high rowcount? First change I would suggest looking into would be to try changing sub-query logic to check existence and limit the result set of the sub-query to a single row Select distinct(m.id)From member mWhere exists (      Select 1      From answerselectinstance a      Where a.member_id = m.id      And a.answerid between 127443 and 127448      Limit 1)  If member.id is a primary key, you can eliminate the “distinct” i.e. the sort.  Second would be to build a partial index on answersselectindex to index only the memberid’s you are interested in: “Create index <new_index_name> on answersselectindex(memberid) where answerid between 127443 and 127448”   Mr   From: [email protected] [mailto:[email protected]] On Behalf Of Aaron BurnettSent: Monday, August 16, 2010 6:07 PMTo: [email protected]: [PERFORM] Very poor performance  Hi,I'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad.The query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.Postgres Version 8.25CentOs 5.216 Gig RAM192MB work_mem  (increasing to 400MB didn't change the outcome)very light use on this server, it ais a slave to a slony replicated master/slave setup.Again, apologies if the formatting got munged, the attached text file has the same info.Thanking you in advance for any help and suggestions.Aaronexplain analyze select distinct(id) from member  where id in (select memberid from answerselectinstance where   nswerid = 127443  OR  answerid = 127444  OR  answerid = 127445  OR  answerid = 127446  OR  answerid = 127447  OR  answerid = 127448   ) ;LOG:  duration: 5076038.709 ms  statement: explain analyze select distinct(id) from member  where id in (select memberid from answerselectinstance where   answerid = 127443  OR  answerid = 127444  OR  answerid = 127445  OR  answerid = 127446  OR  answerid = 127447  OR  answerid = 127448   ) ;                                                                              QUERY PLAN                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique  (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)   ->  Sort  (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)         Sort Key: member.id         ->  Nested Loop IN Join  (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)               ->  Seq Scan on member  (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)               ->  Index Scan using asi_memberid_idx on 
answerselectinstance  (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)                     Index Cond: (member.id = answerselectinstance.memberid)                     Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448)) Total runtime: 5076034.203 ms(9 rows)     Column     |            Type             |                         Modifiers                         ----------------+-----------------------------+------------------------------------------------------------ memberid       | integer                     | not null answerid       | integer                     | not null taskinstanceid | integer                     | not null default 0 created        | timestamp without time zone | default \"timestamp\"('now'::text) id             | integer                     | not null default nextval(('\"asi_id_seq\"'::text)::regclass)Indexes:    \"asi_pkey\" PRIMARY KEY, btree (id)    \"asi_answerid_idx\" btree (answerid)    \"asi_memberid_idx\" btree (memberid)    \"asi_taskinstanceid_idx\" btree (taskinstanceid)Triggers:    _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE _bzzprod_cluster.denyaccess('_bzzprod_cluster')", "msg_date": "Mon, 16 Aug 2010 18:51:26 -0700", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "Thanks Mark,\n\nYeah, I apologize, I forgot to mention a couple of things.\n\nm.id is the primary key but the biggest problem is that the query loops 626410 times because at one time people were allowed to delete member.id rows which now will break the application if the a.memberid comes out and it doesn't exist in the member table.\n\nThe version you sent me yields pretty much the same results.\n\nAll I really SHOULD have to do is query the a.memberid column to get distinct memberid and the query takes less than 2 seconds. The join to the member table and subsequnt 600K loops are the killer. The answerselectinstance table has 166 million rows... so the math is pretty easy on why it's painfully slow.\n\nOther than delting data in the answerselectinstance table to get rid of the orphan memberid's I was hoping someone had a better way to do this.\n\n\n-----Original Message-----\nFrom: Mark Rostron [mailto:[email protected]]\nSent: Mon 8/16/2010 9:51 PM\nTo: Aaron Burnett; [email protected]\nSubject: RE: Very poor performance\n \nThis is weird - is there a particular combination of memberid/answered in answerselectindex that has a very high rowcount?\n\nFirst change I would suggest looking into would be to try changing sub-query logic to check existence and limit the result set of the sub-query to a single row\n\nSelect distinct(m.id)\n>From member m\nWhere exists (\n Select 1\n From answerselectinstance a\n Where a.member_id = m.id\n And a.answerid between 127443 and 127448\n Limit 1\n)\n\n\nIf member.id is a primary key, you can eliminate the \"distinct\" i.e. 
the sort.\n\n\nSecond would be to build a partial index on answersselectindex to index only the memberid's you are interested in:\n\n\"Create index <new_index_name> on answersselectindex(memberid) where answerid between 127443 and 127448\"\n\n\n\nMr\n\n\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Aaron Burnett\nSent: Monday, August 16, 2010 6:07 PM\nTo: [email protected]\nSubject: [PERFORM] Very poor performance\n\n\n\nHi,\n\nI'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad.\n\nThe query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.\n\nPostgres Version 8.25\nCentOs 5.2\n16 Gig RAM\n192MB work_mem (increasing to 400MB didn't change the outcome)\nvery light use on this server, it ais a slave to a slony replicated master/slave setup.\n\nAgain, apologies if the formatting got munged, the attached text file has the same info.\n\nThanking you in advance for any help and suggestions.\n\nAaron\n\nexplain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where nswerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n\nLOG: duration: 5076038.709 ms statement: explain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where answerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)\n -> Sort (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)\n Sort Key: member.id\n -> Nested Loop IN Join (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)\n -> Seq Scan on member (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)\n -> Index Scan using asi_memberid_idx on answerselectinstance (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)\n Index Cond: (member.id = answerselectinstance.memberid)\n Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448))\n Total runtime: 5076034.203 ms\n(9 rows)\n\n Column | Type | Modifiers\n----------------+-----------------------------+------------------------------------------------------------\n memberid | integer | not null\n answerid | integer | not null\n taskinstanceid | integer | not null default 0\n created | timestamp without time zone | default \"timestamp\"('now'::text)\n id | integer | not null default nextval(('\"asi_id_seq\"'::text)::regclass)\nIndexes:\n \"asi_pkey\" PRIMARY KEY, btree (id)\n \"asi_answerid_idx\" btree (answerid)\n \"asi_memberid_idx\" btree (memberid)\n \"asi_taskinstanceid_idx\" btree (taskinstanceid)\nTriggers:\n _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE 
_bzzprod_cluster.denyaccess('_bzzprod_cluster')\n\n\n\n\n\n\n\nRE: Very poor performance\n\n\n\n\nThanks Mark,\n\nYeah, I apologize, I forgot to mention a couple of things.\n\nm.id is the primary key but the biggest problem is that the query loops 626410 times because at one time people were allowed to delete member.id rows which now will break the application if the a.memberid comes out and it doesn't exist in the member table.\n\nThe version you sent me yields pretty much the same results.\n\nAll I really SHOULD have to do is query the a.memberid column to get distinct memberid and the query takes less than 2 seconds. The join to the member table and subsequnt 600K loops are the killer. The answerselectinstance table has 166 million rows... so the math is pretty easy on why it's painfully slow.\n\nOther than delting data in the answerselectinstance table to get rid of the orphan memberid's I was hoping someone had a better way to do this.\n\n\n-----Original Message-----\nFrom: Mark Rostron [mailto:[email protected]]\nSent: Mon 8/16/2010 9:51 PM\nTo: Aaron Burnett; [email protected]\nSubject: RE: Very poor performance\n\nThis is weird - is there a particular combination of memberid/answered in answerselectindex that has a very high rowcount?\n\nFirst change I would suggest looking into would be to try changing sub-query logic to check existence and limit the result set of the sub-query to a single row\n\nSelect distinct(m.id)\n>From member m\nWhere exists (\n      Select 1\n      From answerselectinstance a\n      Where a.member_id = m.id\n      And a.answerid between 127443 and 127448\n      Limit 1\n)\n\n\nIf member.id is a primary key, you can eliminate the \"distinct\" i.e. the sort.\n\n\nSecond would be to build a partial index on answersselectindex to index only the memberid's you are interested in:\n\n\"Create index <new_index_name> on answersselectindex(memberid) where answerid between 127443 and 127448\"\n\n\n\nMr\n\n\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Aaron Burnett\nSent: Monday, August 16, 2010 6:07 PM\nTo: [email protected]\nSubject: [PERFORM] Very poor performance\n\n\n\nHi,\n\nI'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad.\n\nThe query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. 
The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.\n\nPostgres Version 8.25\nCentOs 5.2\n16 Gig RAM\n192MB work_mem  (increasing to 400MB didn't change the outcome)\nvery light use on this server, it ais a slave to a slony replicated master/slave setup.\n\nAgain, apologies if the formatting got munged, the attached text file has the same info.\n\nThanking you in advance for any help and suggestions.\n\nAaron\n\nexplain analyze select distinct(id) from member  where id in (select memberid from answerselectinstance where   nswerid = 127443  OR  answerid = 127444  OR  answerid = 127445  OR  answerid = 127446  OR  answerid = 127447  OR  answerid = 127448   ) ;\n\nLOG:  duration: 5076038.709 ms  statement: explain analyze select distinct(id) from member  where id in (select memberid from answerselectinstance where   answerid = 127443  OR  answerid = 127444  OR  answerid = 127445  OR  answerid = 127446  OR  answerid = 127447  OR  answerid = 127448   ) ;\n                                                                              QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)\n   ->  Sort  (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)\n         Sort Key: member.id\n         ->  Nested Loop IN Join  (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)\n               ->  Seq Scan on member  (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)\n               ->  Index Scan using asi_memberid_idx on answerselectinstance  (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)\n                     Index Cond: (member.id = answerselectinstance.memberid)\n                     Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448))\n Total runtime: 5076034.203 ms\n(9 rows)\n\n     Column     |            Type             |                         Modifiers\n----------------+-----------------------------+------------------------------------------------------------\n memberid       | integer                     | not null\n answerid       | integer                     | not null\n taskinstanceid | integer                     | not null default 0\n created        | timestamp without time zone | default \"timestamp\"('now'::text)\n id             | integer                     | not null default nextval(('\"asi_id_seq\"'::text)::regclass)\nIndexes:\n    \"asi_pkey\" PRIMARY KEY, btree (id)\n    \"asi_answerid_idx\" btree (answerid)\n    \"asi_memberid_idx\" btree (memberid)\n    \"asi_taskinstanceid_idx\" btree (taskinstanceid)\nTriggers:\n    _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE _bzzprod_cluster.denyaccess('_bzzprod_cluster')", "msg_date": "Mon, 16 Aug 2010 22:19:48 -0400", "msg_from": "\"Aaron Burnett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "\"Aaron Burnett\" <[email protected]> wrote:\n \n> Postgres Version 8.25\n \nDo you mean 8.2.5? 
(There is no PostgreSQL version 8.25.)\n \nIf you're concerned about performance and you're still on 8.2, you\nmight want to consider updating to a new major version.\n \n> 16 Gig RAM\n> 192MB work_mem (increasing to 400MB didn't change the outcome)\n \nWhat other non-default settings do you have?\n \n> explain analyze select distinct(id) from member where id in\n> (select memberid from answerselectinstance where nswerid =\n> 127443 OR answerid = 127444 OR answerid = 127445 OR answerid\n> = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n \nHow does this do?:\n \nexplain analyze\nselect distinct(m.id)\n from answerselectinstance a\n join member m\n on m.id = a.memberid\n where a.answerid between 127443 and 127448\n;\n \n-Kevin\n", "msg_date": "Tue, 17 Aug 2010 09:18:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n> \"Aaron Burnett\" <[email protected]> wrote:\n \n>> Postgres Version 8.25\n> \n> Do you mean 8.2.5? (There is no PostgreSQL version 8.25.)\n \nI just noticed that there's an 8.0.25 -- if that's what you're\nrunning, it's a bit silly trying to optimize individual slow queries\n-- performance has improved dramatically since then. Upgrade and\nsee if you still have any issues, and tune from there.\n \nBy the way, 8.0 is going out of support as soon as the 9.0 release\ncomes out; likely next month.\n \n-Kevin\n", "msg_date": "Tue, 17 Aug 2010 10:43:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> By the way, 8.0 is going out of support as soon as the 9.0 release\n> comes out; likely next month.\n\nSmall clarification on that: the plan is that there will be exactly\none more minor update of 8.0 (and 7.4). So it'll go out of support\nafter the next set of back-branch update releases, which will most\nlikely *not* be synchronized with 9.0.0 release.\n\n\"Next month\" might be an accurate statement anyway. 
I think we're\nprobably overdue to make updates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 2010 11:57:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance " }, { "msg_contents": "So, building the partial index will avoid the table lookup.\nCurrently answerselectindex only has single-column indexes on memberid and answerid, so any query with a predicate on both columns is gonna be forced to do an index lookup on one column followed by a table lookup to get the other one (which is what the plan shows).\nThis will be slower than if you can get it to lookup only an index.\nI suggested a partial index (and not a two-column index) to keep it small, and to reduce the likelihood that it will screw up another query.\nAnyway - good luck man.\n?\n\n\nFrom: Aaron Burnett [mailto:[email protected]]\nSent: Monday, August 16, 2010 7:20 PM\nTo: Mark Rostron; [email protected]\nSubject: RE: Very poor performance\n\n\n\nThanks Mark,\n\nYeah, I apologize, I forgot to mention a couple of things.\n\nm.id is the primary key but the biggest problem is that the query loops 626410 times because at one time people were allowed to delete member.id rows which now will break the application if the a.memberid comes out and it doesn't exist in the member table.\n\nThe version you sent me yields pretty much the same results.\n\nAll I really SHOULD have to do is query the a.memberid column to get distinct memberid and the query takes less than 2 seconds. The join to the member table and subsequnt 600K loops are the killer. The answerselectinstance table has 166 million rows... so the math is pretty easy on why it's painfully slow.\n\nOther than delting data in the answerselectinstance table to get rid of the orphan memberid's I was hoping someone had a better way to do this.\n\n\n-----Original Message-----\nFrom: Mark Rostron [mailto:[email protected]]\nSent: Mon 8/16/2010 9:51 PM\nTo: Aaron Burnett; [email protected]\nSubject: RE: Very poor performance\n\nThis is weird - is there a particular combination of memberid/answered in answerselectindex that has a very high rowcount?\n\nFirst change I would suggest looking into would be to try changing sub-query logic to check existence and limit the result set of the sub-query to a single row\n\nSelect distinct(m.id)\n>From member m\nWhere exists (\n Select 1\n From answerselectinstance a\n Where a.member_id = m.id\n And a.answerid between 127443 and 127448\n Limit 1\n)\n\n\nIf member.id is a primary key, you can eliminate the \"distinct\" i.e. the sort.\n\n\nSecond would be to build a partial index on answersselectindex to index only the memberid's you are interested in:\n\n\"Create index <new_index_name> on answersselectindex(memberid) where answerid between 127443 and 127448\"\n\n\n\nMr\n\n\n\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of Aaron Burnett\nSent: Monday, August 16, 2010 6:07 PM\nTo: [email protected]<mailto:[email protected]>\nSubject: [PERFORM] Very poor performance\n\n\n\nHi,\n\nI'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad.\n\nThe query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. 
The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.\n\nPostgres Version 8.25\nCentOs 5.2\n16 Gig RAM\n192MB work_mem (increasing to 400MB didn't change the outcome)\nvery light use on this server, it ais a slave to a slony replicated master/slave setup.\n\nAgain, apologies if the formatting got munged, the attached text file has the same info.\n\nThanking you in advance for any help and suggestions.\n\nAaron\n\nexplain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where nswerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n\nLOG: duration: 5076038.709 ms statement: explain analyze select distinct(id) from member where id in (select memberid from answerselectinstance where answerid = 127443 OR answerid = 127444 OR answerid = 127445 OR answerid = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)\n -> Sort (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)\n Sort Key: member.id\n -> Nested Loop IN Join (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)\n -> Seq Scan on member (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)\n -> Index Scan using asi_memberid_idx on answerselectinstance (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)\n Index Cond: (member.id = answerselectinstance.memberid)\n Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448))\n Total runtime: 5076034.203 ms\n(9 rows)\n\n Column | Type | Modifiers\n----------------+-----------------------------+------------------------------------------------------------\n memberid | integer | not null\n answerid | integer | not null\n taskinstanceid | integer | not null default 0\n created | timestamp without time zone | default \"timestamp\"('now'::text)\n id | integer | not null default nextval(('\"asi_id_seq\"'::text)::regclass)\nIndexes:\n \"asi_pkey\" PRIMARY KEY, btree (id)\n \"asi_answerid_idx\" btree (answerid)\n \"asi_memberid_idx\" btree (memberid)\n \"asi_taskinstanceid_idx\" btree (taskinstanceid)\nTriggers:\n _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE _bzzprod_cluster.denyaccess('_bzzprod_cluster')\n\n\nRE: Very poor performanceSo, building the partial index will avoid the table lookup.Currently answerselectindex only has single-column indexes on memberid and answerid, so any query with a predicate on both columns is gonna be forced to do an index lookup on one column followed by a table lookup to get the other one (which is what the plan shows).This will be slower than if you can get it to lookup only an index.I suggested a partial index (and not a two-column index) to keep it small, and to reduce the likelihood that it will screw up another query.Anyway – good luck man.?  
From: Aaron Burnett [mailto:[email protected]] Sent: Monday, August 16, 2010 7:20 PMTo: Mark Rostron; [email protected]: RE: Very poor performance  Thanks Mark,Yeah, I apologize, I forgot to mention a couple of things.m.id is the primary key but the biggest problem is that the query loops 626410 times because at one time people were allowed to delete member.id rows which now will break the application if the a.memberid comes out and it doesn't exist in the member table.The version you sent me yields pretty much the same results.All I really SHOULD have to do is query the a.memberid column to get distinct memberid and the query takes less than 2 seconds. The join to the member table and subsequnt 600K loops are the killer. The answerselectinstance table has 166 million rows... so the math is pretty easy on why it's painfully slow.Other than delting data in the answerselectinstance table to get rid of the orphan memberid's I was hoping someone had a better way to do this.-----Original Message-----From: Mark Rostron [mailto:[email protected]]Sent: Mon 8/16/2010 9:51 PMTo: Aaron Burnett; [email protected]: RE: Very poor performanceThis is weird - is there a particular combination of memberid/answered in answerselectindex that has a very high rowcount?First change I would suggest looking into would be to try changing sub-query logic to check existence and limit the result set of the sub-query to a single rowSelect distinct(m.id)From member mWhere exists (      Select 1      From answerselectinstance a      Where a.member_id = m.id      And a.answerid between 127443 and 127448      Limit 1)If member.id is a primary key, you can eliminate the \"distinct\" i.e. the sort.Second would be to build a partial index on answersselectindex to index only the memberid's you are interested in:\"Create index <new_index_name> on answersselectindex(memberid) where answerid between 127443 and 127448\"MrFrom: [email protected] [mailto:[email protected]] On Behalf Of Aaron BurnettSent: Monday, August 16, 2010 6:07 PMTo: [email protected]: [PERFORM] Very poor performanceHi,I'm hoping someone can offer some help here. The query and explain analyze and table layout are below and attached in a text file if the formatting is bad.The query is part of a bigger query that our front end runs. This is the part that takes forever (84 minutes in this case) to finish and more often than not the front end times out. 
The table (answerselectinstance) has 168664317 rows while the member table has 626435 rows.Postgres Version 8.25CentOs 5.216 Gig RAM192MB work_mem  (increasing to 400MB didn't change the outcome)very light use on this server, it ais a slave to a slony replicated master/slave setup.Again, apologies if the formatting got munged, the attached text file has the same info.Thanking you in advance for any help and suggestions.Aaronexplain analyze select distinct(id) from member  where id in (select memberid from answerselectinstance where   nswerid = 127443  OR  answerid = 127444  OR  answerid = 127445  OR  answerid = 127446  OR  answerid = 127447  OR  answerid = 127448   ) ;LOG:  duration: 5076038.709 ms  statement: explain analyze select distinct(id) from member  where id in (select memberid from answerselectinstance where   answerid = 127443  OR  answerid = 127444  OR  answerid = 127445  OR  answerid = 127446  OR  answerid = 127447  OR  answerid = 127448   ) ;                                                                              QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique  (cost=101357.24..101357.28 rows=9 width=4) (actual time=5075511.974..5075911.077 rows=143520 loops=1)   ->  Sort  (cost=101357.24..101357.26 rows=9 width=4) (actual time=5075511.971..5075644.323 rows=143520 loops=1)         Sort Key: member.id         ->  Nested Loop IN Join  (cost=0.00..101357.10 rows=9 width=4) (actual time=19.867..5075122.724 rows=143520 loops=1)               ->  Seq Scan on member  (cost=0.00..78157.65 rows=626265 width=4) (actual time=3.338..2003.582 rows=626410 loops=1)               ->  Index Scan using asi_memberid_idx on answerselectinstance  (cost=0.00..444.46 rows=9 width=4) (actual time=8.096..8.096 rows=0 loops=626410)                     Index Cond: (member.id = answerselectinstance.memberid)                     Filter: ((answerid = 127443) OR (answerid = 127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447) OR (answerid = 127448)) Total runtime: 5076034.203 ms(9 rows)     Column     |            Type             |                         Modifiers----------------+-----------------------------+------------------------------------------------------------ memberid       | integer                     | not null answerid       | integer                     | not null taskinstanceid | integer                     | not null default 0 created        | timestamp without time zone | default \"timestamp\"('now'::text) id             | integer                     | not null default nextval(('\"asi_id_seq\"'::text)::regclass)Indexes:    \"asi_pkey\" PRIMARY KEY, btree (id)    \"asi_answerid_idx\" btree (answerid)    \"asi_memberid_idx\" btree (memberid)    \"asi_taskinstanceid_idx\" btree (taskinstanceid)Triggers:    _bzzprod_cluster_denyaccess_301 BEFORE INSERT OR DELETE OR UPDATE ON answerselectinstance FOR EACH ROW EXECUTE PROCEDURE _bzzprod_cluster.denyaccess('_bzzprod_cluster')", "msg_date": "Tue, 17 Aug 2010 09:21:24 -0700", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "\nThanks for the response kevin. Answers interspersed below.\n\n\nOn 8/17/10 10:18 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> \"Aaron Burnett\" <[email protected]> wrote:\n> \n>> Postgres Version 8.25\n> \n> Do you mean 8.2.5? 
(There is no PostgreSQL version 8.25.)\n> \n\nYeah, missed a '.', it's 8.2.5\n\n> If you're concerned about performance and you're still on 8.2, you\n> might want to consider updating to a new major version.\n> \n>> 16 Gig RAM\n>> 192MB work_mem (increasing to 400MB didn't change the outcome)\n> \n> What other non-default settings do you have?\n\nmaintenance_work_mem = 1024MB\nmax_stack_depth = 8MB\nmax_fsm_pages = 8000000\nmax_fsm_relations = 2000\n\n> \n>> explain analyze select distinct(id) from member where id in\n>> (select memberid from answerselectinstance where nswerid =\n>> 127443 OR answerid = 127444 OR answerid = 127445 OR answerid\n>> = 127446 OR answerid = 127447 OR answerid = 127448 ) ;\n> \n> How does this do?:\n> \n> explain analyze\n> select distinct(m.id)\n> from answerselectinstance a\n> join member m\n> on m.id = a.memberid\n> where a.answerid between 127443 and 127448\n> ;\n> \n> -Kevin\n\nUnfortunately because of the way the application does the building of the\nvariables (answerid) and the query, these were only coincidentally in\nnumeric order, so the query and resulting plan will look more like this:\n(and it finishes fast)\n\nLOG: duration: 4875.943 ms statement: explain analyze select\ndistinct(m.id)\n from answerselectinstance a\n join member m\n on m.id = a.memberid\n where a.answerid = 127443 OR answerid = 127444 OR\na.answerid = 127445 OR a.answerid = 127446 OR a.answerid = 127447 OR\na.answerid = 127448;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------------------\n Unique (cost=265346.57..265884.69 rows=107623 width=4) (actual\ntime=4362.948..4751.042 rows=143563 loops=1)\n -> Sort (cost=265346.57..265615.63 rows=107623 width=4) (actual\ntime=4362.945..4489.002 rows=143820 loops=1)\n Sort Key: m.id\n -> Hash Join (cost=112462.72..256351.64 rows=107623 width=4)\n(actual time=2246.333..4134.240 rows=143820 loops=1)\n Hash Cond: (a.memberid = m.id)\n -> Bitmap Heap Scan on answerselectinstance a\n(cost=1363.57..142561.92 rows=107623 width=4) (actual time=84.082..1447.093\nrows=143820 loops=1)\n Recheck Cond: ((answerid = 127443) OR (answerid =\n127444) OR (answerid = 127445) OR (answerid = 127446) OR (answerid = 127447)\nOR (answerid = 127448))\n -> BitmapOr (cost=1363.57..1363.57 rows=107651\nwidth=0) (actual time=41.723..41.723 rows=0 loops=1)\n -> Bitmap Index Scan on asi_answerid_idx\n(cost=0.00..200.36 rows=17942 width=0) (actual time=8.133..8.133 rows=32614\nloops=1)\n Index Cond: (answerid = 127443)\n -> Bitmap Index Scan on asi_answerid_idx\n(cost=0.00..200.36 rows=17942 width=0) (actual time=6.498..6.498 rows=23539\nloops=1)\n Index Cond: (answerid = 127444)\n -> Bitmap Index Scan on asi_answerid_idx\n(cost=0.00..200.36 rows=17942 width=0) (actual time=5.935..5.935 rows=20368\nloops=1)\n Index Cond: (answerid = 127445)\n -> Bitmap Index Scan on asi_answerid_idx\n(cost=0.00..200.36 rows=17942 width=0) (actual time=6.619..6.619 rows=21812\nloops=1)\n Index Cond: (answerid = 127446)\n -> Bitmap Index Scan on asi_answerid_idx\n(cost=0.00..200.36 rows=17942 width=0) (actual time=3.039..3.039 rows=9562\nloops=1)\n Index Cond: (answerid = 127447)\n -> Bitmap Index Scan on asi_answerid_idx\n(cost=0.00..200.36 rows=17942 width=0) (actual time=11.490..11.490\nrows=35925 loops=1)\n Index Cond: (answerid = 127448)\n -> Hash (cost=103267.40..103267.40 rows=626540 width=4)\n(actual time=2161.933..2161.933 rows=626626 loops=1)\n -> Seq 
Scan on member m (cost=0.00..103267.40\nrows=626540 width=4) (actual time=0.009..1467.145 rows=626626 loops=1)\n Total runtime: 4875.015 ms\n\nI got it to run a million times faster than in it's original form simply by\nremoving the 'distinct' on the m.id because m.id is a primary key and adding\nthe distinct to a.memberid, but by changing the query in any way it breaks\nsome other part of the application as this is just a small part of the total\n\"building process\".\n\nI may be stuck between a rock and a very hard place as we don't have the\nresources at this time for someone to rewite the whole building (this is\njust a tiny part of the process that does what we call 'group building')\nprocedure.\n\nThanks to everyone that has responded thus far. Your input is appreciated\nand welcomed.\n\nAaron\n\n\n\n", "msg_date": "Tue, 17 Aug 2010 13:54:09 -0400", "msg_from": "Aaron Burnett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "Aaron Burnett <[email protected]> wrote:\n \n>>> 16 Gig RAM\n \n>>> 192MB work_mem (increasing to 400MB didn't change the outcome)\n>> \n>> What other non-default settings do you have?\n> \n> maintenance_work_mem = 1024MB\n> max_stack_depth = 8MB\n> max_fsm_pages = 8000000\n> max_fsm_relations = 2000\n \nSince you haven't set effective_cache_size, you're discouraging some\ntypes of plans which might be worth considering. This should\nnormally be set to the sum of your shared_buffers setting and\nwhatever is cached by the OS; try setting effective_cache_size to\n15MB. Speaking of shared_buffers, are you really at the default for\nthat, too? If so, try setting it to somewhere between 1GB and 4GB. \n(I would test at 1, 2, and 4 if possible, since the best setting is\ndependent on workload.)\n \nYou may also want to try adjustments to random_page_cost and\nseq_page_cost to see if you get a better plan. How large is the\nactive (frequently accessed) portion of your database? If your RAM\nis large enough to cover that, you should probably set both to equal\nvalues somewhere in the range of 0.1 to 0.005. (Again, testing with\nyour queries is important.) If your caching is significant (which I\nwould expect) but not enough to cover the active portion, you might\nwant to leave seq_page_cost alone and bring random_page_cost down to\nsomewhere around 2.\n \nAll of these except shared_buffers can be set in your session and\ntested quickly and easily, without any need to restart PostgreSQL.\n \nFor more information, check the manual and this Wiki page:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \n-Kevin\n", "msg_date": "Tue, 17 Aug 2010 13:19:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "On 18/08/10 06:19, Kevin Grittner wrote:\n>\n>\n> Since you haven't set effective_cache_size, you're discouraging some\n> types of plans which might be worth considering. This should\n> normally be set to the sum of your shared_buffers setting and\n> whatever is cached by the OS; try setting effective_cache_size to\n> 15MB. Speaking of shared_buffers, are you really at the default for\n> that, too? 
If so, try setting it to somewhere between 1GB and 4GB.\n> (I would test at 1, 2, and 4 if possible, since the best setting is\n> dependent on workload.)\n>\n>\n> \n\nKevin - I'm guessing you meant to suggest setting effective_cache_size \nto 15GB (not 15MB)....\n\nCheers\n\nMark\n", "msg_date": "Wed, 18 Aug 2010 10:27:45 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" }, { "msg_contents": "On Tue, Aug 17, 2010 at 7:54 PM, Aaron Burnett <[email protected]> wrote:\n> Yeah, missed a '.', it's 8.2.5\n\nCentos 5.5 has postgresql 8.4.4 available from the main repository.\nYou might consider an upgrade.\n", "msg_date": "Wed, 18 Aug 2010 13:48:07 +0200", "msg_from": "Hannes Frederic Sowa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance" } ]
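Pulling the thread's working pieces together, since they are scattered across several replies: the killer in the original plan is the IN-join driving an index probe 626,410 times, and the form that actually ran fast was a plain join (the join to member is still needed because of the orphaned memberid rows Aaron mentions). A sketch with the same literal answer ids used above; the composite index at the end is my own assumption with an invented name, not something anyone in the thread created:

-- Ran in roughly 5 seconds in the thread, versus ~85 minutes for the
-- original IN (SELECT ...) form on 8.2:
SELECT DISTINCT m.id
FROM answerselectinstance AS a
JOIN member AS m ON m.id = a.memberid
WHERE a.answerid IN (127443, 127444, 127445, 127446, 127447, 127448);

-- Hypothetical composite index along the lines of Mark's partial-index
-- suggestion, but without hard-coding a specific answerid range:
CREATE INDEX asi_answerid_memberid_idx
    ON answerselectinstance (answerid, memberid);

On the configuration side the thread's advice stands on its own: shared_buffers somewhere in the 1-4 GB range, effective_cache_size around 15 GB on this 16 GB machine (Kevin's 15MB was a slip, as Mark Kirkwood notes), and an upgrade off 8.2.5, for which CentOS already ships 8.4.4.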
[ { "msg_contents": "A little background - I have various multi-column indexes whenever I\nhave queries which restrict the output based on the values of the 2\nfields (for example, a client code and the date of a transaction).\n\nIs there a performance gain using this approach as opposed to using 2\nseparate indexes, one on the first column and one on the second column?\n\n \n\nThe reason I am asking is that my coding convetion goes back to the days\nwhere I used ISAM tables, so the systems did not know how to use more\nthan a single index.\n\nIn some cases, I may have an index on (columna, columnb) and one on\n(columnb, columna) due to the data access patterns. If there are no\nperformance gains in having these multi-part indexes, and performance\nwill be the same as having one index solely on columna and one solely on\ncolumnb, then I can reduce the disk usage significantly in some cases.\n\n\n\n\n\n\n\n\n\n\n\nA little background – I have various multi-column\nindexes whenever I have queries which restrict the output based on the values\nof the 2 fields (for example, a client code and the date of a transaction).\nIs there a performance gain using this approach as opposed\nto using 2 separate indexes, one on the first column and one on the second\ncolumn?\n \nThe reason I am asking is that my coding convetion goes back\nto the days where I used ISAM tables, so the systems did not know how to use\nmore than a single index.\nIn some cases, I may have an index on (columna, columnb)  and\none on (columnb, columna) due to the data access patterns.  If there are\nno performance gains in having these multi-part indexes, and performance will\nbe the same as having one index solely on columna and one solely on columnb,\nthen I can reduce the disk usage significantly in some cases.", "msg_date": "Mon, 16 Aug 2010 21:22:57 -0600", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Quesion on the use of indexes" }, { "msg_contents": "\"Benjamin Krajmalnik\" <[email protected]> writes:\n> A little background - I have various multi-column indexes whenever I\n> have queries which restrict the output based on the values of the 2\n> fields (for example, a client code and the date of a transaction).\n\n> Is there a performance gain using this approach as opposed to using 2\n> separate indexes, one on the first column and one on the second column?\n\nMaybe, maybe not ... it's going to depend on a bunch of factors, one of\nwhich is what your update load is like compared to the queries that read\nthe indexes. 
There's a bit of coverage of this in the fine manual: see\nhttp://www.postgresql.org/docs/8.4/static/indexes-multicolumn.html\nand the next few pages.\n\nThe short of it is that there's no substitute for doing your own\nexperiments for your own application ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Aug 2010 23:33:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quesion on the use of indexes " }, { "msg_contents": "Excerpts from Tom Lane's message of lun ago 16 23:33:29 -0400 2010:\n> \"Benjamin Krajmalnik\" <[email protected]> writes:\n> > A little background - I have various multi-column indexes whenever I\n> > have queries which restrict the output based on the values of the 2\n> > fields (for example, a client code and the date of a transaction).\n> \n> > Is there a performance gain using this approach as opposed to using 2\n> > separate indexes, one on the first column and one on the second column?\n> \n> Maybe, maybe not ... it's going to depend on a bunch of factors, one of\n> which is what your update load is like compared to the queries that read\n> the indexes. There's a bit of coverage of this in the fine manual: see\n> http://www.postgresql.org/docs/8.4/static/indexes-multicolumn.html\n> and the next few pages.\n\nAnother important factor is how selective is each clause in isolation\ncompared to how selective they are together. We have found that doing\nBitmapAnd of two bitmap-scanned indexes is sometimes much too slow\ncompared to a two-column index. (I have yet to see a case where indexes\nbeyond two columns are useful; at this point, combined bitmap indexscans\nare enough.)\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 17 Aug 2010 11:07:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quesion on the use of indexes" }, { "msg_contents": "Here's a quote from the docs:\n\nTo combine multiple indexes, the system scans each needed index and prepares a bitmap in memory giving the locations of table rows that are reported as matching that index's conditions. The bitmaps are then ANDed and ORed together as needed by the query. Finally, the actual table rows are visited and returned. The table rows are visited in physical order, because that is how the bitmap is laid out; this means that any ordering of the original indexes is lost, and so a separate sort step will be needed if the query has an ORDER BY clause. For this reason, and because each additional index scan adds extra time, the planner will sometimes choose to use a simple index scan even though additional indexes are available that could have been used as well\n\nSo, if you have a multi-column index, supply values from the major component on down, not skipping any (but not necessarily supplying all components, leaving the tail components), and the index is clustered, then you will get the best performance on a range scan. For equality scans, who knows? For high selectivity (meaning here, few hits) of single indexes the cost of preparing the bitmaps and such may be less than traversing the multi-index and visiting the table. 
For non-clustered multi-column, my bet would be on the single indexes up to some small number of indexes.\n\nAnd, as the docs say, the optimizer may well decide that it isn't worth the effort to use more than the most selective single index.\n\nRobert\n\n---- Original message ----\n>Date: Tue, 17 Aug 2010 11:07:39 -0400\n>From: [email protected] (on behalf of Alvaro Herrera <[email protected]>)\n>Subject: Re: [PERFORM] Quesion on the use of indexes \n>To: Tom Lane <[email protected]>\n>Cc: Benjamin Krajmalnik <[email protected]>,pgsql-performance <[email protected]>\n>\n>Excerpts from Tom Lane's message of lun ago 16 23:33:29 -0400 2010:\n>> \"Benjamin Krajmalnik\" <[email protected]> writes:\n>> > A little background - I have various multi-column indexes whenever I\n>> > have queries which restrict the output based on the values of the 2\n>> > fields (for example, a client code and the date of a transaction).\n>> \n>> > Is there a performance gain using this approach as opposed to using 2\n>> > separate indexes, one on the first column and one on the second column?\n>> \n>> Maybe, maybe not ... it's going to depend on a bunch of factors, one of\n>> which is what your update load is like compared to the queries that read\n>> the indexes. There's a bit of coverage of this in the fine manual: see\n>> http://www.postgresql.org/docs/8.4/static/indexes-multicolumn.html\n>> and the next few pages.\n>\n>Another important factor is how selective is each clause in isolation\n>compared to how selective they are together. We have found that doing\n>BitmapAnd of two bitmap-scanned indexes is sometimes much too slow\n>compared to a two-column index. (I have yet to see a case where indexes\n>beyond two columns are useful; at this point, combined bitmap indexscans\n>are enough.)\n>\n>-- \n>Álvaro Herrera <[email protected]>\n>The PostgreSQL Company - Command Prompt, Inc.\n>PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Aug 2010 11:27:56 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quesion on the use of indexes" } ]
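A minimal sketch of the experiment Tom and Alvaro describe in the thread above, assuming a hypothetical transactions table with client_code and tx_date columns standing in for the client-code/date example; none of these names come from the original schema:

-- hypothetical table and index names, used only for the comparison
CREATE INDEX tx_client_date_idx ON transactions (client_code, tx_date);
CREATE INDEX tx_client_idx ON transactions (client_code);
CREATE INDEX tx_date_idx ON transactions (tx_date);

-- compare the plan that uses the two-column index with the one the planner
-- builds by BitmapAnd-ing the two single-column indexes
EXPLAIN ANALYZE
SELECT *
FROM transactions
WHERE client_code = 'ABC123'
  AND tx_date >= DATE '2010-08-01';

Dropping the two-column index and repeating the EXPLAIN ANALYZE shows whether a BitmapAnd of the two single-column indexes is fast enough to justify the smaller on-disk footprint.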
[ { "msg_contents": "\nHi,\n\nI've database of lyrics and I'm using this query for suggest box.\nSELECT views, title, id FROM songs WHERE title ILIKE 'bey%' ORDER BY views DESC LIMIT 15;\nIn query plan is this line: -> Seq Scan on songs (cost=0.00..11473.56 rows=5055 width=23) (actual time=1.088..89.863 rows=77 loops=1)\nit takes about 90ms\n\nbut when i modify query (remove sort)\nSELECT views, title, id FROM songs WHERE title ILIKE 'bey%' LIMIT 15;\nIn query plan -> Seq Scan on songs (cost=0.00..11473.56 rows=5055 width=23) (actual time=1.020..20.601 rows=15 loops=1\nseq scan takes only 20ms now, why?\n\nOr any suggestion to optimize this query?\nIn table songs are about 150.000 rows.\n\nThank you for your reply.\n\nBest regards.\nMarek Fiala\n", "msg_date": "Tue, 17 Aug 2010 09:26:19 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Search query is curious" }, { "msg_contents": "On 17 August 2010 08:26, <[email protected]> wrote:\n>\n> Hi,\n>\n> I've database of lyrics and I'm using this query for suggest box.\n> SELECT views, title, id FROM songs  WHERE title ILIKE 'bey%' ORDER BY views DESC LIMIT 15;\n> In query plan is this line:  ->  Seq Scan on songs  (cost=0.00..11473.56 rows=5055 width=23) (actual time=1.088..89.863 rows=77 loops=1)\n> it takes about 90ms\n>\n> but when i modify query (remove sort)\n> SELECT views, title, id FROM songs  WHERE title ILIKE 'bey%' LIMIT 15;\n> In query plan ->  Seq Scan on songs  (cost=0.00..11473.56 rows=5055 width=23) (actual time=1.020..20.601 rows=15 loops=1\n> seq scan takes only 20ms now, why?\n\nSorts have a cost, so will take longer.\n\n> Or any suggestion to optimize this query?\n> In table songs are about 150.000 rows.\n\nIt might be an idea to add an index to your views column to prevent\nthe need for a sequential scan to sort. Also, ILIKE won't be able to\nuse an index, so if you wish to match against title, you may wish to\nchange your query to use:\n\nWHERE lower(title) LIKE ....\n\nAnd then create an index on lower(title).\n\nRegards\n\n-- \nThom Brown\nRegistered Linux user: #516935\n", "msg_date": "Tue, 17 Aug 2010 11:44:39 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Search query is curious" }, { "msg_contents": "Hello\n\n2010/8/17 <[email protected]>:\n>\n> Hi,\n>\n> I've database of lyrics and I'm using this query for suggest box.\n> SELECT views, title, id FROM songs  WHERE title ILIKE 'bey%' ORDER BY views DESC LIMIT 15;\n> In query plan is this line:  ->  Seq Scan on songs  (cost=0.00..11473.56 rows=5055 width=23) (actual time=1.088..89.863 rows=77 loops=1)\n> it takes about 90ms\n>\n> but when i modify query (remove sort)\n> SELECT views, title, id FROM songs  WHERE title ILIKE 'bey%' LIMIT 15;\n> In query plan ->  Seq Scan on songs  (cost=0.00..11473.56 rows=5055 width=23) (actual time=1.020..20.601 rows=15 loops=1\n> seq scan takes only 20ms now, why?\n>\n> Or any suggestion to optimize this query?\n\nwithout ORDER BY database returns first 15 rows where predicate is\ntrue. With ORDER BY the database has to find all rows where predicate\nis true and then has to sort it. 
So the first case can be much faster\nbecause a full table scan is not necessary.\n\nregards\n\nPavel Stehule\n\n> In table songs are about 150.000 rows.\n>\n> Thank you for your reply.\n>\n> Best regards.\n> Marek Fiala\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 17 Aug 2010 12:49:05 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Search query is curious" }, { "msg_contents": "> without ORDER BY database returns first 15 rows where predicate is\n> true. With ORDER BY the database has to find all rows where predicate\n> is true and then has to sort it. So the first case can be much faster\n> because a full table scan is not necessary.\n\nRight. Essentially, the ORDER BY happens before the LIMIT (so you have\nto sort everything before you take the first 15). If it were the other\nway around, you would take the first 15 rows Postgres happens to find\n(in an arbitrary order) and then sort these 15, which is probably not\nthat useful. Consider Thom's suggestion.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Tue, 17 Aug 2010 08:16:25 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Search query is curious" } ]
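A minimal sketch of Thom's suggestion against the songs table from the question; the text_pattern_ops operator class is an added assumption here (it is needed for a LIKE prefix match to use a btree index when the database locale is not C), and the index names are made up:

-- expression index so that lower(title) LIKE 'bey%' can use an index scan;
-- text_pattern_ops is assumed to be needed because the locale is probably not "C"
CREATE INDEX songs_lower_title_idx ON songs (lower(title) text_pattern_ops);

-- index on views, per the suggestion, to help the ORDER BY ... LIMIT
CREATE INDEX songs_views_idx ON songs (views);

EXPLAIN ANALYZE
SELECT views, title, id
FROM songs
WHERE lower(title) LIKE 'bey%'
ORDER BY views DESC
LIMIT 15;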
[ { "msg_contents": "Hi,\n\nPG 8.4.4\n\nI have an strange problem:\n\ncarmen=# VACUUM FULL verbose tp93t;\nINFO: vacuuming \"public.tp93t\"\nINFO: \"tp93t\": found 0 removable, 71984 nonremovable row versions in 17996\npages\nDETAIL: 70632 dead row versions cannot be removed yet.\nNonremovable row versions range from 1848 to 2032 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 1523648 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.03u sec elapsed 0.03 sec.\nINFO: index \"tp93t_pkey\" now contains 71984 row versions in 868 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_24274\"\nINFO: \"pg_toast_24274\": found 0 removable, 0 nonremovable row versions in 0\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_24274_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\narmen=# cluster tp93t;\nCLUSTER\n\ncarmen=# VACUUM FULL verbose tp93t;\nINFO: vacuuming \"public.tp93t\"\nINFO: \"tp93t\": found 0 removable, 71984 nonremovable row versions in 17996\npages\nDETAIL: 70632 dead row versions cannot be removed yet.\nNonremovable row versions range from 1848 to 2032 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 1523648 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.03u sec elapsed 0.03 sec.\nINFO: index \"tp93t_pkey\" now contains 71984 row versions in 868 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_24274\"\nINFO: \"pg_toast_24274\": found 0 removable, 0 nonremovable row versions in 0\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_24274_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\ncarmen=# select count(*) from tp93t;\n count\n-------\n 1352\n(1 row)\n\n\nI did't see any transactions locking this table and I think that CLUSTER\nwill recreate the table.\n\nThis is a temporary table, with one DELETE, Some INSERTs and a lot of\nUPDATES. 
And the UPDATES become slow and slow every time.\nThe only way to correct, is truncating the table.\n\nBest regards,\n\nAlexandre\n\nHi,PG 8.4.4I have an strange problem:carmen=# VACUUM FULL verbose tp93t;INFO:  vacuuming \"public.tp93t\"INFO:  \"tp93t\": found 0 removable, 71984 nonremovable row versions in 17996 pages\nDETAIL:  70632 dead row versions cannot be removed yet.Nonremovable row versions range from 1848 to 2032 bytes long.There were 0 unused item pointers.Total free space (including removable row versions) is 1523648 bytes.\n0 pages are or will become empty, including 0 at the end of the table.0 pages containing 0 free bytes are potential move destinations.CPU 0.00s/0.03u sec elapsed 0.03 sec.INFO:  index \"tp93t_pkey\" now contains 71984 row versions in 868 pages\nDETAIL:  0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  vacuuming \"pg_toast.pg_toast_24274\"INFO:  \"pg_toast_24274\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL:  0 dead row versions cannot be removed yet.Nonremovable row versions range from 0 to 0 bytes long.There were 0 unused item pointers.Total free space (including removable row versions) is 0 bytes.0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"pg_toast_24274_index\" now contains 0 row versions in 1 pagesDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.VACUUMarmen=# cluster tp93t;CLUSTERcarmen=# VACUUM FULL verbose tp93t;INFO:  vacuuming \"public.tp93t\"INFO:  \"tp93t\": found 0 removable, 71984 nonremovable row versions in 17996 pages\nDETAIL:  70632 dead row versions cannot be removed yet.Nonremovable row versions range from 1848 to 2032 bytes long.There were 0 unused item pointers.Total free space (including removable row versions) is 1523648 bytes.\n0 pages are or will become empty, including 0 at the end of the table.0 pages containing 0 free bytes are potential move destinations.CPU 0.00s/0.03u sec elapsed 0.03 sec.INFO:  index \"tp93t_pkey\" now contains 71984 row versions in 868 pages\nDETAIL:  0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  vacuuming \"pg_toast.pg_toast_24274\"INFO:  \"pg_toast_24274\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL:  0 dead row versions cannot be removed yet.Nonremovable row versions range from 0 to 0 bytes long.There were 0 unused item pointers.Total free space (including removable row versions) is 0 bytes.0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"pg_toast_24274_index\" now contains 0 row versions in 1 pagesDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.VACUUMcarmen=# select count(*) from tp93t; count -------  1352(1 row)I did't see any transactions locking this table and I think that CLUSTER will recreate the table.\nThis is a temporary table, with one DELETE, Some INSERTs and a lot of UPDATES. 
And the UPDATES become slow and slow every time.The only way to correct, is truncating the table.Best regards,Alexandre", "msg_date": "Tue, 17 Aug 2010 16:19:32 -0300", "msg_from": "Alexandre de Arruda Paes <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum Full + Cluster + Vacuum full = non removable dead rows" }, { "msg_contents": "On Tue, Aug 17, 2010 at 1:19 PM, Alexandre de Arruda Paes\n<[email protected]> wrote:\n> Hi,\n>\n> PG 8.4.4\n> I did't see any transactions locking this table and I think that CLUSTER\n> will recreate the table.\n\nPrepared transactions?\n\n> This is a temporary table, with one DELETE, Some INSERTs and a lot of\n> UPDATES. And the UPDATES become slow and slow every time.\n> The only way to correct, is truncating the table.\n\nAnd you're sure there aren't any \"idle in transaction\" connections/\n", "msg_date": "Tue, 17 Aug 2010 13:24:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum Full + Cluster + Vacuum full = non removable\n dead rows" }, { "msg_contents": "I'm forwarding again this email to list, because Me and Scoot unfortunately\nwas talking alone. (thanks Scott)\n\n>So what do:\n>select * from pg_stat_activity where current_query ilike '%transaction%';\n>and\n>select * from pg_stat_activity where now()-current_query > '1\nminute'::interval;\n>say?\n>You should really avoid vacuum full, and stick to vacuum (plain). At\n>least until you can get the tuples to be freed up. Each time you run\n>it you bloat your indexes.\n\nTo clarify:\n\nThis is a production server with lots of connection and the commands above\nreturns a lot of rows, but nothing related with this table (see bellow).\nI know the problem with VACUUM FULL and bloated Indexes, but I don't\nunderstand why the table that is not in use by nobody, cant be vacuumed or\nclustered to avoid dead tuples.\nSingle VACUUM cant recover this dead tuples too.\n\nI see an opened transaction (this is a tomcat servlet webpage), but killing\nthis transaction does not help the VACUUM:\n\n<webpa 192.168.1.1 2010-08-17 18:36:40.459 BRT 243345>LOG: execute S_1:\nBEGIN\n<webpa 192.168.1.1 2010-08-17 18:36:40.459 BRT 243346>LOG: duration: 0.010\nms\n<webpa 192.168.1.1 2010-08-17 18:36:40.459 BRT 243347>LOG: duration: 0.362\nms\n<webpa 192.168.1.1 2010-08-17 18:36:40.460 BRT 243348>LOG: duration: 0.703\nms\n<webpa 192.168.1.1 2010-08-17 18:36:40.460 BRT 243349>LOG: execute\n<unnamed>: SELECT TP93usuari, TP93Objeto, TP93Ca251, TP93Nm0805, TP93Nm0804,\nTP93Ca501, TP93Ca2001, TP93Nm1521, TP93Nm0803, TP93Ca253, TP93Nm1522,\nTP93Nm0801, TP93Nm0802, TP93Chave FROM TP93T WHERE (TP93usuari = $1) AND\n(TP93Objeto = 'PC0658PP') AND (TP93Ca251 >= $2) ORDER BY TP93Chave\n<webpa 192.168.1.1 2010-08-17 18:36:40.460 BRT 243350>DETAIL: parameters:\n$1 = 'WEBCLIENTE ', $2 = ' '\n<webpa 192.168.1.1 2010-08-17 18:36:40.469 BRT 243351>LOG: duration: 9.302\nms\n\n[postgres@servernew logs]$ psql carmen\npsql (8.4.4)\nType \"help\" for help.\n\ncarmen=# select * from vlocks where relname='tp93t'; select * from\npg_stat_activity where usename='webpa';\n\n datname | relname | virtualtransaction | mode | granted |\nusename | substr | query_start |\nage | procpid\n---------+---------+--------------------+-----------------+---------+---------+-----------------------+-------------------------------+-----------------+---------\n carmen | tp93t | 25/4319 | AccessShareLock | t |\nwebpa | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 |\n00:01:09.455456 | 1917\n(1 
row)\n\n\n datid | datname | procpid | usesysid | usename | current_query |\nwaiting | xact_start | query_start\n| backend_start | client_addr | client_port\n-------+---------+---------+-----------+---------+-----------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------\n 16745 | carmen | 1917 | 750377993 | webpa | <IDLE> in transaction |\nf | 2010-08-17 18:36:40.459531-03 | 2010-08-17 18:36:40.460657-03 |\n2010-08-17 18:36:09.917687-03 | 192.168.1.1 | 39027\n(1 row)\n\ncarmen=# select * from vlocks where usename='webpa';\n\n datname | relname | virtualtransaction | mode | granted |\nusename | substr | query_start |\nage | procpid\n---------+------------+--------------------+-----------------+---------+---------+-----------------------+-------------------------------+-----------------+---------\n carmen | tp93t_pkey | 25/4319 | AccessShareLock | t |\nwebpa | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 |\n00:01:16.618563 | 1917\n carmen | tp93t | 25/4319 | AccessShareLock | t |\nwebpa | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 |\n00:01:16.618563 | 1917\n carmen | | 25/4319 | ExclusiveLock | t |\nwebpa | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 |\n00:01:16.618563 | 1917\n(3 rows)\n\n-----------------------------------------------------------------------------------------------\n\nOK, I will kill the backend and run vacuum:\n\ncarmen=# select pg_terminate_backend(1917);\n pg_terminate_backend\n----------------------\n t\n(1 row)\n\ncarmen=# select * from vlocks where relname='tp93t'; select * from\npg_stat_activity where usename='webpa';\n\n datname | relname | virtualtransaction | mode | granted | usename | substr\n| query_start | age | procpid\n---------+---------+--------------------+------+---------+---------+--------+-------------+-----+---------\n(0 rows)\n\n datid | datname | procpid | usesysid | usename | current_query |\nwaiting | xact_start | query_start\n| backend_start | client_addr | client_port\n-------+---------+---------+--\n---------+---------+-----------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------\n(0 rows)\n\n\ncarmen=# VACUUM verbose tp93t;\nINFO: vacuuming \"public.tp93t\"\nINFO: index \"tp93t_pkey\" now contains 5592 row versions in 103 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"tp93t\": found 0 removable, 19336 nonremovable row versions in 4887\nout of 4887 pages\nDETAIL: 19126 dead row versions cannot be removed yet.\n\n\ncarmen=# VACUUM FULL verbose tp93t;\nINFO: vacuuming \"public.tp93t\"\nINFO: \"tp93t\": found 0 removable, 19336 nonremovable row versions in 4887\npages\nDETAIL: 19126 dead row versions cannot be removed yet.\nNonremovable row versions range from 1853 to 2029 bytes long.\nThere were 210 unused item pointers.\n(...)\n\n\n\n\n\n\n\n2010/8/17 Scott Marlowe <[email protected]>\n\n> On Tue, Aug 17, 2010 at 2:28 PM, Alexandre de Arruda Paes\n>\n> <[email protected]> wrote:\n>\n> So what do:\n> select * from pg_stat_activity where current_query ilike '%transaction%';\n> and\n> select * from pg_stat_activity where now()-current_query > '1\n> minute'::interval;\n> say?\n>\n> > And its the dead rows is growing:\n> >\n> > carmen=# VACUUM FULL verbose tp93t;\n>\n> You should really avoid vacuum full, and stick to vacuum 
(plain). At\n> least until you can get the tuples to be freed up. Each time you run\n> it you bloat your indexes.\n>\n> > INFO: vacuuming \"public.tp93t\"\n> > INFO: \"tp93t\": found 1309 removable, 313890 nonremovable row versions in\n> > 78800 pages\n> > DETAIL: 312581 dead row versions cannot be removed yet.\n> > Nonremovable row versions range from 1845 to 2032 bytes long.\n> > There were 3014 unused item pointers.\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n\nI'm forwarding again this email to list, because Me and Scoot unfortunately was talking alone. (thanks Scott)>So what do:\n>select * from pg_stat_activity where current_query ilike '%transaction%';\n>and\n>select * from pg_stat_activity where now()-current_query > '1 minute'::interval;\n>say?\n>You should really avoid vacuum full, and stick to vacuum (plain).  At\n>least until you can get the tuples to be freed up.  Each time you run\n>it you bloat your indexes.To clarify:This is a production server with lots of connection and the commands above returns a lot of rows, but nothing related with this table (see bellow).I know the problem with VACUUM FULL and bloated Indexes, but I don't understand why the table that is not in use by nobody, cant be vacuumed or clustered to avoid dead tuples.\n\nSingle VACUUM cant recover this dead tuples too. I see an opened transaction (this is a tomcat servlet webpage), but killing this transaction does not help the VACUUM:\n<webpa 192.168.1.1 2010-08-17 18:36:40.459 BRT 243345>LOG:  execute S_1: BEGIN<webpa 192.168.1.1 2010-08-17 18:36:40.459 BRT 243346>LOG:  duration: 0.010 ms<webpa 192.168.1.1 2010-08-17 18:36:40.459 BRT 243347>LOG:  duration: 0.362 ms\n\n<webpa 192.168.1.1 2010-08-17 18:36:40.460 BRT 243348>LOG:  duration: 0.703 ms<webpa 192.168.1.1 2010-08-17 18:36:40.460 BRT 243349>LOG:  execute <unnamed>: SELECT TP93usuari, TP93Objeto, TP93Ca251, TP93Nm0805, TP93Nm0804, TP93Ca501, TP93Ca2001, TP93Nm1521, TP93Nm0803, TP93Ca253, TP93Nm1522, TP93Nm0801, TP93Nm0802, TP93Chave FROM TP93T WHERE (TP93usuari = $1) AND (TP93Objeto = 'PC0658PP') AND (TP93Ca251 >= $2) ORDER BY TP93Chave \n\n<webpa 192.168.1.1 2010-08-17 18:36:40.460 BRT 243350>DETAIL:  parameters: $1 = 'WEBCLIENTE          ', $2 = '                    '<webpa 192.168.1.1 2010-08-17 18:36:40.469 BRT 243351>LOG:  duration: 9.302 ms\n[postgres@servernew logs]$ psql carmenpsql (8.4.4)Type \"help\" for help.carmen=# select * from vlocks where relname='tp93t'; select * from pg_stat_activity where usename='webpa';\n\n datname | relname | virtualtransaction |      mode       | granted | usename |        substr         |          query_start          |       age       | procpid ---------+---------+--------------------+-----------------+---------+---------+-----------------------+-------------------------------+-----------------+---------\n\n carmen  | tp93t   | 25/4319            | AccessShareLock | t       | webpa   | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 | 00:01:09.455456 |    1917(1 row) datid | datname | procpid | usesysid  | usename |     current_query     | waiting |          xact_start           |          query_start          |         backend_start         | client_addr | client_port \n\n-------+---------+---------+-----------+---------+-----------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+------------- 16745 | carmen  |    1917 | 750377993 | webpa   | <IDLE> in transaction | f       | 
2010-08-17 18:36:40.459531-03 | 2010-08-17 18:36:40.460657-03 | 2010-08-17 18:36:09.917687-03 | 192.168.1.1 |       39027\n\n(1 row)carmen=# select * from vlocks where usename='webpa'; datname |  relname   | virtualtransaction |      mode       | granted | usename |        substr         |          query_start          |       age       | procpid \n\n---------+------------+--------------------+-----------------+---------+---------+-----------------------+-------------------------------+-----------------+--------- carmen  | tp93t_pkey | 25/4319            | AccessShareLock | t       | webpa   | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 | 00:01:16.618563 |    1917\n\n carmen  | tp93t      | 25/4319            | AccessShareLock | t       | webpa   | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 | 00:01:16.618563 |    1917 carmen  |            | 25/4319            | ExclusiveLock   | t       | webpa   | <IDLE> in transaction | 2010-08-17 18:36:40.460657-03 | 00:01:16.618563 |    1917\n\n(3 rows)-----------------------------------------------------------------------------------------------OK, I will kill the backend and run vacuum:carmen=# select pg_terminate_backend(1917);\n pg_terminate_backend ---------------------- t(1 row)carmen=# select * from vlocks where relname='tp93t'; select * from pg_stat_activity where usename='webpa'; datname | relname | virtualtransaction | mode | granted | usename | substr | query_start | age | procpid \n\n---------+---------+--------------------+------+---------+---------+--------+-------------+-----+---------(0 rows) datid | datname | procpid | usesysid  | usename |     \ncurrent_query     | waiting |          xact_start           |          \nquery_start          |         backend_start         | client_addr | \nclient_port \n-------+---------+---------+-----------+---------+-----------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------\n\n(0 rows)carmen=# VACUUM verbose tp93t;INFO:  vacuuming \"public.tp93t\"INFO:  index \"tp93t_pkey\" now contains 5592 row versions in 103 pages\nDETAIL:  0 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  \"tp93t\": found 0 removable, 19336 nonremovable row versions in 4887 out of 4887 pages\n\nDETAIL:  19126 dead row versions cannot be removed yet.carmen=# VACUUM FULL verbose tp93t;INFO:  vacuuming \"public.tp93t\"INFO:  \"tp93t\": found 0 removable, 19336 nonremovable row versions in 4887 pages\n\nDETAIL:  19126 dead row versions cannot be removed yet.Nonremovable row versions range from 1853 to 2029 bytes long.There were 210 unused item pointers.(...)\n\n2010/8/17 Scott Marlowe <[email protected]>\n\nOn Tue, Aug 17, 2010 at 2:28 PM, Alexandre de Arruda Paes\n<[email protected]> wrote:\n\nSo what do:\nselect * from pg_stat_activity where current_query ilike '%transaction%';\nand\nselect * from pg_stat_activity where now()-current_query > '1 minute'::interval;\nsay?\n\n> And its the dead rows is growing:\n>\n> carmen=# VACUUM FULL verbose tp93t;\n\nYou should really avoid vacuum full, and stick to vacuum (plain).  At\nleast until you can get the tuples to be freed up.  
Each time you run\nit you bloat your indexes.\n\n> INFO:  vacuuming \"public.tp93t\"\n> INFO:  \"tp93t\": found 1309 removable, 313890 nonremovable row versions in\n> 78800 pages\n> DETAIL:  312581 dead row versions cannot be removed yet.\n> Nonremovable row versions range from 1845 to 2032 bytes long.\n> There were 3014 unused item pointers.\n\n\n--\nTo understand recursion, one must first understand recursion.", "msg_date": "Wed, 18 Aug 2010 10:07:17 -0300", "msg_from": "Alexandre de Arruda Paes <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Vacuum Full + Cluster + Vacuum full = non removable\n dead rows" }, { "msg_contents": "Alexandre de Arruda Paes <[email protected]> writes:\n> I know the problem with VACUUM FULL and bloated Indexes, but I don't\n> understand why the table that is not in use by nobody, cant be vacuumed or\n> clustered to avoid dead tuples.\n\nThere's an open transaction somewhere that VACUUM is preserving the\ntuples for. This transaction need not ever have touched the table,\nor ever intend to touch the table --- but VACUUM cannot know that,\nso it saves any tuples that the transaction might be entitled to see\nif it looked.\n\n> carmen=# select * from vlocks where relname='tp93t'; select * from\n> pg_stat_activity where usename='webpa';\n\nYou keep on showing us only subsets of pg_stat_activity :-(\n\nAlso, if you don't see anything in pg_stat_activity, try pg_prepared_xacts.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 2010 09:22:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n\trows" }, { "msg_contents": "Hi Tom,\n\nBelow, the pg_prepared_xacts result.\nThe only way to restore the table is with TRUNCATE.\nVacuum, Vacuum full, cluster not help and subsequent updates will become\nslow and slow.\n\n\ncarmen=# select * from vlocks where relname='tp93t'; select * from\npg_stat_activity where usename='webpa'; select * from pg_prepared_xacts;\n datname | relname | virtualtransaction | mode | granted | usename | substr\n| query_start | age | procpid\n---------+---------+--------------------+------+---------+---------+--------+-------------+-----+---------\n(0 rows)\n\n datid | datname | procpid | usesysid | usename | current_query | waiting |\nxact_start | query_start | backend_start |\nclient_addr | client_port\n-------+---------+---------+-----------+---------+---------------+---------+------------+-------------------------------+-------------------------------+-------------+-------------\n 16745 | carmen | 19345 | 750377993 | webpa | <IDLE> | f\n| | 2010-08-19 09:40:44.295753-03 | 2010-08-19 09:38:45.637543-03\n| 192.168.1.1 | 59867\n(1 row)\n\n transaction | gid | prepared | owner | database\n-------------+-----+----------+-------+----------\n(0 rows)\n\ncarmen=# VACUUM full verbose tp93t;\nINFO: vacuuming \"public.tp93t\"\nINFO: \"tp93t\": found 0 removable, 38588 nonremovable row versions in 9700\npages\nDETAIL: 38378 dead row versions cannot be removed yet.\nNonremovable row versions range from 1853 to 2029 bytes long.\nThere were 317 unused item pointers.\nTotal free space (including removable row versions) is 1178860 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n190 pages containing 442568 free bytes are potential move destinations.\nCPU 0.00s/0.02u sec elapsed 0.02 sec.\nINFO: index \"tp93t_pkey\" now contains 11597 row versions in 195 pages\nDETAIL: 0 index row versions were removed.\n0 
index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"tp93t\": moved 0 row versions, truncated 9700 to 9700 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_24274\"\nINFO: \"pg_toast_24274\": found 0 removable, 0 nonremovable row versions in 0\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_24274_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\n\n2010/8/18 Tom Lane <[email protected]>\n\n> Alexandre de Arruda Paes <[email protected]> writes:\n> > I know the problem with VACUUM FULL and bloated Indexes, but I don't\n> > understand why the table that is not in use by nobody, cant be vacuumed\n> or\n> > clustered to avoid dead tuples.\n>\n> There's an open transaction somewhere that VACUUM is preserving the\n> tuples for. This transaction need not ever have touched the table,\n> or ever intend to touch the table --- but VACUUM cannot know that,\n> so it saves any tuples that the transaction might be entitled to see\n> if it looked.\n>\n> > carmen=# select * from vlocks where relname='tp93t'; select * from\n> > pg_stat_activity where usename='webpa';\n>\n> You keep on showing us only subsets of pg_stat_activity :-(\n>\n> Also, if you don't see anything in pg_stat_activity, try pg_prepared_xacts.\n>\n> regards, tom lane\n>\n\nHi Tom,Below, the pg_prepared_xacts result.The only way to restore the table is with TRUNCATE.Vacuum, Vacuum full,  cluster not help and subsequent updates will become slow and slow.carmen=# select * from vlocks where relname='tp93t'; select * from pg_stat_activity where usename='webpa'; select * from pg_prepared_xacts;\n datname | relname | virtualtransaction | mode | granted | usename | substr | query_start | age | procpid ---------+---------+--------------------+------+---------+---------+--------+-------------+-----+---------(0 rows)\n datid | datname | procpid | usesysid  | usename | current_query | waiting | xact_start |          query_start          |         backend_start         | client_addr | client_port -------+---------+---------+-----------+---------+---------------+---------+------------+-------------------------------+-------------------------------+-------------+-------------\n 16745 | carmen  |   19345 | 750377993 | webpa   | <IDLE>        | f       |            | 2010-08-19 09:40:44.295753-03 | 2010-08-19 09:38:45.637543-03 | 192.168.1.1 |       59867(1 row) transaction | gid | prepared | owner | database \n-------------+-----+----------+-------+----------(0 rows)carmen=# VACUUM full verbose tp93t;INFO:  vacuuming \"public.tp93t\"INFO:  \"tp93t\": found 0 removable, 38588 nonremovable row versions in 9700 pages\nDETAIL:  38378 dead row versions cannot be removed yet.Nonremovable row versions range from 1853 to 2029 bytes long.There were 317 unused item pointers.Total free space (including removable row versions) is 1178860 bytes.\n0 pages are or will become empty, including 0 at the end of the table.190 pages containing 442568 free bytes are potential move destinations.CPU 0.00s/0.02u 
sec elapsed 0.02 sec.INFO:  index \"tp93t_pkey\" now contains 11597 row versions in 195 pages\nDETAIL:  0 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  \"tp93t\": moved 0 row versions, truncated 9700 to 9700 pages\nDETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  vacuuming \"pg_toast.pg_toast_24274\"INFO:  \"pg_toast_24274\": found 0 removable, 0 nonremovable row versions in 0 pagesDETAIL:  0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.There were 0 unused item pointers.Total free space (including removable row versions) is 0 bytes.0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"pg_toast_24274_index\" now contains 0 row versions in 1 pagesDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.VACUUM2010/8/18 Tom Lane <[email protected]>\nAlexandre de Arruda Paes <[email protected]> writes:\n> I know the problem with VACUUM FULL and bloated Indexes, but I don't\n> understand why the table that is not in use by nobody, cant be vacuumed or\n> clustered to avoid dead tuples.\n\nThere's an open transaction somewhere that VACUUM is preserving the\ntuples for.  This transaction need not ever have touched the table,\nor ever intend to touch the table --- but VACUUM cannot know that,\nso it saves any tuples that the transaction might be entitled to see\nif it looked.\n\n> carmen=# select * from vlocks where relname='tp93t'; select * from\n> pg_stat_activity where usename='webpa';\n\nYou keep on showing us only subsets of pg_stat_activity :-(\n\nAlso, if you don't see anything in pg_stat_activity, try pg_prepared_xacts.\n\n                        regards, tom lane", "msg_date": "Thu, 19 Aug 2010 09:57:12 -0300", "msg_from": "Alexandre de Arruda Paes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "Alexandre de Arruda Paes <[email protected]> wrote:\n> 2010/8/18 Tom Lane <[email protected]>\n \n>> There's an open transaction somewhere that VACUUM is preserving\n>> the tuples for. 
This transaction need not ever have touched the\n>> table, or ever intend to touch the table --- but VACUUM cannot\n>> know that, so it saves any tuples that the transaction might be\n>> entitled to see if it looked.\n>>\n>> > carmen=# select * from vlocks where relname='tp93t'; select *\n>> > from pg_stat_activity where usename='webpa';\n>>\n>> You keep on showing us only subsets of pg_stat_activity :-(\n \n> select * from pg_stat_activity where usename='webpa';\n \nYou keep on showing us only subsets of pg_stat_activity :-(\n \n*ANY* open transaction, including \"idle in transaction\" including\ntransactions by other users in other databases will prevent vacuum\nfrom cleaning up rows, for the reasons Tom already gave you.\n \nWhat do you get from?:\n \nselect * from pg_stat_activity where current_query <> '<IDLE>'\n order by xact_start limit 10;\n \n-Kevin\n", "msg_date": "Thu, 19 Aug 2010 08:41:38 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "Alexandre de Arruda Paes <[email protected]> writes:\n> Below, the pg_prepared_xacts result.\n\nOK, so you don't have any prepared transactions, but you're still not\nshowing us the full content of pg_stat_activity.\n\nJust out of curiosity, how many rows does \"select count(*) from tp93t\"\nthink there are?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Aug 2010 09:56:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n\trows" }, { "msg_contents": "So, does it mean that VACUUM will never clean dead rows if you have a\nnon-stop transactional activity in your PG database???... (24/7 OLTP\nfor ex.)\n\nRgds,\n-Dimitri\n\n\nOn 8/19/10, Kevin Grittner <[email protected]> wrote:\n> Alexandre de Arruda Paes <[email protected]> wrote:\n>> 2010/8/18 Tom Lane <[email protected]>\n>\n>>> There's an open transaction somewhere that VACUUM is preserving\n>>> the tuples for. This transaction need not ever have touched the\n>>> table, or ever intend to touch the table --- but VACUUM cannot\n>>> know that, so it saves any tuples that the transaction might be\n>>> entitled to see if it looked.\n>>>\n>>> > carmen=# select * from vlocks where relname='tp93t'; select *\n>>> > from pg_stat_activity where usename='webpa';\n>>>\n>>> You keep on showing us only subsets of pg_stat_activity :-(\n>\n>> select * from pg_stat_activity where usename='webpa';\n>\n> You keep on showing us only subsets of pg_stat_activity :-(\n>\n> *ANY* open transaction, including \"idle in transaction\" including\n> transactions by other users in other databases will prevent vacuum\n> from cleaning up rows, for the reasons Tom already gave you.\n>\n> What do you get from?:\n>\n> select * from pg_stat_activity where current_query <> '<IDLE>'\n> order by xact_start limit 10;\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 21 Aug 2010 10:25:45 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "No, it means it can't clean rows that are younger than the oldest\ntransaction currently in progress. 
if you started a transaction 5\nhours ago, then all the dead tuples created in the last 5 hours are\nnot recoverable. Dead tuples created before that transaction are\nrecoverable. If you run transactions for days or weeks, then you're\ngonna have issues.\n\nOn Sat, Aug 21, 2010 at 2:25 AM, Dimitri <[email protected]> wrote:\n> So, does it mean that VACUUM will never clean dead rows if you have a\n> non-stop transactional activity in your PG database???... (24/7 OLTP\n> for ex.)\n>\n> Rgds,\n> -Dimitri\n>\n>\n> On 8/19/10, Kevin Grittner <[email protected]> wrote:\n>> Alexandre de Arruda Paes <[email protected]> wrote:\n>>> 2010/8/18 Tom Lane <[email protected]>\n>>\n>>>> There's an open transaction somewhere that VACUUM is preserving\n>>>> the tuples for.  This transaction need not ever have touched the\n>>>> table, or ever intend to touch the table --- but VACUUM cannot\n>>>> know that, so it saves any tuples that the transaction might be\n>>>> entitled to see if it looked.\n>>>>\n>>>> > carmen=# select * from vlocks where relname='tp93t'; select *\n>>>> > from pg_stat_activity where usename='webpa';\n>>>>\n>>>> You keep on showing us only subsets of pg_stat_activity :-(\n>>\n>>> select * from pg_stat_activity where usename='webpa';\n>>\n>> You keep on showing us only subsets of pg_stat_activity :-(\n>>\n>> *ANY* open transaction, including \"idle in transaction\" including\n>> transactions by other users in other databases will prevent vacuum\n>> from cleaning up rows, for the reasons Tom already gave you.\n>>\n>> What do you get from?:\n>>\n>> select * from pg_stat_activity where current_query <> '<IDLE>'\n>>   order by xact_start limit 10;\n>>\n>> -Kevin\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Sat, 21 Aug 2010 02:58:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "Great! - it's what I expected until now :-)\nbut discussion in this thread put my mind in trouble :-))\n\nSo, the advice for Alexandre here is just to check the age of the\noldest running transaction and the last time when the table in\nquestion was modified.. - if modification time is older than the\noldest transaction = we have a problem in PG.. Otherwise it works as\nexpected to match MVCC.\n\nRgds,\n-Dimitri\n\n\nOn 8/21/10, Scott Marlowe <[email protected]> wrote:\n> No, it means it can't clean rows that are younger than the oldest\n> transaction currently in progress. if you started a transaction 5\n> hours ago, then all the dead tuples created in the last 5 hours are\n> not recoverable. Dead tuples created before that transaction are\n> recoverable. If you run transactions for days or weeks, then you're\n> gonna have issues.\n>\n> On Sat, Aug 21, 2010 at 2:25 AM, Dimitri <[email protected]> wrote:\n>> So, does it mean that VACUUM will never clean dead rows if you have a\n>> non-stop transactional activity in your PG database???... 
(24/7 OLTP\n>> for ex.)\n>>\n>> Rgds,\n>> -Dimitri\n>>\n>>\n>> On 8/19/10, Kevin Grittner <[email protected]> wrote:\n>>> Alexandre de Arruda Paes <[email protected]> wrote:\n>>>> 2010/8/18 Tom Lane <[email protected]>\n>>>\n>>>>> There's an open transaction somewhere that VACUUM is preserving\n>>>>> the tuples for.  This transaction need not ever have touched the\n>>>>> table, or ever intend to touch the table --- but VACUUM cannot\n>>>>> know that, so it saves any tuples that the transaction might be\n>>>>> entitled to see if it looked.\n>>>>>\n>>>>> > carmen=# select * from vlocks where relname='tp93t'; select *\n>>>>> > from pg_stat_activity where usename='webpa';\n>>>>>\n>>>>> You keep on showing us only subsets of pg_stat_activity :-(\n>>>\n>>>> select * from pg_stat_activity where usename='webpa';\n>>>\n>>> You keep on showing us only subsets of pg_stat_activity :-(\n>>>\n>>> *ANY* open transaction, including \"idle in transaction\" including\n>>> transactions by other users in other databases will prevent vacuum\n>>> from cleaning up rows, for the reasons Tom already gave you.\n>>>\n>>> What do you get from?:\n>>>\n>>> select * from pg_stat_activity where current_query <> '<IDLE>'\n>>>   order by xact_start limit 10;\n>>>\n>>> -Kevin\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n", "msg_date": "Sat, 21 Aug 2010 11:12:54 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "2010/8/21 Dimitri <[email protected]>\n\n> Great! - it's what I expected until now :-)\n> but discussion in this thread put my mind in trouble :-))\n>\n> So, the advice for Alexandre here is just to check the age of the\n> oldest running transaction and the last time when the table in\n> question was modified.. - if modification time is older than the\n> oldest transaction = we have a problem in PG.. Otherwise it works as\n> expected to match MVCC.\n>\n> Rgds,\n> -Dimitri\n>\n>\nUnfortunately, the customer can't wait for the solution and the programmer\neliminated the\nuse of this table by using a in-memory array.\n\nI understood that all transactions, touching this table or not, can affect\nthe ability of the vacuum to recover the dead tuples.\nIn my scenario, it's too bad because I have long transactions and I really\nnot know when I will recover this tuples.\nAnd, like I sad, the table will become more slow every time.\n\nOnly for discussion: the CLUSTER command, in my little knowledge, is a\nintrusive command that's cannot recover the dead tuples too.\n\nOnly TRUNCATE can do this job, but obviously is not applicable all the time.\n\nBest regards,\n\nAlexandre\n\n\n\n> On 8/21/10, Scott Marlowe <[email protected]> wrote:\n> > No, it means it can't clean rows that are younger than the oldest\n> > transaction currently in progress. if you started a transaction 5\n> > hours ago, then all the dead tuples created in the last 5 hours are\n> > not recoverable. Dead tuples created before that transaction are\n> > recoverable. 
If you run transactions for days or weeks, then you're\n> > gonna have issues.\n> >\n> > On Sat, Aug 21, 2010 at 2:25 AM, Dimitri <[email protected]> wrote:\n> >> So, does it mean that VACUUM will never clean dead rows if you have a\n> >> non-stop transactional activity in your PG database???... (24/7 OLTP\n> >> for ex.)\n> >>\n> >> Rgds,\n> >> -Dimitri\n> >>\n> >>\n> >> On 8/19/10, Kevin Grittner <[email protected]> wrote:\n> >>> Alexandre de Arruda Paes <[email protected]> wrote:\n> >>>> 2010/8/18 Tom Lane <[email protected]>\n> >>>\n> >>>>> There's an open transaction somewhere that VACUUM is preserving\n> >>>>> the tuples for. This transaction need not ever have touched the\n> >>>>> table, or ever intend to touch the table --- but VACUUM cannot\n> >>>>> know that, so it saves any tuples that the transaction might be\n> >>>>> entitled to see if it looked.\n> >>>>>\n> >>>>> > carmen=# select * from vlocks where relname='tp93t'; select *\n> >>>>> > from pg_stat_activity where usename='webpa';\n> >>>>>\n> >>>>> You keep on showing us only subsets of pg_stat_activity :-(\n> >>>\n> >>>> select * from pg_stat_activity where usename='webpa';\n> >>>\n> >>> You keep on showing us only subsets of pg_stat_activity :-(\n> >>>\n> >>> *ANY* open transaction, including \"idle in transaction\" including\n> >>> transactions by other users in other databases will prevent vacuum\n> >>> from cleaning up rows, for the reasons Tom already gave you.\n> >>>\n> >>> What do you get from?:\n> >>>\n> >>> select * from pg_stat_activity where current_query <> '<IDLE>'\n> >>> order by xact_start limit 10;\n> >>>\n> >>> -Kevin\n> >>>\n> >>> --\n> >>> Sent via pgsql-performance mailing list\n> >>> ([email protected])\n> >>> To make changes to your subscription:\n> >>> http://www.postgresql.org/mailpref/pgsql-performance\n> >>>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >\n> >\n> >\n> > --\n> > To understand recursion, one must first understand recursion.\n> >\n>\n\n2010/8/21 Dimitri <[email protected]>\nGreat! - it's what I expected until now :-)\nbut discussion in this thread put my mind in trouble :-))\n\nSo, the advice for Alexandre here is just to check the age of the\noldest running transaction and the last time when the table in\nquestion was modified.. - if modification time is older than the\noldest transaction = we have a problem in PG.. Otherwise it works as\nexpected to match MVCC.\n\nRgds,\n-Dimitri\nUnfortunately, the customer can't wait for the solution and the programmer eliminated the \nuse of this table by using a in-memory array.\n\nI understood that all transactions, touching this table or not, can affect the ability of the vacuum to recover the dead tuples.\nIn my scenario, it's too bad because I have long transactions and I really not know when I will recover this tuples.\nAnd, like I sad, the table will become more slow every time.\n\nOnly for discussion: the CLUSTER command, in my little knowledge, is a \nintrusive command that's cannot recover the dead tuples too.\nOnly TRUNCATE can do this job, but obviously is not applicable all the time.Best regards,Alexandre\n\n\nOn 8/21/10, Scott Marlowe <[email protected]> wrote:\n> No, it means it can't clean rows that are younger than the oldest\n> transaction currently in progress.  if you started a transaction 5\n> hours ago, then all the dead tuples created in the last 5 hours are\n> not recoverable. 
 Dead tuples created before that transaction are\n> recoverable.  If you run transactions for days or weeks, then you're\n> gonna have issues.\n>\n> On Sat, Aug 21, 2010 at 2:25 AM, Dimitri <[email protected]> wrote:\n>> So, does it mean that VACUUM will never clean dead rows if you have a\n>> non-stop transactional activity in your PG database???... (24/7 OLTP\n>> for ex.)\n>>\n>> Rgds,\n>> -Dimitri\n>>\n>>\n>> On 8/19/10, Kevin Grittner <[email protected]> wrote:\n>>> Alexandre de Arruda Paes <[email protected]> wrote:\n>>>> 2010/8/18 Tom Lane <[email protected]>\n>>>\n>>>>> There's an open transaction somewhere that VACUUM is preserving\n>>>>> the tuples for.  This transaction need not ever have touched the\n>>>>> table, or ever intend to touch the table --- but VACUUM cannot\n>>>>> know that, so it saves any tuples that the transaction might be\n>>>>> entitled to see if it looked.\n>>>>>\n>>>>> > carmen=# select * from vlocks where relname='tp93t'; select *\n>>>>> > from pg_stat_activity where usename='webpa';\n>>>>>\n>>>>> You keep on showing us only subsets of pg_stat_activity :-(\n>>>\n>>>> select * from pg_stat_activity where usename='webpa';\n>>>\n>>> You keep on showing us only subsets of pg_stat_activity :-(\n>>>\n>>> *ANY* open transaction, including \"idle in transaction\" including\n>>> transactions by other users in other databases will prevent vacuum\n>>> from cleaning up rows, for the reasons Tom already gave you.\n>>>\n>>> What do you get from?:\n>>>\n>>> select * from pg_stat_activity where current_query <> '<IDLE>'\n>>>   order by xact_start limit 10;\n>>>\n>>> -Kevin\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>", "msg_date": "Sat, 21 Aug 2010 10:49:25 -0300", "msg_from": "Alexandre de Arruda Paes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "On Sat, Aug 21, 2010 at 9:49 AM, Alexandre de Arruda Paes\n<[email protected]> wrote:\n> Only for discussion: the CLUSTER command, in my little knowledge, is a\n> intrusive command that's cannot recover the dead tuples too.\n>\n> Only TRUNCATE can do this job, but obviously is not applicable all the time.\n\nEither VACUUM or CLUSTER will recover *dead* tuples. What you can't\nrecover are tuples that are still visible to some running transaction.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Sun, 22 Aug 2010 07:35:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "The problem here is that we're trying to keep an image of a whole\nworld for any transaction which is in most cases will need to get a\nlook on few streets around.. 
;-)\nI understand well that it's respecting the standard and so on, but the\nbackground problem that you may see your table bloated just because\nthere is a long running transaction appeared in another database, and\nif it's maintained/used/etc by another team - the problem very quickly\nmay become human rather technical :-))\n\nSo, why simply don't add a FORCE option to VACUUM?.. - In this case if\none executes \"VACUUM FORCE TABLE\" will be just aware about what he's\ndoing and be sure no one of the active transactions will be ever\naccess this table.\n\nWhat do you think?.. ;-)\n\nRgds,\n-Dimitri\n\n\nOn 8/22/10, Robert Haas <[email protected]> wrote:\n> On Sat, Aug 21, 2010 at 9:49 AM, Alexandre de Arruda Paes\n> <[email protected]> wrote:\n>> Only for discussion: the CLUSTER command, in my little knowledge, is a\n>> intrusive command that's cannot recover the dead tuples too.\n>>\n>> Only TRUNCATE can do this job, but obviously is not applicable all the\n>> time.\n>\n> Either VACUUM or CLUSTER will recover *dead* tuples. What you can't\n> recover are tuples that are still visible to some running transaction.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 22 Aug 2010 18:17:23 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "Dimitri wrote:\n> I understand well that it's respecting the standard and so on, but the\n> background problem that you may see your table bloated just because\n> there is a long running transaction appeared in another database, and\n> if it's maintained/used/etc by another team - the problem very quickly\n> may become human rather technical :-))\n> \n\nThe way VACUUM and autovacuum work by default, it's OK to expect just \nover 20% of the database rows to be bloat from dead rows. On some \nsystems that much overhead is still too much, but on others the system \ncontinues to operate just fine with that quantity of bloat. It's not \nunreasonable, and is recoverable once the long running transaction finishes.\n\nIf your application has a component to it that allows a transaction to \nrun for so long that more than 20% of a table can be dead before it \ncompletes, you have a technical problem. The technical solution may not \nbe simple or obvious, but you need to find one--not say \"the person \nshouldn't have done that\". Users should never have gotten an API \nexposed to them where it's possible for them to screw things up that \nbadly. The usual first round of refactoring here is to figuring out how \nto break transactions into smaller chunks usefully, which tends to \nimprove other performance issues too, and then they don't run for so \nlong either.\n\n> So, why simply don't add a FORCE option to VACUUM?.. - In this case if\n> one executes \"VACUUM FORCE TABLE\" will be just aware about what he's\n> doing and be sure no one of the active transactions will be ever\n> access this table.\n> \n\nSee above. If you've gotten into this situation, you do not need a \nbetter hammer to smack the part of the server that is stuck. One would \nbe almost impossible to build, and have all sorts of side effects it's \ncomplicated to explain. 
It's far simpler to just avoid to known and \ncommon design patterns that lead to this class of problem in the first \nplace. This is a database application coding problem, not really a \ndatabase internals one.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 22 Aug 2010 13:53:04 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" }, { "msg_contents": "Alexandre de Arruda Paes wrote:\n> Unfortunately, the customer can't wait for the solution and the \n> programmer eliminated the\n> use of this table by using a in-memory array.\n\nWell that will be fun. Now they've traded their old problem for a new \none--cache inconsistency between the data in memory and what sitting in \nthe database. The fun apart about that is that the cache mismatch bugs \nyou'll run into are even more subtle, frustrating, and difficult to \nreplicate on demand than the VACUUM ones.\n\n> Only for discussion: the CLUSTER command, in my little knowledge, is a \n> intrusive command that's cannot recover the dead tuples too.\n> Only TRUNCATE can do this job, but obviously is not applicable all the \n> time.\n\nYes, CLUSTER takes a full lock on the table and rewrites a new one with \nall the inactive data removed. The table is unavailable to anyone else \nwhile that's happening.\n\nSome designs separate their data into partitions in a way that it's \npossible to TRUNCATE/DROP the ones that are no longer relevant (and are \npossibly filled with lots of dead rows) in order to clean them up \nwithout using VACUUM. This won't necessarily help with long-running \ntransactions though. If those are still referring to do data in those \nold partitions, removing them will be blocked for the same reason VACUUM \ncan't clean up inside of them--they data is still being used by an \nactive transaction.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 22 Aug 2010 14:00:49 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Vacuum Full + Cluster + Vacuum full = non removable dead\n rows" } ]
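The practical check this thread converges on is to find the session holding the oldest open transaction, since VACUUM can only reclaim rows that died before that transaction started. A minimal diagnostic sketch along the lines of the query Kevin posted, assuming the pg_stat_activity column names of the 8.4/9.0 era (procpid, xact_start, current_query):

SELECT datname, procpid, usename,
       now() - xact_start AS xact_age,
       current_query
  FROM pg_stat_activity
 WHERE xact_start IS NOT NULL
 ORDER BY xact_start
 LIMIT 10;

-- Sessions showing "<IDLE> in transaction" with a large xact_age are the
-- usual culprits: until they commit or roll back, tuples they might still
-- be entitled to see cannot be removed by VACUUM or CLUSTER.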
[ { "msg_contents": "Mark Kirkwood wrote:\n \n> I'm guessing you meant to suggest setting effective_cache_size \n> to 15GB (not 15MB)....\n \nYes. Sorry about that.\n \n-Kevin\n\n", "msg_date": "Tue, 17 Aug 2010 22:08:25 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance" } ]
[ { "msg_contents": "Hi,\n\nAre indices for columns marked with \"PRIMARY KEY\" automatically generated by\npostgresql, or do I have to do it manually?\nThe question might seem dumb, I ask because I remember from working with\nMySQL it generates indices automatically in this case.\n\nThank you in advance, Clemens\n\nHi,Are indices for columns marked with \"PRIMARY KEY\" automatically generated by postgresql, or do I have to do it manually?The question might seem dumb, I ask because I remember from working with MySQL it generates indices automatically in this case.\nThank you in advance, Clemens", "msg_date": "Wed, 18 Aug 2010 15:51:22 +0200", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": true, "msg_subject": "Are Indices automatically generated for primary keys?" }, { "msg_contents": "On Wed, Aug 18, 2010 at 03:51:22PM +0200, Clemens Eisserer wrote:\n> Hi,\n> \n> Are indices for columns marked with \"PRIMARY KEY\" automatically generated by\n> postgresql, or do I have to do it manually?\n> The question might seem dumb, I ask because I remember from working with\n> MySQL it generates indices automatically in this case.\n\nthey are generated automatically.\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Wed, 18 Aug 2010 15:54:56 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary keys?" }, { "msg_contents": "Clemens Eisserer <[email protected]> wrote:\n \n> Are indices for columns marked with \"PRIMARY KEY\" automatically\n> generated by postgresql, or do I have to do it manually?\n \nIf you look at the documentation page for CREATE TABLE, you'll see\nthe following:\n \n| PostgreSQL automatically creates an index for each unique\n| constraint and primary key constraint to enforce uniqueness. Thus,\n| it is not necessary to create an index explicitly for primary key\n| columns. (See CREATE INDEX for more information.)\n \nhttp://www.postgresql.org/docs/current/interactive/sql-createtable.html\n \nThere's a lot of information on the page, but if you use your\nbrowser to search for PRIMARY KEY within the page, it's not too hard\nto find.\n \nAlso, if you create a primary key or a unique constraint on a table,\nyou should see a notice informing you of the creation of the index,\nand its name.\n \n-Kevin\n", "msg_date": "Wed, 18 Aug 2010 08:59:34 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary\n\t keys?" }, { "msg_contents": "Hi,\n\n> they are generated automatically.\n\nThanks depesz!\nThe reason why I asked was because pgAdmin doesn't display the\nautomatically created indices, which confused me.\n\nThanks, Clemens\n\nPS:\n\n> If you look at the documentation page for CREATE TABLE, you'll see\n> the following ..... 
but if you use your\n> browser to search for PRIMARY KEY within the page, it's not too hard\n> to find.\nIts quite harsh to imply I didn't look for documentation.\nI looked at the \"Indexes and ORDER BY\" which doesn't mention it, or\nI've overlook it.\nDoesn't make a difference anyway.\n\n> Also, if you create a primary key or a unique constraint on a table,\n> you should see a notice informing you of the creation of the index,\n> and its name.\nI use Hibernate, and it generates the DDL for me.\nEven with debug/DDL/SQL-output enabled, I don't get any hint that an\nindex was created.\n", "msg_date": "Wed, 18 Aug 2010 16:15:26 +0200", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are Indices automatically generated for primary keys?" }, { "msg_contents": "Clemens Eisserer <[email protected]> wrote:\n \n> Its quite harsh to imply I didn't look for documentation.\n \nSorry; I didn't mean to be harsh. PostgreSQL has excellent\ndocumentation, and we strive to make it better all the time. \nSometimes people coming from some other products aren't used to that\n-- I was just trying to point you in the direction of being able to\nfind things in the future, to save you trouble and delay.\n \n> I looked at the \"Indexes and ORDER BY\" which doesn't mention it,\n> or I've overlook it.\n> Doesn't make a difference anyway.\n \nWell, it very much does make a difference, because when someone\nmakes the effort to find something in our documentation, and in\nspite of their best efforts they can't, we tend to consider that a\nbug in the documentation. I'll take a look at the page you\nmentioned and see if I can work in a suitable reference. I'm sure\nyou can see, though, why the *main* place it was documented was the\nstatement which is generally used to create a primary key.\n \nThanks for responding with the info on where you looked.\n \n-Kevin\n", "msg_date": "Wed, 18 Aug 2010 09:24:37 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary\n\t keys?" }, { "msg_contents": "Hi Kevin,\n\n> Sorry; I didn't mean to be harsh.\nI also overreacted, sorry about that.\n\nIndeed the documentation is well done, as is the software itself =)\n\nThanks, Clemens\n\n> Sometimes people coming from some other products aren't used to that\n> -- I was just trying to point you in the direction of being able to\n> find things in the future, to save you trouble and delay.\n>\n>> I looked at the \"Indexes and ORDER BY\" which doesn't mention it,\n>> or I've overlook it.\n>> Doesn't make a difference anyway.\n>\n> Well, it very much does make a difference, because when someone\n> makes the effort to find something in our documentation, and in\n> spite of their best efforts they can't, we tend to consider that a\n> bug in the documentation.  I'll take a look at the page you\n> mentioned and see if I can work in a suitable reference.  I'm sure\n> you can see, though, why the *main* place it was documented was the\n> statement which is generally used to create a primary key.\n>\n> Thanks for responding with the info on where you looked.\n>\n> -Kevin\n>\n", "msg_date": "Wed, 18 Aug 2010 16:33:43 +0200", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Are Indices automatically generated for primary keys?" 
}, { "msg_contents": "Clemens Eisserer <[email protected]> wrote:\n \n> I looked at the \"Indexes and ORDER BY\" which doesn't mention it\n \nThe doesn't seem like an appropriate place to discuss when indexes\nare created. Do you think you'd have found a mention in the\nIntroduction page for indexes? Since we talk about the CREATE INDEX\nstatement there, it seems reasonable to me to add a mention of where\nthey are automatically created by constraints, too.\n \nDid you try the documentation index? If so, where did you look?\n \n-Kevin\n", "msg_date": "Wed, 18 Aug 2010 09:52:57 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary\n\t keys?" }, { "msg_contents": "On 18 August 2010 17:06, Justin Graf <[email protected]> wrote:\n> On 8/18/2010 9:15 AM, Clemens Eisserer wrote:\n>> Hi,\n>>\n>>\n>>> they are generated automatically.\n>>>\n>> Thanks depesz!\n>> The reason why I asked was because pgAdmin doesn't display the\n>> automatically created indices, which confused me.\n>>\n>> Thanks, Clemens\n>>\n> PGAdmin caches all database layout locally, the tree view can get very\n> stale.  So refresh the treeview with either F5 or right click an item in\n> the treeview click refresh to rebuild the list.\n>\n\nI don't think PgAdmin will display indexes created by primary keys,\nonly if indisprimary is false.\n\n-- \nThom Brown\nRegistered Linux user: #516935\n", "msg_date": "Wed, 18 Aug 2010 16:23:17 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary keys?" }, { "msg_contents": "On 8/18/2010 9:15 AM, Clemens Eisserer wrote:\n> Hi,\n>\n>\n>> they are generated automatically.\n>>\n> Thanks depesz!\n> The reason why I asked was because pgAdmin doesn't display the\n> automatically created indices, which confused me.\n>\n> Thanks, Clemens\n>\nPGAdmin caches all database layout locally, the tree view can get very \nstale. So refresh the treeview with either F5 or right click an item in \nthe treeview click refresh to rebuild the list.\n\n\n\n**snip***\n\n\n\n\nAll legitimate Magwerks Corporation quotations are sent in a .PDF file attachment with a unique ID number generated by our proprietary quotation system. Quotations received via any other form of communication will not be honored.\n\nCONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally privileged, confidential or other information proprietary to Magwerks Corporation and is intended solely for the use of the individual to whom it addresses. If the reader of this e-mail is not the intended recipient or authorized agent, the reader is hereby notified that any unauthorized viewing, dissemination, distribution or copying of this e-mail is strictly prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and destroy all occurrences of this e-mail immediately.\nThank you.\n\n", "msg_date": "Wed, 18 Aug 2010 11:06:39 -0500", "msg_from": "Justin Graf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary keys?" 
}, { "msg_contents": "On 18/08/2010 17:23, Thom Brown wrote:\n> On 18 August 2010 17:06, Justin Graf <[email protected]> wrote:\n>> On 8/18/2010 9:15 AM, Clemens Eisserer wrote:\n>>> Hi,\n>>>\n>>>\n>>>> they are generated automatically.\n>>>>\n>>> Thanks depesz!\n>>> The reason why I asked was because pgAdmin doesn't display the\n>>> automatically created indices, which confused me.\n>>>\n>>> Thanks, Clemens\n>>>\n>> PGAdmin caches all database layout locally, the tree view can get very\n>> stale. So refresh the treeview with either F5 or right click an item in\n>> the treeview click refresh to rebuild the list.\n>>\n> \n> I don't think PgAdmin will display indexes created by primary keys,\n> only if indisprimary is false.\n> \n\npgAdmin doesn't display indexes for primary keys and unique constraints.\nThese objects are already displayed in the constraints nodes. The fact\nthat they use an index to enforce the constraints is an implementation\ndetail.\n\n\n-- \nGuillaume\n http://www.postgresql.fr\n http://dalibo.com\n", "msg_date": "Wed, 25 Aug 2010 08:58:59 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are Indices automatically generated for primary keys?" } ]
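A small sketch of the behaviour discussed in this thread, using a hypothetical table name. The NOTICE wording is roughly what servers of this era emit, and the catalog query lists the implicit index that pgAdmin files under the constraint node rather than under indexes:

CREATE TABLE demo_pk (id integer PRIMARY KEY, val text);
-- NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
--          "demo_pk_pkey" for table "demo_pk"

-- The backing index exists even if a GUI client hides it:
SELECT indexname, indexdef
  FROM pg_indexes
 WHERE tablename = 'demo_pk';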
[ { "msg_contents": "I am having severe COPY performance issues after adding indices. What used\nto take a few minutes (without indices) now takes several hours (with\nindices). I've tried to tweak the database configuration (based on Postgres\ndocumentation and forums), but it hasn't helped as yet. Perhaps, I haven't\nincreased the limits sufficiently. Dropping and recreating indices may not\nbe an option due to a long time it takes to rebuild all indices.\n\nI'll appreciate someone looking at my configuration and giving me a few\nideas on how to increase the copy performance.\n\nThanks.\nSaadat.\n\nTable structure:\n===========\ntable C:\n Table \"public.C\"\n Column | Type | Modifiers\n----------+------------------+-----------\n sclk | double precision | not null\n chan | smallint | not null\n det | smallint | not null\n x | real | not null\n y | real | not null\n z | real | not null\n r | real |\n t | real |\n lat | real |\n lon | real |\n a | real |\n b | real |\n c | real |\n time | real |\n qa | smallint | not null\n qb | smallint | not null\n qc | smallint | not null\nIndexes:\n \"C_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n\n\npartitioned into *19* sub-tables covering lat bands. For example:\n\nsub-table C0:\n Inherits: C\n Check constraints:\n \"C0_lat_check\" CHECK (lat >= (-10::real) AND lat < 0::real)\n Indexes:\n \"C0_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n \"C0_lat\" btree (lat)\n \"C0_time\" btree (time)\n \"C0_lon\" btree (lon)\n\nsub-table C1:\n Inherits: C\n Check constraints:\n \"C1_lat_check\" CHECK (lat >= (-20::real) AND lat < -10::real)\n Indexes:\n \"C1_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n \"C1_lat\" btree (lat)\n \"C1_time\" btree (time)\n \"C1_lon\" btree (lon)\n\nThe partitions C?s are ~30G (328,000,000 rows) each except one, which is\n~65G (909,000,000 rows). There are no rows in umbrella table C from which\nC1, C2, ..., C19 inherit. The data is partitioned in C1, C2, ..., C19 in\norder to promote better access. Most people will access the data in C by\nspecifying a lat range. Also, C?s can become quite large over time.\n\nThe COPY operation copies one file per partition, for each of the 19\npartitions. Each file is between 300,000 - 600,000 records.\n\n\nSystem configuration:\n================\n1. RHEL5 x86_64\n2. 32G RAM\n3. 8T RAID5 partition for database on a Dell PERC 5/E controller\n (I understand that I'll never get fast inserts/updates on it based on\n http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change\n to a RAID0+1 for now).\n Database's filesystem is ext4 on LVM on RAID5.\n4. Postgres 8.4.2\n shared_buffers = 10GB\n temp_buffers = 16MB\n work_mem = 2GB\n maintenance_work_mem = 256MB\n max_files_per_process = 1000\n effective_io_concurrency = 3\n wal_buffers = 8MB\n checkpoint_segments = 40\n enable_seqscan = off\n effective_cache_size = 16GB\n5. 
analyze verbose; ran on the database before copy operation\n\nBonnie++ output:\n=============\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\ndbtest 64240M 78829 99 266172 42 47904 6 58410 72 116247 9 767.9\n1\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 256 16229 98 371704 99 20258 36 16115 97 445680 99\n17966 36\ndbtest,64240M,78829,99,266172,42,47904,6,58410,72,116247,9,767.9,1,256,16229,98,371704,99,20258,36,16115,97,445680,99,17966,36\n\nI am having severe COPY performance issues after adding indices. What used to take a few minutes (without indices) now takes several hours (with indices). I've tried to tweak the database configuration (based on Postgres documentation and forums), but it hasn't helped as yet. Perhaps, I haven't increased the limits sufficiently. Dropping and recreating indices may not be an option due to a long time it takes to rebuild all indices.\nI'll appreciate someone looking at my configuration and giving me a few ideas on how to increase the copy performance. Thanks.Saadat.Table structure:===========table C:\n           Table \"public.C\"\n  Column  |       Type       | Modifiers----------+------------------+-----------\n sclk     | double precision | not null chan     | smallint         | not null\n det      | smallint         | not null x        | real             | not null\n y        | real             | not null z        | real             | not null\n r        | real             | t        | real             |\n lat      | real             | lon      | real             |\n a        | real             | b        | real             |\n c        | real             | time     | real             |\n qa       | smallint         | not null qb       | smallint         | not null\n qc       | smallint         | not nullIndexes:\n    \"C_pkey\" PRIMARY KEY, btree (sclk, chan, det)partitioned into 19 sub-tables covering lat bands. For example:\nsub-table C0:   Inherits: C\n   Check constraints:       \"C0_lat_check\" CHECK (lat >= (-10::real) AND lat < 0::real)\n   Indexes:       \"C0_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n       \"C0_lat\" btree (lat)       \"C0_time\" btree (time)       \"C0_lon\" btree (lon)sub-table C1:\n   Inherits: C\n   Check constraints:\n       \"C1_lat_check\" \nCHECK (lat >= (-20::real) AND lat < -10::real)\n   Indexes:\n       \"C1_pkey\" \nPRIMARY KEY, btree (sclk, chan, det)\n       \"C1_lat\" btree \n(lat)\n       \"C1_time\" btree (time)\n       \"C1_lon\" btree (lon)\nThe partitions C?s are ~30G (328,000,000 rows) each except one, which is ~65G (909,000,000 rows). There are no rows in umbrella table C from which C1, C2, ..., C19 inherit. The data is partitioned in C1, C2, ..., C19 in order to promote better access. Most people will access the data in C by specifying a lat range. Also, C?s can become quite large over time.\nThe COPY operation copies one file per partition, for each of the 19 partitions. Each file is between 300,000 - 600,000 records. System configuration:================\n\n1. RHEL5 x86_64\n2. 32G RAM\n3. 
8T RAID5 partition for database on a Dell PERC 5/E controller\n   (I understand that I'll never get fast inserts/updates on it based on    http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change\n\n\n\n    to a RAID0+1 for now).    Database's filesystem is ext4 on LVM on RAID5.\n4. Postgres 8.4.2    shared_buffers = 10GB    temp_buffers = 16MB    work_mem = 2GB    maintenance_work_mem = 256MB    max_files_per_process = 1000    effective_io_concurrency = 3    wal_buffers = 8MB\n\n\n\n    checkpoint_segments = 40    enable_seqscan = off    effective_cache_size = 16GB5. analyze verbose; ran on the database before copy operationBonnie++ output:=============Version  1.03       ------Sequential Output------ --Sequential Input- --Random-\n\n                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CPdbtest    64240M 78829  99 266172  42 47904   6 58410  72 116247   9 767.9   1\n\n                    ------Sequential Create------ --------Random Create--------                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP\n\n                256 16229  98 371704  99 20258  36 16115  97 445680  99 17966  36dbtest,64240M,78829,99,266172,42,47904,6,58410,72,116247,9,767.9,1,256,16229,98,371704,99,20258,36,16115,97,445680,99,17966,36", "msg_date": "Wed, 18 Aug 2010 09:25:39 -0700", "msg_from": "Saadat Anwar <[email protected]>", "msg_from_op": true, "msg_subject": "Copy performance issues" }, { "msg_contents": "I am having severe COPY performance issues after adding indices. What used\nto take a few minutes (without indices) now takes several hours (with\nindices). I've tried to tweak the database configuration (based on Postgres\ndocumentation and forums), but it hasn't helped as yet. Perhaps, I haven't\nincreased the limits sufficiently. Dropping and recreating indices may not\nbe an option due to a long time it takes to rebuild all indices.\n\nI'll appreciate someone looking at my configuration and giving me a few\nideas on how to increase the copy performance.\n\nThanks.\nSaadat.\n\nTable structure:\n===========\ntable C:\n Table \"public.C\"\n Column | Type | Modifiers\n----------+------------------+-----------\n sclk | double precision | not null\n chan | smallint | not null\n det | smallint | not null\n x | real | not null\n y | real | not null\n z | real | not null\n r | real |\n t | real |\n lat | real |\n lon | real |\n a | real |\n b | real |\n c | real |\n time | real |\n qa | smallint | not null\n qb | smallint | not null\n qc | smallint | not null\nIndexes:\n \"C_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n\n\npartitioned into *19* sub-tables covering lat bands. For example:\n\nsub-table C0:\n Inherits: C\n Check constraints:\n \"C0_lat_check\" CHECK (lat >= (-10::real) AND lat < 0::real)\n Indexes:\n \"C0_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n \"C0_lat\" btree (lat)\n \"C0_time\" btree (time)\n \"C0_lon\" btree (lon)\n\nsub-table C1:\n Inherits: C\n Check constraints:\n \"C1_lat_check\" CHECK (lat >= (-20::real) AND lat < -10::real)\n Indexes:\n \"C1_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n \"C1_lat\" btree (lat)\n \"C1_time\" btree (time)\n \"C1_lon\" btree (lon)\n\nThe partitions C?s are ~30G (328,000,000 rows) each except one, which is\n~65G (909,000,000 rows). There are no rows in umbrella table C from which\nC1, C2, ..., C19 inherit. 
The data is partitioned in C1, C2, ..., C19 in\norder to promote better access. Most people will access the data in C by\nspecifying a lat range. Also, C?s can become quite large over time.\n\nThe COPY operation copies one file per partition, for each of the 19\npartitions. Each file is between 300,000 - 600,000 records.\n\n\nSystem configuration:\n================\n1. RHEL5 x86_64\n2. 32G RAM\n3. 8T RAID5 partition for database on a Dell PERC 5/E controller\n (I understand that I'll never get fast inserts/updates on it based on\n http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change\n to a RAID0+1 for now).\n Database's filesystem is ext4 on LVM on RAID5.\n4. Postgres 8.4.2\n shared_buffers = 10GB\n temp_buffers = 16MB\n work_mem = 2GB\n maintenance_work_mem = 256MB\n max_files_per_process = 1000\n effective_io_concurrency = 3\n wal_buffers = 8MB\n checkpoint_segments = 40\n enable_seqscan = off\n effective_cache_size = 16GB\n5. analyze verbose; ran on the database before copy operation\n\nBonnie++ output:\n=============\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\ndbtest 64240M 78829 99 266172 42 47904 6 58410 72 116247 9 767.9\n1\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 256 16229 98 371704 99 20258 36 16115 97 445680 99\n17966 36\ndbtest,64240M,78829,99,266172,42,47904,6,58410,72,116247,9,767.9,1,256,16229,98,371704,99,20258,36,16115,97,445680,99,17966,36\n\nI am having severe COPY performance issues after adding indices. What used to take a few minutes (without indices) now takes several hours (with indices). I've tried to tweak the database configuration (based on Postgres documentation and forums), but it hasn't helped as yet. Perhaps, I haven't increased the limits sufficiently. Dropping and recreating indices may not be an option due to a long time it takes to rebuild all indices.\nI'll appreciate someone looking at my configuration and giving me a few ideas on how to increase the copy performance. Thanks.Saadat.Table structure:===========table C:\n           Table \"public.C\"\n  Column  |       Type       | Modifiers----------+------------------+-----------\n sclk     | double precision | not null chan     | smallint         | not null\n det      | smallint         | not null x        | real             | not null\n y        | real             | not null z        | real             | not null\n r        | real             | t        | real             |\n lat      | real             | lon      | real             |\n a        | real             | b        | real             |\n c        | real             | time     | real             |\n qa       | smallint         | not null qb       | smallint         | not null\n qc       | smallint         | not nullIndexes:\n    \"C_pkey\" PRIMARY KEY, btree (sclk, chan, det)partitioned into 19 sub-tables covering lat bands. 
For example:\nsub-table C0:   Inherits: C\n   Check constraints:       \"C0_lat_check\" CHECK (lat >= (-10::real) AND lat < 0::real)\n   Indexes:       \"C0_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n       \"C0_lat\" btree (lat)       \"C0_time\" btree (time)       \"C0_lon\" btree (lon)sub-table C1:\n   Inherits: C\n   Check constraints:\n       \"C1_lat_check\" \nCHECK (lat >= (-20::real) AND lat < -10::real)\n   Indexes:\n       \"C1_pkey\" \nPRIMARY KEY, btree (sclk, chan, det)\n       \"C1_lat\" btree \n(lat)\n       \"C1_time\" btree (time)\n       \"C1_lon\" btree (lon)\nThe partitions C?s are ~30G (328,000,000 rows) each except one, which is ~65G (909,000,000 rows). There are no rows in umbrella table C from which C1, C2, ..., C19 inherit. The data is partitioned in C1, C2, ..., C19 in order to promote better access. Most people will access the data in C by specifying a lat range. Also, C?s can become quite large over time.\nThe COPY operation copies one file per partition, for each of the 19 partitions. Each file is between 300,000 - 600,000 records. System configuration:================\n\n1. RHEL5 x86_64\n2. 32G RAM\n3. 8T RAID5 partition for database on a Dell PERC 5/E controller\n   (I understand that I'll never get fast inserts/updates on it based on    http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change\n\n\n\n\n    to a RAID0+1 for now).    Database's filesystem is ext4 on LVM on RAID5.\n4. Postgres 8.4.2    shared_buffers = 10GB    temp_buffers = 16MB    work_mem = 2GB    maintenance_work_mem = 256MB    max_files_per_process = 1000    effective_io_concurrency = 3    wal_buffers = 8MB\n\n\n\n\n    checkpoint_segments = 40    enable_seqscan = off    effective_cache_size = 16GB5. analyze verbose; ran on the database before copy operationBonnie++ output:=============Version  1.03       ------Sequential Output------ --Sequential Input- --Random-\n\n\n                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CPdbtest    64240M 78829  99 266172  42 47904   6 58410  72 116247   9 767.9   1\n\n\n                    ------Sequential Create------ --------Random Create--------                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP\n\n\n                256 16229  98 371704  99 20258  36 16115  97 445680  99 17966  36dbtest,64240M,78829,99,266172,42,47904,6,58410,72,116247,9,767.9,1,256,16229,98,371704,99,20258,36,16115,97,445680,99,17966,36", "msg_date": "Wed, 18 Aug 2010 13:09:39 -0700", "msg_from": "s anwar <[email protected]>", "msg_from_op": false, "msg_subject": "Copy performance issues" }, { "msg_contents": "Saadat Anwar <[email protected]> writes:\n> I am having severe COPY performance issues after adding indices. What used\n> to take a few minutes (without indices) now takes several hours (with\n> indices). I've tried to tweak the database configuration (based on Postgres\n> documentation and forums), but it hasn't helped as yet. Perhaps, I haven't\n> increased the limits sufficiently. Dropping and recreating indices may not\n> be an option due to a long time it takes to rebuild all indices.\n\nI suspect your problem is basically that the index updates require a\nworking set larger than available RAM, so the machine spends all its\ntime shuffling index pages in and out. 
Can you reorder the input so\nthat there's more locality of reference in the index values?\n\nAlso, my first reaction to that schema is to wonder whether the lat/lon\nindexes are worth anything. What sort of queries are you using them\nfor, and have you considered an rtree/gist index instead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 2010 18:42:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Copy performance issues " }, { "msg_contents": "s anwar wrote:\n> 3. 8T RAID5 partition for database on a Dell PERC 5/E controller\n> (I understand that I'll never get fast inserts/updates on it based on\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change\n> to a RAID0+1 for now).\n> Database's filesystem is ext4 on LVM on RAID5.\n\nAnd LVM slows things down too, so you're stuck with basically the worst \npossible layout here. Have you checked that your PERC is setup with a \nwrite-back cache? If not, that will kill performance. You need to have \na battery for the controller for that to work right. This is the first \nthing I'd check on your system. With ext4, on this controller you might \nneed to use the \"nobarrier\" write option when mounting the filesystem to \nget good performance too. Should be safe if you have a battery on the \ncontroller.\n\n> shared_buffers = 10GB\n> checkpoint_segments = 40\n\nI've gotten some reports that the fall-off where shared_buffers stops \nhelping is lower than this on Linux. I'd suggest at most 8GB, and you \nmight even try something like 4GB just to see if that turns out to be \nbetter.\n\nWith what you're doing, you could likely increase checkpoint_segments \nquite a bit from here too. I'd try something >100 and see if that helps.\n\n> enable_seqscan = off\n\nThat's going to cause you serious trouble one day if you leave it like \nthis in production, on a table where index scans are much more expensive \nthan sequential ones for what you're doing.\n\nOne thing that can help large COPYies a lot is to increase Linux \nread-ahead. Something like:\n\n/sbin/blockdev --setra 4096 /dev/sda\n\nDone for each physical drive the system sees will help large sequential \nreads of files the way COPY does significantly. The default is probably \n256 on your system given the LVM setup, possibly even smaller.\n\nBeyond that, you may just need to investigate clever ways to reduce the \nindexing requirements.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 18 Aug 2010 18:49:50 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Copy performance issues" }, { "msg_contents": "On Wed, Aug 18, 2010 at 3:42 PM, Tom Lane <[email protected]> wrote:\n\n> Saadat Anwar <[email protected]> writes:\n> > I am having severe COPY performance issues after adding indices. What\n> used\n> > to take a few minutes (without indices) now takes several hours (with\n> > indices). I've tried to tweak the database configuration (based on\n> Postgres\n> > documentation and forums), but it hasn't helped as yet. Perhaps, I\n> haven't\n> > increased the limits sufficiently. Dropping and recreating indices may\n> not\n> > be an option due to a long time it takes to rebuild all indices.\n>\n> I suspect your problem is basically that the index updates require a\n> working set larger than available RAM, so the machine spends all its\n> time shuffling index pages in and out. 
Can you reorder the input so\n> that there's more locality of reference in the index values?\n>\n> I can potentially reorder the data so that it has locality of reference\nw.r.t. one index, but not all. Or did I not interpret your response\ncorrectly?\n\nAlso, my first reaction to that schema is to wonder whether the lat/lon\n> indexes are worth anything. What sort of queries are you using them\n> for, and have you considered an rtree/gist index instead?\n>\n> I always assumed that the btree indices on individual fields were smaller\nand more efficient as compared to the rtree/gist indices. Is that not the\ncase? And since the users did not need points and point-queries, I decided\nin the favor of indexing individual fields.\n\n\n> regards, tom lane\n>\n\n\nThanks.\nSaadat.\n\nOn Wed, Aug 18, 2010 at 3:42 PM, Tom Lane <[email protected]> wrote:\nSaadat Anwar <[email protected]> writes:\n> I am having severe COPY performance issues after adding indices. What used\n> to take a few minutes (without indices) now takes several hours (with\n> indices). I've tried to tweak the database configuration (based on Postgres\n> documentation and forums), but it hasn't helped as yet. Perhaps, I haven't\n> increased the limits sufficiently. Dropping and recreating indices may not\n> be an option due to a long time it takes to rebuild all indices.\n\nI suspect your problem is basically that the index updates require a\nworking set larger than available RAM, so the machine spends all its\ntime shuffling index pages in and out.  Can you reorder the input so\nthat there's more locality of reference in the index values?\nI can potentially reorder the data so that it has locality of reference w.r.t. one index, but not all. Or did I not interpret your response correctly?\n\nAlso, my first reaction to that schema is to wonder whether the lat/lon\nindexes are worth anything.  What sort of queries are you using them\nfor, and have you considered an rtree/gist index instead?\nI always assumed that the btree indices on individual fields were smaller and more efficient as compared to the rtree/gist indices. Is that not the case? And since the users did not need points and point-queries, I decided in the favor of indexing individual fields.\n \n                        regards, tom lane\nThanks.Saadat.", "msg_date": "Wed, 18 Aug 2010 15:59:52 -0700", "msg_from": "s anwar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Copy performance issues" }, { "msg_contents": "Hi,\n\nTry to split your data in small batches. It helped me in a similar\nsituation recently. I was loading about a million rows into the table\nhighly populated with indexes and different triggers and the batch\nsize was 100 (using COPY). The only thing I did with DDL is droped FKs\nand recreated them after.\n\nBTW question to gurus - why and in what cases small batch loading\ncould theoretically be faster then huge one if there is no another\nload on the database but this?\n\n\nOn 18 August 2010 20:25, Saadat Anwar <[email protected]> wrote:\n> I am having severe COPY performance issues after adding indices. What used\n> to take a few minutes (without indices) now takes several hours (with\n> indices). I've tried to tweak the database configuration (based on Postgres\n> documentation and forums), but it hasn't helped as yet. Perhaps, I haven't\n> increased the limits sufficiently. 
Dropping and recreating indices may not\n> be an option due to a long time it takes to rebuild all indices.\n>\n> I'll appreciate someone looking at my configuration and giving me a few\n> ideas on how to increase the copy performance.\n>\n> Thanks.\n> Saadat.\n>\n> Table structure:\n> ===========\n> table C:\n>            Table \"public.C\"\n>   Column  |       Type       | Modifiers\n> ----------+------------------+-----------\n>  sclk     | double precision | not null\n>  chan     | smallint         | not null\n>  det      | smallint         | not null\n>  x        | real             | not null\n>  y        | real             | not null\n>  z        | real             | not null\n>  r        | real             |\n>  t        | real             |\n>  lat      | real             |\n>  lon      | real             |\n>  a        | real             |\n>  b        | real             |\n>  c        | real             |\n>  time     | real             |\n>  qa       | smallint         | not null\n>  qb       | smallint         | not null\n>  qc       | smallint         | not null\n> Indexes:\n>     \"C_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n>\n>\n> partitioned into 19 sub-tables covering lat bands. For example:\n>\n> sub-table C0:\n>    Inherits: C\n>    Check constraints:\n>        \"C0_lat_check\" CHECK (lat >= (-10::real) AND lat < 0::real)\n>    Indexes:\n>        \"C0_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n>        \"C0_lat\" btree (lat)\n>        \"C0_time\" btree (time)\n>        \"C0_lon\" btree (lon)\n>\n> sub-table C1:\n>    Inherits: C\n>    Check constraints:\n>        \"C1_lat_check\" CHECK (lat >= (-20::real) AND lat < -10::real)\n>    Indexes:\n>        \"C1_pkey\" PRIMARY KEY, btree (sclk, chan, det)\n>        \"C1_lat\" btree (lat)\n>        \"C1_time\" btree (time)\n>        \"C1_lon\" btree (lon)\n>\n> The partitions C?s are ~30G (328,000,000 rows) each except one, which is\n> ~65G (909,000,000 rows). There are no rows in umbrella table C from which\n> C1, C2, ..., C19 inherit. The data is partitioned in C1, C2, ..., C19 in\n> order to promote better access. Most people will access the data in C by\n> specifying a lat range. Also, C?s can become quite large over time.\n>\n> The COPY operation copies one file per partition, for each of the 19\n> partitions. Each file is between 300,000 - 600,000 records.\n>\n>\n> System configuration:\n> ================\n> 1. RHEL5 x86_64\n> 2. 32G RAM\n> 3. 8T RAID5 partition for database on a Dell PERC 5/E controller\n>    (I understand that I'll never get fast inserts/updates on it based on\n>     http://wiki.postgresql.org/wiki/SlowQueryQuestions but cannot change\n>     to a RAID0+1 for now).\n>     Database's filesystem is ext4 on LVM on RAID5.\n> 4. Postgres 8.4.2\n>     shared_buffers = 10GB\n>     temp_buffers = 16MB\n>     work_mem = 2GB\n>     maintenance_work_mem = 256MB\n>     max_files_per_process = 1000\n>     effective_io_concurrency = 3\n>     wal_buffers = 8MB\n>     checkpoint_segments = 40\n>     enable_seqscan = off\n>     effective_cache_size = 16GB\n> 5. 
analyze verbose; ran on the database before copy operation\n>\n> Bonnie++ output:\n> =============\n> Version  1.03       ------Sequential Output------ --Sequential Input-\n> --Random-\n>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec\n> %CP\n> dbtest    64240M 78829  99 266172  42 47904   6 58410  72 116247   9 767.9\n> 1\n>                     ------Sequential Create------ --------Random\n> Create--------\n>                     -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec\n> %CP\n>                 256 16229  98 371704  99 20258  36 16115  97 445680  99\n> 17966  36\n> dbtest,64240M,78829,99,266172,42,47904,6,58410,72,116247,9,767.9,1,256,16229,98,371704,99,20258,36,16115,97,445680,99,17966,36\n>\n>\n>\n\n\n\n-- \nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com /\nLinkedin: http://ru.linkedin.com/in/grayhemp /\nJID/GTalk: [email protected] / Skype: gray-hemp / ICQ: 29353802\n", "msg_date": "Fri, 20 Aug 2010 11:25:11 +0400", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Copy performance issues" } ]
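One hedged way to act on the reorder-the-input suggestion above is to stage each batch in an unindexed temporary table and insert it into the partition in primary-key order, so at least the (sclk, chan, det) btree is touched with good locality; as noted upthread, only one index can benefit from any given ordering. A sketch, with the partition and file names as placeholders:

BEGIN;
CREATE TEMP TABLE c0_stage (LIKE c0 INCLUDING DEFAULTS) ON COMMIT DROP;
COPY c0_stage FROM '/path/to/c0_batch.copy';
-- Insert in primary-key order so index page accesses cluster together
INSERT INTO c0
SELECT * FROM c0_stage
ORDER BY sclk, chan, det;
COMMIT;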
[ { "msg_contents": "I'm just starting the process of trying to tune new hardware, which is\n2x quad core xeon, 48GB RAM, 8x300GB SAS 15K drives in RAID 1+0,\n2x72GB 15K SAS drives in RAID 1 for WAL and system. It is a PERC 6/i\ncard with BBU. Write-back cache is enabled. The system volume is\next3. The large data partition is ext4.\n\ncurrent config changes are as follows (but I've been experimenting\nwith a variety of settings):\n\ndefault_statistics_target = 50 # pgtune wizard 2010-08-17\nmaintenance_work_mem = 1GB # pgtune wizard 2010-08-17\nconstraint_exclusion = on # pgtune wizard 2010-08-17\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-08-17\neffective_cache_size = 36GB # sam\nwork_mem = 288MB # pgtune wizard 2010-08-17\nwal_buffers = 8MB # pgtune wizard 2010-08-17\n#checkpoint_segments = 16 # pgtune wizard 2010-08-17\ncheckpoint_segments = 30 # sam\nshared_buffers = 11GB # pgtune wizard 2010-08-17\nmax_connections = 80 # pgtune wizard 2010-08-17\ncpu_tuple_cost = 0.0030 # sam\ncpu_index_tuple_cost = 0.0010 # sam\ncpu_operator_cost = 0.0005 # sam\n#random_page_cost = 2.0 # sam\n\nIt will eventually be a mixed-use db, but the OLTP load is very light.\n ETL for the warehouse side of things does no updates or deletes.\nJust inserts and partition drops. I know that\ndefault_statistics_target isn't right for a warehouse workload, but I\nhaven't gotten to the point of tuning with a production workload, yet,\nso I'm leaving the pgtune default.\n\nWhen running pgbench on a db which fits easily into RAM (10% of RAM =\n-s 380), I see transaction counts a little less than 5K. When I go to\n90% of RAM (-s 3420), transaction rate dropped to around 1000 ( at a\nfairly wide range of concurrencies). At that point, I decided to\ninvestigate the performance impact of write barriers. I tried building\nand running the test_fsync utility from the source distribution but\nreally didn't understand how to interpret the results. So I just\ntried the same test with write barriers on and write barriers off (on\nboth volumes).\n\nWith barriers off, I saw a transaction rate of about 1200. With\nbarriers on, it was closer to 1050. The test had a concurrency of 40\nin both cases. From what I understand of the write barrier problem, a\nmisbehaving controller will flush the cache to disk with every\nbarrier, so I assume performance would drop a heck of a lot more than\n13%. I assume the relatively small performance reduction is just\ncontention on the write barriers between the 40 backends. I was\nhoping someone could confirm this (I could test on and off with lower\nconcurrency, of course, but that will take hours to complete). It\noccurs to me that the relatively small drop in performance may also be\nthe result of the relatively small db size. Our actual production db\nis likely to be closer to 200% of RAM, but the most active data should\nbe a lot closer to 90% of RAM. Anyway, I could test all of this, but\nthe testing takes so long (I'm running 30 minutes per test in order to\nget any kind of consistency of results) that it is likely faster to\njust ask the experts.\n\nI'd also welcome any other tuning suggestions.\n\nThanks\n\n--sam\n", "msg_date": "Wed, 18 Aug 2010 12:24:28 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "write barrier question" }, { "msg_contents": "On 8/18/10 12:24 PM, Samuel Gendler wrote:\n> With barriers off, I saw a transaction rate of about 1200. With\n> barriers on, it was closer to 1050. 
The test had a concurrency of 40\n> in both cases.\n\nI discovered there is roughly 10-20% \"noise\" in pgbench results after running the exact same test over a 24-hour period on a machine with no other activity. Be sure you run your tests enough times to get good statistics unless you're looking at much larger differences.\n\nCraig\n", "msg_date": "Wed, 18 Aug 2010 12:56:57 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: write barrier question" }, { "msg_contents": "Samuel Gendler wrote:\n> When running pgbench on a db which fits easily into RAM (10% of RAM =\n> -s 380), I see transaction counts a little less than 5K. When I go to\n> 90% of RAM (-s 3420), transaction rate dropped to around 1000 ( at a\n> fairly wide range of concurrencies). At that point, I decided to\n> investigate the performance impact of write barriers.\nAt 90% of RAM you're probable reading data as well, not only writing. \nWatching iostat -xk 1 or vmstat 1 during a test should confirm this. To \nfind the maximum database size that fits comfortably in RAM you could \ntry out http://github.com/gregs1104/pgbench-tools - my experience with \nit is that it takes less than 10 minutes to setup and run and after some \ntime you get rewarded with nice pictures! :-)\n\nregards,\nYeb Havinga\n", "msg_date": "Wed, 18 Aug 2010 22:25:27 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: write barrier question" }, { "msg_contents": "I am. I was giving mean numbers\n\nOn Wed, Aug 18, 2010 at 12:56 PM, Craig James\n<[email protected]> wrote:\n> On 8/18/10 12:24 PM, Samuel Gendler wrote:\n>>\n>> With barriers off, I saw a transaction rate of about 1200.  With\n>> barriers on, it was closer to 1050.  The test had a concurrency of 40\n>> in both cases.\n>\n> I discovered there is roughly 10-20% \"noise\" in pgbench results after\n> running the exact same test over a 24-hour period on a machine with no other\n> activity.  Be sure you run your tests enough times to get good statistics\n> unless you're looking at much larger differences.\n>\n> Craig\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 18 Aug 2010 15:05:27 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: write barrier question" }, { "msg_contents": "On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga <[email protected]> wrote:\n> Samuel Gendler wrote:\n>>\n>> When running pgbench on a db which fits easily into RAM (10% of RAM =\n>> -s 380), I see transaction counts a little less than 5K.  When I go to\n>> 90% of RAM (-s 3420), transaction rate dropped to around 1000 ( at a\n>> fairly wide range of concurrencies).  At that point, I decided to\n>> investigate the performance impact of write barriers.\n>\n> At 90% of RAM you're probable reading data as well, not only writing.\n> Watching iostat -xk 1 or vmstat 1 during a test should confirm this. To find\n> the maximum database size that fits comfortably in RAM you could try out\n> http://github.com/gregs1104/pgbench-tools - my experience with it is that it\n> takes less than 10 minutes to setup and run and after some time you get\n> rewarded with nice pictures! :-)\n\nYes. 
I've intentionally sized it at 90% precisely so that I am\nreading as well as writing, which is what an actual production\nenvironment will resemble.\n", "msg_date": "Wed, 18 Aug 2010 15:06:43 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: write barrier question" }, { "msg_contents": "2010/8/19 Samuel Gendler <[email protected]>:\n> On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga <[email protected]> wrote:\n>> Samuel Gendler wrote:\n>>>\n>>> When running pgbench on a db which fits easily into RAM (10% of RAM =\n>>> -s 380), I see transaction counts a little less than 5K.  When I go to\n>>> 90% of RAM (-s 3420), transaction rate dropped to around 1000 ( at a\n>>> fairly wide range of concurrencies).  At that point, I decided to\n>>> investigate the performance impact of write barriers.\n\nI have just tested a similar hardware (32GB RAM), I have around 15k\ntps with a database wich fit in RAM (around 22GB DB).\nThe request and schema are very simple.\n\nMy advice is too use a pgbouncer on a separate server as well as any\nbenchmark. For very heavy bench I use 'tsung' benchmark tool.\nAlso some of your postgresql.conf params looks strange : shared_buffer\n>8GB is probably working less well.\n\nYou may want to fine tune OS for Dirty cache (and dirty write) and/or\nincrease bgwriter agressivity.\n\nIt may happen that your overall performance are better with a mount -o\nsync option ! (depend of your write usage, see iostat -x 2, especially\nthe Wtps, during checkpoints) Perc6i handle well the write cache and\nyou can use 'nobarrier' mount option too but mounting the filesystem\n'sync' is not that bad (because the perc6 will buffer).\n\n>>\n>> At 90% of RAM you're probable reading data as well, not only writing.\n>> Watching iostat -xk 1 or vmstat 1 during a test should confirm this. To find\n>> the maximum database size that fits comfortably in RAM you could try out\n>> http://github.com/gregs1104/pgbench-tools - my experience with it is that it\n>> takes less than 10 minutes to setup and run and after some time you get\n>> rewarded with nice pictures! :-)\n>\n> Yes.  I've intentionally sized it at 90% precisely so that I am\n> reading as well as writing, which is what an actual production\n> environment will resemble.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Sat, 28 Aug 2010 12:17:50 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: write barrier question" } ]
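A rough shell sketch of the checks discussed in this thread: confirm the data device's read-ahead, then run pgbench long enough to average out the 10-20% run-to-run noise mentioned above. The device name, database name, scale and run length are placeholders; the 4096 read-ahead value is the one suggested elsewhere on this list:

/sbin/blockdev --getra /dev/sda         # current read-ahead, in 512-byte sectors
/sbin/blockdev --setra 4096 /dev/sda    # larger read-ahead for sequential I/O

pgbench -i -s 3420 bench                # scale of ~90% of RAM on this machine
pgbench -c 40 -T 1800 bench             # 30-minute run at concurrency 40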
[ { "msg_contents": "I've got this explain: http://explain.depesz.com/s/Xh9\n\nAnd these settings:\ndefault_statistics_target = 50 # pgtune wizard 2010-08-17\nmaintenance_work_mem = 1GB # pgtune wizard 2010-08-17\nconstraint_exclusion = on # pgtune wizard 2010-08-17\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-08-17\neffective_cache_size = 36GB # sam\nwork_mem = 288MB # pgtune wizard 2010-08-17\nwal_buffers = 8MB # pgtune wizard 2010-08-17\n#checkpoint_segments = 16 # pgtune wizard 2010-08-17\ncheckpoint_segments = 30 # sam\nshared_buffers = 11GB # pgtune wizard 2010-08-17\nmax_connections = 80 # pgtune wizard 2010-08-17\ncpu_tuple_cost = 0.0030 # sam\ncpu_index_tuple_cost = 0.0010 # sam\ncpu_operator_cost = 0.0005 # sam\n#random_page_cost = 2.0 # sam\n\nI'm not understanding why it is sorting on disk if it would fit within\na work_mem segment - by a fairly wide margin. Is there something else\nI can do to get that sort to happen in memory?\n", "msg_date": "Wed, 18 Aug 2010 22:23:52 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "in-memory sorting" }, { "msg_contents": "Answered my own question. Cranking work_mem up to 350MB revealed that\nthe in-memory sort requires more memory than the disk sort.\n\nOn Wed, Aug 18, 2010 at 10:23 PM, Samuel Gendler\n<[email protected]> wrote:\n> I've got this explain: http://explain.depesz.com/s/Xh9\n>\n> And these settings:\n> default_statistics_target = 50 # pgtune wizard 2010-08-17\n> maintenance_work_mem = 1GB # pgtune wizard 2010-08-17\n> constraint_exclusion = on # pgtune wizard 2010-08-17\n> checkpoint_completion_target = 0.9 # pgtune wizard 2010-08-17\n> effective_cache_size = 36GB # sam\n> work_mem = 288MB # pgtune wizard 2010-08-17\n> wal_buffers = 8MB # pgtune wizard 2010-08-17\n> #checkpoint_segments = 16 # pgtune wizard 2010-08-17\n> checkpoint_segments = 30 # sam\n> shared_buffers = 11GB # pgtune wizard 2010-08-17\n> max_connections = 80 # pgtune wizard 2010-08-17\n> cpu_tuple_cost = 0.0030                 # sam\n> cpu_index_tuple_cost = 0.0010           # sam\n> cpu_operator_cost = 0.0005              # sam\n> #random_page_cost = 2.0                 # sam\n>\n> I'm not understanding why it is sorting on disk if it would fit within\n> a work_mem segment - by a fairly wide margin.  Is there something else\n> I can do to get that sort to happen in memory?\n>\n", "msg_date": "Wed, 18 Aug 2010 22:45:58 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "Hello\n>>\n>> I'm not understanding why it is sorting on disk if it would fit within\n>> a work_mem segment - by a fairly wide margin.  Is there something else\n>> I can do to get that sort to happen in memory?\n>>\n\nPlanner working with estimations. So there is some probability so\nplanner expected a larger result set and used a external sort.\nProbably quick sort takes more memory too. Your statistic are probably\nout of range - system expecting 0.5 mil rec and get 2 mil rec.\n\nRegards\n\nPavel\n", "msg_date": "Thu, 19 Aug 2010 07:55:15 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n<[email protected]> wrote:\n> Answered my own question.  
Cranking work_mem up to 350MB revealed that\n> the in-memory sort requires more memory than the disk sort.\n\nNote that unless you run VERY few client connections, it's usually\nbetter to leave work_mem somewhere in the 1 to 32Meg range and have\nthe connection or user or database that needs 350Meg be set there.\n\nI.e.\n\n<connect>\nset work_mem='512MB';\n<execute query\n\nOR\n\nalter user memoryhog set work_mem='512MB';\n\nOR\n\nalter database memhogdb set work_mem='512MB';\n", "msg_date": "Thu, 19 Aug 2010 00:24:55 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "Yeah, although with 48GB of available memory and not that much concurrency,\nI'm not sure it matters that much. But point taken, I'll see about modifying\nthe app such that work_mem gets set on a per-query basis.\n\n\nOn Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n> <[email protected]> wrote:\n> > Answered my own question. Cranking work_mem up to 350MB revealed that\n> > the in-memory sort requires more memory than the disk sort.\n>\n> Note that unless you run VERY few client connections, it's usually\n> better to leave work_mem somewhere in the 1 to 32Meg range and have\n> the connection or user or database that needs 350Meg be set there.\n>\n> I.e.\n>\n> <connect>\n> set work_mem='512MB';\n> <execute query\n>\n> OR\n>\n> alter user memoryhog set work_mem='512MB';\n>\n> OR\n>\n> alter database memhogdb set work_mem='512MB';\n>\n\nYeah, although with 48GB of available memory and not that much concurrency, I'm not sure it matters that much. But point taken, I'll see about modifying the app such that work_mem gets set on a per-query basis.\nOn Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n<[email protected]> wrote:\n> Answered my own question.  Cranking work_mem up to 350MB revealed that\n> the in-memory sort requires more memory than the disk sort.\n\nNote that unless you run VERY few client connections, it's usually\nbetter to leave work_mem somewhere in the 1 to 32Meg range and have\nthe connection or user or database that needs 350Meg be set there.\n\nI.e.\n\n<connect>\nset work_mem='512MB';\n<execute query\n\nOR\n\nalter user memoryhog set work_mem='512MB';\n\nOR\n\nalter database memhogdb set work_mem='512MB';", "msg_date": "Wed, 18 Aug 2010 23:38:50 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "Exactly, it's about the concurrency. I have a server with 128G ram\nbut it runs dozens of queries at a time for hundreds of clients a\nsecond. The chance that something big for work_mem might jump up and\nbite me are pretty good there. Even so, at 16Meg it's not really big\nfor that machine, and I might test cranking it up. Note that large\nwork_mem can cause the kernel to flush its cache, which means going to\ndisk for everybody's data, and all the queries are slow instead of\none. Keep an eye on how high work_mem affects your kernel cache.\n\nOn Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler\n<[email protected]> wrote:\n> Yeah, although with 48GB of available memory and not that much concurrency,\n> I'm not sure it matters that much. 
But point taken, I'll see about modifying\n> the app such that work_mem gets set on a per-query basis.\n>\n> On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n>> <[email protected]> wrote:\n>> > Answered my own question.  Cranking work_mem up to 350MB revealed that\n>> > the in-memory sort requires more memory than the disk sort.\n>>\n>> Note that unless you run VERY few client connections, it's usually\n>> better to leave work_mem somewhere in the 1 to 32Meg range and have\n>> the connection or user or database that needs 350Meg be set there.\n>>\n>> I.e.\n>>\n>> <connect>\n>> set work_mem='512MB';\n>> <execute query\n>>\n>> OR\n>>\n>> alter user memoryhog set work_mem='512MB';\n>>\n>> OR\n>>\n>> alter database memhogdb set work_mem='512MB';\n>\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Thu, 19 Aug 2010 00:52:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "Incidentally, if I set values on the connection before querying, is there an\neasy way to get things back to default values or will my code need to know\nthe prior value and explicitly set it back? Something like\n\n<get connection from pool>\nset work_mem = '512MB'\nquery\nset value = 'default'\n<return connection to pool>\n\nor maybe\n\n<get connection from pool>\nBEGIN;\nset work_mem='512MB'\nselect query\nROLLBACK;\n<return connection to pool>\n\nOn Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe <[email protected]>wrote:\n\n> Exactly, it's about the concurrency. I have a server with 128G ram\n> but it runs dozens of queries at a time for hundreds of clients a\n> second. The chance that something big for work_mem might jump up and\n> bite me are pretty good there. Even so, at 16Meg it's not really big\n> for that machine, and I might test cranking it up. Note that large\n> work_mem can cause the kernel to flush its cache, which means going to\n> disk for everybody's data, and all the queries are slow instead of\n> one. Keep an eye on how high work_mem affects your kernel cache.\n>\n> On Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler\n> <[email protected]> wrote:\n> > Yeah, although with 48GB of available memory and not that much\n> concurrency,\n> > I'm not sure it matters that much. But point taken, I'll see about\n> modifying\n> > the app such that work_mem gets set on a per-query basis.\n> >\n> > On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <[email protected]\n> >\n> > wrote:\n> >>\n> >> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n> >> <[email protected]> wrote:\n> >> > Answered my own question. 
Cranking work_mem up to 350MB revealed that\n> >> > the in-memory sort requires more memory than the disk sort.\n> >>\n> >> Note that unless you run VERY few client connections, it's usually\n> >> better to leave work_mem somewhere in the 1 to 32Meg range and have\n> >> the connection or user or database that needs 350Meg be set there.\n> >>\n> >> I.e.\n> >>\n> >> <connect>\n> >> set work_mem='512MB';\n> >> <execute query\n> >>\n> >> OR\n> >>\n> >> alter user memoryhog set work_mem='512MB';\n> >>\n> >> OR\n> >>\n> >> alter database memhogdb set work_mem='512MB';\n> >\n> >\n>\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n\nIncidentally, if I set values on the connection before querying, is there an easy way to get things back to default values or will my code need to know the prior value and explicitly set it back?  Something like \n<get connection from pool>set work_mem = '512MB'queryset value = 'default'<return connection to pool>or maybe\n<get connection from pool>BEGIN;set work_mem='512MB'select queryROLLBACK;<return connection to pool>\nOn Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe <[email protected]> wrote:\nExactly, it's about the concurrency.  I have a server with 128G ram\nbut it runs dozens of queries at a time for hundreds of clients a\nsecond.  The chance that something big for work_mem might jump up and\nbite me are pretty good there.  Even so, at 16Meg it's not really big\nfor that machine, and I might test cranking it up. Note that large\nwork_mem can cause the kernel to flush its cache, which means going to\ndisk for everybody's data, and all the queries are slow instead of\none.  Keep an eye on how high work_mem affects your kernel cache.\n\nOn Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler\n<[email protected]> wrote:\n> Yeah, although with 48GB of available memory and not that much concurrency,\n> I'm not sure it matters that much. But point taken, I'll see about modifying\n> the app such that work_mem gets set on a per-query basis.\n>\n> On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n>> <[email protected]> wrote:\n>> > Answered my own question.  Cranking work_mem up to 350MB revealed that\n>> > the in-memory sort requires more memory than the disk sort.\n>>\n>> Note that unless you run VERY few client connections, it's usually\n>> better to leave work_mem somewhere in the 1 to 32Meg range and have\n>> the connection or user or database that needs 350Meg be set there.\n>>\n>> I.e.\n>>\n>> <connect>\n>> set work_mem='512MB';\n>> <execute query\n>>\n>> OR\n>>\n>> alter user memoryhog set work_mem='512MB';\n>>\n>> OR\n>>\n>> alter database memhogdb set work_mem='512MB';\n>\n>\n\n\n\n--\nTo understand recursion, one must first understand recursion.", "msg_date": "Thu, 19 Aug 2010 00:06:34 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "On Thu, Aug 19, 2010 at 12:06 AM, Samuel Gendler\n<[email protected]>wrote:\n\n> Incidentally, if I set values on the connection before querying, is there\n> an easy way to get things back to default values or will my code need to\n> know the prior value and explicitly set it back? 
Something like\n>\n> <get connection from pool>\n> set work_mem = '512MB'\n> query\n> set value = 'default'\n> <return connection to pool>\n>\n> or maybe\n>\n> <get connection from pool>\n> BEGIN;\n> set work_mem='512MB'\n> select query\n> ROLLBACK;\n> <return connection to pool>\n>\n>\nI guess I'm getting the hang of this whole postgres thing because those were\nboth wild guesses and both of them appear to work.\n\nset work_mem=default sets it to the value in the config file, and setting\nwithin a transaction and rolling back also restores the original value.\n\n\n\n>\n> On Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe <[email protected]>wrote:\n>\n>> Exactly, it's about the concurrency. I have a server with 128G ram\n>> but it runs dozens of queries at a time for hundreds of clients a\n>> second. The chance that something big for work_mem might jump up and\n>> bite me are pretty good there. Even so, at 16Meg it's not really big\n>> for that machine, and I might test cranking it up. Note that large\n>> work_mem can cause the kernel to flush its cache, which means going to\n>> disk for everybody's data, and all the queries are slow instead of\n>> one. Keep an eye on how high work_mem affects your kernel cache.\n>>\n>> On Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler\n>> <[email protected]> wrote:\n>> > Yeah, although with 48GB of available memory and not that much\n>> concurrency,\n>> > I'm not sure it matters that much. But point taken, I'll see about\n>> modifying\n>> > the app such that work_mem gets set on a per-query basis.\n>> >\n>> > On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <\n>> [email protected]>\n>> > wrote:\n>> >>\n>> >> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n>> >> <[email protected]> wrote:\n>> >> > Answered my own question. Cranking work_mem up to 350MB revealed\n>> that\n>> >> > the in-memory sort requires more memory than the disk sort.\n>> >>\n>> >> Note that unless you run VERY few client connections, it's usually\n>> >> better to leave work_mem somewhere in the 1 to 32Meg range and have\n>> >> the connection or user or database that needs 350Meg be set there.\n>> >>\n>> >> I.e.\n>> >>\n>> >> <connect>\n>> >> set work_mem='512MB';\n>> >> <execute query\n>> >>\n>> >> OR\n>> >>\n>> >> alter user memoryhog set work_mem='512MB';\n>> >>\n>> >> OR\n>> >>\n>> >> alter database memhogdb set work_mem='512MB';\n>> >\n>> >\n>>\n>>\n>>\n>> --\n>> To understand recursion, one must first understand recursion.\n>>\n>\n>\n\nOn Thu, Aug 19, 2010 at 12:06 AM, Samuel Gendler <[email protected]> wrote:\nIncidentally, if I set values on the connection before querying, is there an easy way to get things back to default values or will my code need to know the prior value and explicitly set it back?  Something like \n\n<get connection from pool>set work_mem = '512MB'queryset value = 'default'<return connection to pool>or maybe\n<get connection from pool>BEGIN;set work_mem='512MB'select queryROLLBACK;<return connection to pool>\nI guess I'm getting the hang of this whole postgres thing because those were both wild guesses and both of them appear to work.set work_mem=default sets it to the value in the config file, and setting within a transaction and rolling back also restores the original value.\n \nOn Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe <[email protected]> wrote:\n\nExactly, it's about the concurrency.  I have a server with 128G ram\nbut it runs dozens of queries at a time for hundreds of clients a\nsecond.  
The chance that something big for work_mem might jump up and\nbite me are pretty good there.  Even so, at 16Meg it's not really big\nfor that machine, and I might test cranking it up. Note that large\nwork_mem can cause the kernel to flush its cache, which means going to\ndisk for everybody's data, and all the queries are slow instead of\none.  Keep an eye on how high work_mem affects your kernel cache.\n\nOn Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler\n<[email protected]> wrote:\n> Yeah, although with 48GB of available memory and not that much concurrency,\n> I'm not sure it matters that much. But point taken, I'll see about modifying\n> the app such that work_mem gets set on a per-query basis.\n>\n> On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler\n>> <[email protected]> wrote:\n>> > Answered my own question.  Cranking work_mem up to 350MB revealed that\n>> > the in-memory sort requires more memory than the disk sort.\n>>\n>> Note that unless you run VERY few client connections, it's usually\n>> better to leave work_mem somewhere in the 1 to 32Meg range and have\n>> the connection or user or database that needs 350Meg be set there.\n>>\n>> I.e.\n>>\n>> <connect>\n>> set work_mem='512MB';\n>> <execute query\n>>\n>> OR\n>>\n>> alter user memoryhog set work_mem='512MB';\n>>\n>> OR\n>>\n>> alter database memhogdb set work_mem='512MB';\n>\n>\n\n\n\n--\nTo understand recursion, one must first understand recursion.", "msg_date": "Thu, 19 Aug 2010 00:14:41 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "On Thu, Aug 19, 2010 at 1:06 AM, Samuel Gendler\n<[email protected]> wrote:\n> Incidentally, if I set values on the connection before querying, is there an\n> easy way to get things back to default values or will my code need to know\n> the prior value and explicitly set it back?  Something like\n\nreset work_mem;\n", "msg_date": "Thu, 19 Aug 2010 01:16:51 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in-memory sorting" }, { "msg_contents": "Samuel Gendler <[email protected]> writes:\n> Answered my own question. Cranking work_mem up to 350MB revealed that\n> the in-memory sort requires more memory than the disk sort.\n\nYeah. The on-disk representation of sortable data is tighter than the\nin-memory representation for various reasons, mostly that we're willing\nto work at making it small. Datums aren't necessarily properly aligned\nfor example, and there's also palloc overhead to consider in-memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Aug 2010 09:29:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: in-memory sorting " } ]
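Putting the suggestions in this thread together, here is a minimal sketch of scoping a large work_mem to just the query that needs it, rather than raising it server-wide. The generate_series query is only a stand-in for the real sort, and the role and database names are the hypothetical ones used above:

    -- Per-session: raise it, run the big sort, then go back to the postgresql.conf value.
    SET work_mem = '512MB';
    SELECT i FROM generate_series(1, 5000000) AS g(i) ORDER BY i DESC LIMIT 1;  -- stand-in query
    RESET work_mem;                 -- same effect as SET work_mem = DEFAULT

    -- Transaction-scoped: SET LOCAL reverts automatically at COMMIT or ROLLBACK
    -- (plain SET, as in the ROLLBACK trick above, reverts only if the transaction aborts).
    BEGIN;
    SET LOCAL work_mem = '512MB';
    SELECT i FROM generate_series(1, 5000000) AS g(i) ORDER BY i DESC LIMIT 1;
    COMMIT;

    -- Per-user or per-database defaults, as suggested above:
    ALTER USER memoryhog SET work_mem = '512MB';
    ALTER DATABASE memhogdb SET work_mem = '512MB';

With a connection pool, the SET LOCAL form is the safest of the three, since the setting cannot leak into whatever the pooled connection runs next.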
[ { "msg_contents": "Please forgive the barrage of questions. I'm just learning how to tune\nthings in postgres and I've still got a bit of learning curve to get over,\napparently. I have done a lot of reading, though, I swear.\n\nI've got two identical queries except for a change of one condition which\ncuts the number of rows in half - which also has the effect of eliminating\none partition from consideration (partitioned over time and I cut the time\nperiod in half). The query plans are considerably different as a result.\nThe net result is that the fast query is 5x faster than the slow query. I'm\ncurious if the alternate query plan is inherently faster or is it just a\ncase of the algorithm scaling worse than linearly with the row count, which\ncertainly wouldn't be surprising. The big win, for me, is that the sort\nuses vastly less memory. The slow plan requires work_mem to be 1500MB to\neven make it 5x worse. With a more reasonable work_mem (400MB), it drops to\nsomething like 15x worse because it has to sort on disk.\n\nfast plan: http://explain.depesz.com/s/iZ\nslow plan: http://explain.depesz.com/s/Dv2\n\nquery:\n\n\nEXPLAIN ANALYZE SELECT\n t_lookup.display_name as group,\n to_char(t_fact.time, 'DD/MM HH24:MI') as category,\n substring(t_lookup.display_name from 1 for 20) as label,\n round(sum(t_fact.total_ms)/sum(t_fact.count)) as value\n FROM\n portal.providers t_lookup,\n day_scale_radar_performance_fact t_fact\n WHERE\n t_fact.probe_type_num < 3\n and t_lookup.provider_id = t_fact.provider_id\n and t_lookup.provider_owner_customer_id =\nt_fact.provider_owner_customer_id\n and t_fact.provider_owner_customer_id = 0\n and t_fact.time between timezone('UTC', '2010-08-18 15:00:00') -\ninterval '30 day' and timezone('UTC', '2010-08-18 15:00:00')\n GROUP BY\n t_fact.provider_owner_customer_id, t_fact.provider_id,\n t_lookup.display_name,\n t_fact.time\n ORDER BY\n t_fact.time\n\ntable structure:\n\n Table \"perf_reporting.abstract_radar_performance_fact\"\n Column | Type | Modifiers\n----------------------------+-----------------------------+-----------\ncount | bigint | not null\ntotal_ms | bigint | not null\ntime | timestamp without time zone | not null\nmarket_num | integer | not null\ncountry_num | integer | not null\nautosys_num | integer | not null\nprovider_owner_zone_id | integer | not null\nprovider_owner_customer_id | integer | not null\nprovider_id | integer | not null\nprobe_type_num | integer | not null\n\nwith individual indexes on the everything from time to the bottom on the\nchild tables\n\nand\n\n\n Table \"portal.providers\"\n Column | Type | Modifiers\n\n----------------------------+-----------------------------+------------------------\nbtime | timestamp without time zone | not null default\nnow()\nmtime | timestamp without time zone | not null default\nnow()\nversion | integer | not null default\n1\nprovider_id | integer | not null\nprovider_owner_zone_id | integer | not null\nprovider_owner_customer_id | integer | not null\nprovider_category_id | integer | not null\nname | character varying(255) | not null\ndisplay_name | character varying(255) | not null\n\nwith indexes on every column with name ending in '_id'\n\nPlease forgive the barrage of questions.  I'm just learning how to tune things in postgres and I've still got a bit of learning curve to get over, apparently.  
I have done a lot of reading, though, I swear.\nI've got two identical queries except for a change of one condition which cuts the number of rows in half - which also has the effect of eliminating one partition from consideration (partitioned over time and I cut the time period in half).  The query plans are considerably different as a result. The net result is that the fast query is 5x faster than the slow query.  I'm curious if the alternate query plan is inherently faster or is it just a case of the algorithm scaling worse than linearly with the row count, which certainly wouldn't be surprising.  The big win, for me, is that the sort uses vastly less memory.  The slow plan requires work_mem to be 1500MB to even make it 5x worse.  With a more reasonable work_mem (400MB), it drops to something like 15x worse because it has to sort on disk.\nfast plan: http://explain.depesz.com/s/iZslow plan: http://explain.depesz.com/s/Dv2query:EXPLAIN ANALYZE SELECT\n            t_lookup.display_name as group,            to_char(t_fact.time, 'DD/MM HH24:MI') as category,                      substring(t_lookup.display_name from 1 for 20) as label,            round(sum(t_fact.total_ms)/sum(t_fact.count)) as value\n        FROM            portal.providers t_lookup,            day_scale_radar_performance_fact t_fact        WHERE            t_fact.probe_type_num < 3            and t_lookup.provider_id = t_fact.provider_id\n            and t_lookup.provider_owner_customer_id = t_fact.provider_owner_customer_id            and t_fact.provider_owner_customer_id = 0            and t_fact.time between timezone('UTC', '2010-08-18 15:00:00') - interval '30 day' and timezone('UTC', '2010-08-18 15:00:00')\n        GROUP BY            t_fact.provider_owner_customer_id, t_fact.provider_id,            t_lookup.display_name,            t_fact.time        ORDER BY            t_fact.timetable structure:\n        Table \"perf_reporting.abstract_radar_performance_fact\"           Column           |            Type             | Modifiers \n----------------------------+-----------------------------+----------- count                      | bigint                      | not null total_ms                   | bigint                      | not null time                       | timestamp without time zone | not null\n market_num                 | integer                     | not null country_num                | integer                     | not null autosys_num                | integer                     | not null provider_owner_zone_id     | integer                     | not null\n provider_owner_customer_id | integer                     | not null provider_id                | integer                     | not null probe_type_num             | integer                     | not null\nwith individual indexes on the everything from time to the bottom on the child tablesand                             Table \"portal.providers\"\n           Column           |            Type             |       Modifiers        ----------------------------+-----------------------------+------------------------ btime                      | timestamp without time zone | not null default now()\n mtime                      | timestamp without time zone | not null default now() version                    | integer                     | not null default 1 provider_id                | integer                     | not null\n provider_owner_zone_id     | integer                     | not null provider_owner_customer_id | integer                     | not 
null provider_category_id       | integer                     | not null name                       | character varying(255)      | not null\n display_name               | character varying(255)      | not nullwith indexes on every column with name ending in '_id'", "msg_date": "Wed, 18 Aug 2010 23:14:10 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "yet another q" }, { "msg_contents": "On Wed, Aug 18, 2010 at 11:14 PM, Samuel Gendler\n<[email protected]>wrote:\n\n> Please forgive the barrage of questions. I'm just learning how to tune\n> things in postgres and I've still got a bit of learning curve to get over,\n> apparently. I have done a lot of reading, though, I swear.\n>\n> I've got two identical queries except for a change of one condition which\n> cuts the number of rows in half - which also has the effect of eliminating\n> one partition from consideration (partitioned over time and I cut the time\n> period in half). The query plans are considerably different as a result.\n> The net result is that the fast query is 5x faster than the slow query. I'm\n> curious if the alternate query plan is inherently faster or is it just a\n> case of the algorithm scaling worse than linearly with the row count, which\n> certainly wouldn't be surprising. The big win, for me, is that the sort\n> uses vastly less memory. The slow plan requires work_mem to be 1500MB to\n> even make it 5x worse. With a more reasonable work_mem (400MB), it drops to\n> something like 15x worse because it has to sort on disk.\n>\n> fast plan: http://explain.depesz.com/s/iZ\n> slow plan: http://explain.depesz.com/s/Dv2\n>\n> query:\n>\n>\n> EXPLAIN ANALYZE SELECT\n> t_lookup.display_name as group,\n> to_char(t_fact.time, 'DD/MM HH24:MI') as category,\n> substring(t_lookup.display_name from 1 for 20) as label,\n> round(sum(t_fact.total_ms)/sum(t_fact.count)) as value\n> FROM\n> portal.providers t_lookup,\n> day_scale_radar_performance_fact t_fact\n> WHERE\n> t_fact.probe_type_num < 3\n> and t_lookup.provider_id = t_fact.provider_id\n> and t_lookup.provider_owner_customer_id =\n> t_fact.provider_owner_customer_id\n> and t_fact.provider_owner_customer_id = 0\n> and t_fact.time between timezone('UTC', '2010-08-18 15:00:00') -\n> interval '30 day' and timezone('UTC', '2010-08-18 15:00:00')\n> GROUP BY\n> t_fact.provider_owner_customer_id, t_fact.provider_id,\n> t_lookup.display_name,\n> t_fact.time\n> ORDER BY\n> t_fact.time\n>\n> table structure:\n>\n> Table \"perf_reporting.abstract_radar_performance_fact\"\n> Column | Type | Modifiers\n> ----------------------------+-----------------------------+-----------\n> count | bigint | not null\n> total_ms | bigint | not null\n> time | timestamp without time zone | not null\n> market_num | integer | not null\n> country_num | integer | not null\n> autosys_num | integer | not null\n> provider_owner_zone_id | integer | not null\n> provider_owner_customer_id | integer | not null\n> provider_id | integer | not null\n> probe_type_num | integer | not null\n>\n> with individual indexes on the everything from time to the bottom on the\n> child tables\n>\n> and\n>\n>\n> Table \"portal.providers\"\n> Column | Type | Modifiers\n>\n>\n> ----------------------------+-----------------------------+------------------------\n> btime | timestamp without time zone | not null default\n> now()\n> mtime | timestamp without time zone | not null default\n> now()\n> version | integer | not null default\n> 1\n> provider_id | integer | not null\n> 
provider_owner_zone_id | integer | not null\n> provider_owner_customer_id | integer | not null\n> provider_category_id | integer | not null\n> name | character varying(255) | not null\n> display_name | character varying(255) | not null\n>\n> with indexes on every column with name ending in '_id'\n>\n>\nIt gets more complicated:\n\nWhen I dropped to a query over 15 days instead of 30 days, I saw a huge bump\nin performance (about 16 secs), the query plan for which is here:\n\nhttp://explain.depesz.com/s/iaf\n\nnote: the query is identical to the one below, but with the interval changed\nto 15 days from 30 days, which also keeps the query within a single\npartition. Note that the sort requires almost no memory and occurs after\nthe aggregation. I thought my problems were solved, since reducing the\nnormal window over which queries are performed is something the app can\ntolerate.\n\nHowever, if I keep the same 15 day window (so row count is approximately the\nsame), but change the time window start date by 2 days (still keeping the\nentire query within the same partition), I get a completely different query\nplan. There is effectively no difference between the two queries other than\nthe start date of the time window in the where clause, but one executes in\ntwice the time (35 secs or thereabouts).\n\nhttp://explain.depesz.com/s/LA\n\nJust for completeness' sake, I changed the query such that it is still 15\ndays, but this time crosses a partition boundary. The plan is very similar\nto the previous one and executes in about the same time (35 secs or so)\n\nhttp://explain.depesz.com/s/Aqw\n\nStatistics are up to date and were performed with default_statistics_target\n= 100\n\nIs there any way I can force the more efficient HashAggregate then sort plan\ninstead of sort then GroupAggregate?\n\nOn Wed, Aug 18, 2010 at 11:14 PM, Samuel Gendler <[email protected]> wrote:\nPlease forgive the barrage of questions.  I'm just learning how to tune things in postgres and I've still got a bit of learning curve to get over, apparently.  I have done a lot of reading, though, I swear.\n\nI've got two identical queries except for a change of one condition which cuts the number of rows in half - which also has the effect of eliminating one partition from consideration (partitioned over time and I cut the time period in half).  The query plans are considerably different as a result. The net result is that the fast query is 5x faster than the slow query.  I'm curious if the alternate query plan is inherently faster or is it just a case of the algorithm scaling worse than linearly with the row count, which certainly wouldn't be surprising.  The big win, for me, is that the sort uses vastly less memory.  The slow plan requires work_mem to be 1500MB to even make it 5x worse.  
With a more reasonable work_mem (400MB), it drops to something like 15x worse because it has to sort on disk.\nfast plan: http://explain.depesz.com/s/iZslow plan: http://explain.depesz.com/s/Dv2\nquery:EXPLAIN ANALYZE SELECT\n            t_lookup.display_name as group,            to_char(t_fact.time, 'DD/MM HH24:MI') as category,                      substring(t_lookup.display_name from 1 for 20) as label,            round(sum(t_fact.total_ms)/sum(t_fact.count)) as value\n\n        FROM            portal.providers t_lookup,            day_scale_radar_performance_fact t_fact        WHERE            t_fact.probe_type_num < 3            and t_lookup.provider_id = t_fact.provider_id\n\n            and t_lookup.provider_owner_customer_id = t_fact.provider_owner_customer_id            and t_fact.provider_owner_customer_id = 0            and t_fact.time between timezone('UTC', '2010-08-18 15:00:00') - interval '30 day' and timezone('UTC', '2010-08-18 15:00:00')\n\n        GROUP BY            t_fact.provider_owner_customer_id, t_fact.provider_id,            t_lookup.display_name,            t_fact.time        ORDER BY            t_fact.timetable structure:\n        Table \"perf_reporting.abstract_radar_performance_fact\"           Column           |            Type             | Modifiers \n----------------------------+-----------------------------+----------- count                      | bigint                      | not null total_ms                   | bigint                      | not null time                       | timestamp without time zone | not null\n\n market_num                 | integer                     | not null country_num                | integer                     | not null autosys_num                | integer                     | not null provider_owner_zone_id     | integer                     | not null\n\n provider_owner_customer_id | integer                     | not null provider_id                | integer                     | not null probe_type_num             | integer                     | not null\nwith individual indexes on the everything from time to the bottom on the child tablesand                             Table \"portal.providers\"\n\n           Column           |            Type             |       Modifiers        ----------------------------+-----------------------------+------------------------ btime                      | timestamp without time zone | not null default now()\n\n mtime                      | timestamp without time zone | not null default now() version                    | integer                     | not null default 1 provider_id                | integer                     | not null\n\n provider_owner_zone_id     | integer                     | not null provider_owner_customer_id | integer                     | not null provider_category_id       | integer                     | not null name                       | character varying(255)      | not null\n\n display_name               | character varying(255)      | not nullwith indexes on every column with name ending in '_id'\nIt gets more complicated:When I dropped to a query over 15 days instead of 30 days, I saw a huge bump in performance (about 16 secs), the query plan for which is here:\nhttp://explain.depesz.com/s/iafnote: the query is identical to the one below, but with the interval changed to 15 days from 30 days, which also keeps the query within a single partition.  Note that the sort requires almost no memory and occurs after the aggregation.  
I thought my problems were solved, since reducing the normal window over which queries are performed is something the app can tolerate.\nHowever, if I keep the same 15 day window (so row count is approximately the same), but change the time window start date by 2 days (still keeping the entire query within the same partition), I get a completely different query plan.  There is effectively no difference between the two queries other than the start date of the time window in the where clause, but one executes in twice the time (35 secs or thereabouts).\nhttp://explain.depesz.com/s/LAJust for completeness' sake, I changed the query such that it is still 15 days, but this time crosses a partition boundary.  The plan is very similar to the previous one and executes in about the same time (35 secs or so)\nhttp://explain.depesz.com/s/AqwStatistics are up to date and were performed with default_statistics_target = 100\nIs there any way I can force the more efficient HashAggregate then sort plan instead of sort then GroupAggregate?", "msg_date": "Wed, 18 Aug 2010 23:50:29 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: yet another q" }, { "msg_contents": "The full set of conf changes that were in use during these tests are as\nfollows:\n\ndefault_statistics_target = 100 # pgtune wizard 2010-08-17\nmaintenance_work_mem = 1GB # pgtune wizard 2010-08-17\nconstraint_exclusion = on # pgtune wizard 2010-08-17\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-08-17\neffective_cache_size = 36GB # sam\nwork_mem = 1500MB # pgtune wizard 2010-08-17\nwal_buffers = 8MB # pgtune wizard 2010-08-17\n#checkpoint_segments = 16 # pgtune wizard 2010-08-17\ncheckpoint_segments = 30 # sam\nshared_buffers = 8GB # pgtune wizard 2010-08-17\nmax_connections = 80 # pgtune wizard 2010-08-17\ncpu_tuple_cost = 0.0030 # sam\ncpu_index_tuple_cost = 0.0010 # sam\ncpu_operator_cost = 0.0005 # sam\nrandom_page_cost = 2.0 # sam\n\n\nOn Wed, Aug 18, 2010 at 11:50 PM, Samuel Gendler\n<[email protected]>wrote:\n\n> On Wed, Aug 18, 2010 at 11:14 PM, Samuel Gendler <\n> [email protected]> wrote:\n>\n>> Please forgive the barrage of questions. I'm just learning how to tune\n>> things in postgres and I've still got a bit of learning curve to get over,\n>> apparently. I have done a lot of reading, though, I swear.\n>>\n>> I've got two identical queries except for a change of one condition which\n>> cuts the number of rows in half - which also has the effect of eliminating\n>> one partition from consideration (partitioned over time and I cut the time\n>> period in half). The query plans are considerably different as a result.\n>> The net result is that the fast query is 5x faster than the slow query. I'm\n>> curious if the alternate query plan is inherently faster or is it just a\n>> case of the algorithm scaling worse than linearly with the row count, which\n>> certainly wouldn't be surprising. The big win, for me, is that the sort\n>> uses vastly less memory. The slow plan requires work_mem to be 1500MB to\n>> even make it 5x worse. 
With a more reasonable work_mem (400MB), it drops to\n>> something like 15x worse because it has to sort on disk.\n>>\n>> fast plan: http://explain.depesz.com/s/iZ\n>> slow plan: http://explain.depesz.com/s/Dv2\n>>\n>> query:\n>>\n>>\n>> EXPLAIN ANALYZE SELECT\n>> t_lookup.display_name as group,\n>> to_char(t_fact.time, 'DD/MM HH24:MI') as category,\n>> substring(t_lookup.display_name from 1 for 20) as label,\n>> round(sum(t_fact.total_ms)/sum(t_fact.count)) as value\n>> FROM\n>> portal.providers t_lookup,\n>> day_scale_radar_performance_fact t_fact\n>> WHERE\n>> t_fact.probe_type_num < 3\n>> and t_lookup.provider_id = t_fact.provider_id\n>> and t_lookup.provider_owner_customer_id =\n>> t_fact.provider_owner_customer_id\n>> and t_fact.provider_owner_customer_id = 0\n>> and t_fact.time between timezone('UTC', '2010-08-18 15:00:00')\n>> - interval '30 day' and timezone('UTC', '2010-08-18 15:00:00')\n>> GROUP BY\n>> t_fact.provider_owner_customer_id, t_fact.provider_id,\n>> t_lookup.display_name,\n>> t_fact.time\n>> ORDER BY\n>> t_fact.time\n>>\n>> table structure:\n>>\n>> Table \"perf_reporting.abstract_radar_performance_fact\"\n>> Column | Type | Modifiers\n>> ----------------------------+-----------------------------+-----------\n>> count | bigint | not null\n>> total_ms | bigint | not null\n>> time | timestamp without time zone | not null\n>> market_num | integer | not null\n>> country_num | integer | not null\n>> autosys_num | integer | not null\n>> provider_owner_zone_id | integer | not null\n>> provider_owner_customer_id | integer | not null\n>> provider_id | integer | not null\n>> probe_type_num | integer | not null\n>>\n>> with individual indexes on the everything from time to the bottom on the\n>> child tables\n>>\n>> and\n>>\n>>\n>> Table \"portal.providers\"\n>> Column | Type | Modifiers\n>>\n>>\n>> ----------------------------+-----------------------------+------------------------\n>> btime | timestamp without time zone | not null\n>> default now()\n>> mtime | timestamp without time zone | not null\n>> default now()\n>> version | integer | not null\n>> default 1\n>> provider_id | integer | not null\n>> provider_owner_zone_id | integer | not null\n>> provider_owner_customer_id | integer | not null\n>> provider_category_id | integer | not null\n>> name | character varying(255) | not null\n>> display_name | character varying(255) | not null\n>>\n>> with indexes on every column with name ending in '_id'\n>>\n>>\n> It gets more complicated:\n>\n> When I dropped to a query over 15 days instead of 30 days, I saw a huge\n> bump in performance (about 16 secs), the query plan for which is here:\n>\n> http://explain.depesz.com/s/iaf\n>\n> note: the query is identical to the one below, but with the interval\n> changed to 15 days from 30 days, which also keeps the query within a single\n> partition. Note that the sort requires almost no memory and occurs after\n> the aggregation. I thought my problems were solved, since reducing the\n> normal window over which queries are performed is something the app can\n> tolerate.\n>\n> However, if I keep the same 15 day window (so row count is approximately\n> the same), but change the time window start date by 2 days (still keeping\n> the entire query within the same partition), I get a completely different\n> query plan. 
There is effectively no difference between the two queries\n> other than the start date of the time window in the where clause, but one\n> executes in twice the time (35 secs or thereabouts).\n>\n> http://explain.depesz.com/s/LA\n>\n> Just for completeness' sake, I changed the query such that it is still 15\n> days, but this time crosses a partition boundary. The plan is very similar\n> to the previous one and executes in about the same time (35 secs or so)\n>\n> http://explain.depesz.com/s/Aqw\n>\n> Statistics are up to date and were performed with default_statistics_target\n> = 100\n>\n> Is there any way I can force the more efficient HashAggregate then sort\n> plan instead of sort then GroupAggregate?\n>\n>\n>\n\nThe full set of conf changes that were in use during these tests are as follows:default_statistics_target = 100 # pgtune wizard 2010-08-17\nmaintenance_work_mem = 1GB # pgtune wizard 2010-08-17constraint_exclusion = on # pgtune wizard 2010-08-17\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-08-17effective_cache_size = 36GB # sam\nwork_mem = 1500MB # pgtune wizard 2010-08-17wal_buffers = 8MB # pgtune wizard 2010-08-17\n#checkpoint_segments = 16 # pgtune wizard 2010-08-17checkpoint_segments = 30 # sam\nshared_buffers = 8GB # pgtune wizard 2010-08-17max_connections = 80 # pgtune wizard 2010-08-17\ncpu_tuple_cost = 0.0030                 # samcpu_index_tuple_cost = 0.0010           # sam\ncpu_operator_cost = 0.0005              # samrandom_page_cost = 2.0                  # sam\nOn Wed, Aug 18, 2010 at 11:50 PM, Samuel Gendler <[email protected]> wrote:\nOn Wed, Aug 18, 2010 at 11:14 PM, Samuel Gendler <[email protected]> wrote:\n\nPlease forgive the barrage of questions.  I'm just learning how to tune things in postgres and I've still got a bit of learning curve to get over, apparently.  I have done a lot of reading, though, I swear.\n\n\nI've got two identical queries except for a change of one condition which cuts the number of rows in half - which also has the effect of eliminating one partition from consideration (partitioned over time and I cut the time period in half).  The query plans are considerably different as a result. The net result is that the fast query is 5x faster than the slow query.  I'm curious if the alternate query plan is inherently faster or is it just a case of the algorithm scaling worse than linearly with the row count, which certainly wouldn't be surprising.  The big win, for me, is that the sort uses vastly less memory.  The slow plan requires work_mem to be 1500MB to even make it 5x worse.  
With a more reasonable work_mem (400MB), it drops to something like 15x worse because it has to sort on disk.\nfast plan: http://explain.depesz.com/s/iZslow plan: http://explain.depesz.com/s/Dv2\n\nquery:EXPLAIN ANALYZE SELECT\n            t_lookup.display_name as group,            to_char(t_fact.time, 'DD/MM HH24:MI') as category,                      substring(t_lookup.display_name from 1 for 20) as label,            round(sum(t_fact.total_ms)/sum(t_fact.count)) as value\n\n\n        FROM            portal.providers t_lookup,            day_scale_radar_performance_fact t_fact        WHERE            t_fact.probe_type_num < 3            and t_lookup.provider_id = t_fact.provider_id\n\n\n            and t_lookup.provider_owner_customer_id = t_fact.provider_owner_customer_id            and t_fact.provider_owner_customer_id = 0            and t_fact.time between timezone('UTC', '2010-08-18 15:00:00') - interval '30 day' and timezone('UTC', '2010-08-18 15:00:00')\n\n\n        GROUP BY            t_fact.provider_owner_customer_id, t_fact.provider_id,            t_lookup.display_name,            t_fact.time        ORDER BY            t_fact.timetable structure:\n        Table \"perf_reporting.abstract_radar_performance_fact\"           Column           |            Type             | Modifiers \n----------------------------+-----------------------------+----------- count                      | bigint                      | not null total_ms                   | bigint                      | not null time                       | timestamp without time zone | not null\n\n\n market_num                 | integer                     | not null country_num                | integer                     | not null autosys_num                | integer                     | not null provider_owner_zone_id     | integer                     | not null\n\n\n provider_owner_customer_id | integer                     | not null provider_id                | integer                     | not null probe_type_num             | integer                     | not null\nwith individual indexes on the everything from time to the bottom on the child tablesand                             Table \"portal.providers\"\n\n\n           Column           |            Type             |       Modifiers        ----------------------------+-----------------------------+------------------------ btime                      | timestamp without time zone | not null default now()\n\n\n mtime                      | timestamp without time zone | not null default now() version                    | integer                     | not null default 1 provider_id                | integer                     | not null\n\n\n provider_owner_zone_id     | integer                     | not null provider_owner_customer_id | integer                     | not null provider_category_id       | integer                     | not null name                       | character varying(255)      | not null\n\n\n display_name               | character varying(255)      | not nullwith indexes on every column with name ending in '_id'\nIt gets more complicated:When I dropped to a query over 15 days instead of 30 days, I saw a huge bump in performance (about 16 secs), the query plan for which is here:\nhttp://explain.depesz.com/s/iafnote: the query is identical to the one below, but with the interval changed to 15 days from 30 days, which also keeps the query within a single partition.  Note that the sort requires almost no memory and occurs after the aggregation.  
I thought my problems were solved, since reducing the normal window over which queries are performed is something the app can tolerate.\nHowever, if I keep the same 15 day window (so row count is approximately the same), but change the time window start date by 2 days (still keeping the entire query within the same partition), I get a completely different query plan.  There is effectively no difference between the two queries other than the start date of the time window in the where clause, but one executes in twice the time (35 secs or thereabouts).\nhttp://explain.depesz.com/s/LAJust for completeness' sake, I changed the query such that it is still 15 days, but this time crosses a partition boundary.  The plan is very similar to the previous one and executes in about the same time (35 secs or so)\nhttp://explain.depesz.com/s/AqwStatistics are up to date and were performed with default_statistics_target = 100\n\nIs there any way I can force the more efficient HashAggregate then sort plan instead of sort then GroupAggregate?", "msg_date": "Wed, 18 Aug 2010 23:51:47 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: yet another q" }, { "msg_contents": "Samuel Gendler <[email protected]> writes:\n> fast plan: http://explain.depesz.com/s/iZ\n> slow plan: http://explain.depesz.com/s/Dv2\n\nYour problem here is that it's switching from hash aggregation to\nsort-and-group-aggregate once it decides that the number of aggregate\ngroups won't fit in work_mem anymore. While you could brute-force\nthat by raising work_mem, it'd be a lot better if you could get the\nestimated number of groups more in line with the actual. Notice the\nvery large discrepancy between the estimated and actual numbers of\nrows out of the aggregation steps.\n\nIncreasing the stats targets for the GROUP BY columns might help,\nbut I think what's basically going on here is there's correlation\nbetween the GROUP BY columns that the planner doesn't know about.\n\nOne thing I'm wondering is why you're grouping by owner_customer_id\nand t_fact.provider_id, when these aren't used in the output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Aug 2010 12:28:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: yet another q " } ]
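Following the suggestion above about the row estimates, here is a minimal sketch of raising the per-column statistics targets on the GROUP BY columns and re-checking the group-count estimate. Table and column names are taken from the thread, but the partition layout is not fully shown, so treat the object names as assumptions: with inheritance-style partitioning the ALTER and ANALYZE may need to be repeated on the child tables that actually hold the rows.

    -- Sketch only: bump per-column stats targets on the GROUP BY columns, re-analyze,
    -- then compare the planner's estimated vs. actual group counts again.
    ALTER TABLE day_scale_radar_performance_fact
        ALTER COLUMN provider_id SET STATISTICS 1000;
    ALTER TABLE day_scale_radar_performance_fact
        ALTER COLUMN provider_owner_customer_id SET STATISTICS 1000;
    ALTER TABLE day_scale_radar_performance_fact
        ALTER COLUMN "time" SET STATISTICS 1000;
    ANALYZE day_scale_radar_performance_fact;

    -- Simplified form of the thread's query, without the lookup join, just to see
    -- whether the aggregate's row estimate has moved closer to the actual count:
    EXPLAIN ANALYZE
    SELECT t.provider_owner_customer_id, t.provider_id, t."time",
           round(sum(t.total_ms) / sum(t.count)) AS value
      FROM day_scale_radar_performance_fact t
     WHERE t.provider_owner_customer_id = 0
       AND t.probe_type_num < 3
     GROUP BY t.provider_owner_customer_id, t.provider_id, t."time";

If the estimate stays far off even at higher targets, the cross-column correlation mentioned above is the likely cause, and keeping work_mem large enough for the hash aggregate (scoped per session, as in the previous thread) is the remaining lever.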
[ { "msg_contents": "Hi,\n\nI'm having a strange performance result on a new database server \ncompared to my simple desktop.\n\nThe configuration of the new server :\n - OS : GNU/Linux Debian Etch x86_64\n - kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 20:40:33 \nUTC 2010 x86_64 GNU/Linux\n (tests are on the \"real server\", not on a vserver)\n - CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz\n - RAM : 32 Go\nThe configuration of my desktop pc :\n - OS : GNU/Linux Debian Testing i686\n - kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 \ni686 GNU/Linux\n - CPU : Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz\n - RAM : 2 Go\n\nOn each configuration, i've compiled Postgresql 8.4.4 (simple \n./configuration && make && make install).\n\nOn each configuration, i've restore a little database (the compressed \ndump is 33Mo), here is the output of \"\\d+\" :\n Schema | Name | Type | Owner | \nSize | Description\n--------+----------------------------+----------+-------------+------------+-------------\n public | article | table | indexwsprem | 77 \nMB |\n public | article_id_seq | sequence | indexwsprem | 8192 \nbytes |\n public | evt | table | indexwsprem | 8192 \nbytes |\n public | evt_article | table | indexwsprem | 17 \nMB |\n public | evt_article_id_seq | sequence | indexwsprem | 8192 \nbytes |\n public | evt_id_seq | sequence | indexwsprem | 8192 \nbytes |\n public | firm | table | indexwsprem | 1728 \nkB |\n public | firm_article | table | indexwsprem | 17 \nMB |\n public | firm_article_id_seq | sequence | indexwsprem | 8192 \nbytes |\n public | firm_id_seq | sequence | indexwsprem | 8192 \nbytes |\n public | publication | table | indexwsprem | 64 \nkB |\n public | publication_article | table | indexwsprem | 0 \nbytes |\n public | publication_article_id_seq | sequence | indexwsprem | 8192 \nbytes |\n public | publication_id_seq | sequence | indexwsprem | 8192 \nbytes |\n(14 rows)\n\nOn each configuration, postgresql.conf are the same and don't have been \nmodified (the shared_buffer seems enought for my simple tests).\n\nI've enabled timing on psql, and here is the result of different \n\"simple\" query (executed twice to use cache) :\n1- select count(*) from firm;\n server x64 : 48661 (1 row) Time: 14,412 ms\n desk i686 : 48661 (1 row) Time: 4,845 ms\n\n2- select * from pg_settings;\n server x64 : Time: 3,898 ms\n desk i686 : Time: 1,517 ms\n\n3- I've run \"time pgbench -c 50\" :\n server x64 :\n starting vacuum...end.\n transaction type: TPC-B (sort of)\n scaling factor: 1\n query mode: simple\n number of clients: 50\n number of transactions per client: 10\n number of transactions actually processed: 500/500\n tps = 523.034437 (including connections establishing)\n tps = 663.511008 (excluding connections establishing)\n\n real 0m0.984s\n user 0m0.088s\n sys 0m0.096s\n desk i686 :\n starting vacuum...end.\n transaction type: TPC-B (sort of)\n scaling factor: 1\n query mode: simple\n number of clients: 50\n number of transactions per client: 10\n number of transactions actually processed: 500/500\n tps = 781.986778 (including connections establishing)\n tps = 862.809792 (excluding connections establishing)\n\n real 0m0.656s\n user 0m0.028s\n sys 0m0.052s\n\n\nDo you think it's a 32bit/64bit difference ?\n", "msg_date": "Thu, 19 Aug 2010 10:07:41 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "On Thu, Aug 19, 2010 at 2:07 AM, 
Philippe Rimbault <[email protected]> wrote:\n> Hi,\n>\n> I'm having a strange performance result on a new database server compared to\n> my simple desktop.\n>\n> The configuration of the new server :\n>    - OS : GNU/Linux Debian Etch x86_64\n>    - kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 20:40:33 UTC\n> 2010 x86_64 GNU/Linux\n>        (tests are on the \"real server\", not on a vserver)\n>    - CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz\n>    - RAM : 32 Go\n> The configuration of my desktop pc :\n>    - OS : GNU/Linux Debian Testing i686\n>    - kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 i686\n> GNU/Linux\n>    - CPU : Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz\n>    - RAM : 2 Go\n\nPERFORMANCE STUFF DELETED FOR BREVITY\n\n> Do you think it's a 32bit/64bit difference ?\n\nNo, it's likely that your desktop has much faster CPU cores than your\nserver, and it has drives that may or may not be obeying fsync\ncommands. Your server, OTOH, has more cores, so it's likely to do\nbetter under a real load. And assuming it has more disks on a better\ncontroller it will also do better under heavier loads.\n\nSo how are the disks setup anyway?\n", "msg_date": "Thu, 19 Aug 2010 03:51:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "On 19/08/2010 11:51, Scott Marlowe wrote:\n> On Thu, Aug 19, 2010 at 2:07 AM, Philippe Rimbault<[email protected]> wrote:\n> \n>> Hi,\n>>\n>> I'm having a strange performance result on a new database server compared to\n>> my simple desktop.\n>>\n>> The configuration of the new server :\n>> - OS : GNU/Linux Debian Etch x86_64\n>> - kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 20:40:33 UTC\n>> 2010 x86_64 GNU/Linux\n>> (tests are on the \"real server\", not on a vserver)\n>> - CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz\n>> - RAM : 32 Go\n>> The configuration of my desktop pc :\n>> - OS : GNU/Linux Debian Testing i686\n>> - kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 i686\n>> GNU/Linux\n>> - CPU : Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz\n>> - RAM : 2 Go\n>> \n> PERFORMANCE STUFF DELETED FOR BREVITY\n>\n> \n>> Do you think it's a 32bit/64bit difference ?\n>> \n> No, it's likely that your desktop has much faster CPU cores than your\n> server, and it has drives that may or may not be obeying fsync\n> commands. Your server, OTOH, has more cores, so it's likely to do\n> better under a real load. 
And assuming it has more disks on a better\n> controller it will also do better under heavier loads.\n>\n> So how are the disks setup anyway?\n> \nThanks for your reply !\n\nThe server use a HP Smart Array P410 with a Raid 5 array on Sata 133 disk.\nMy desktop only use one Sata 133 disk.\nI was thinking that my simples queries didn't use disk but only memory.\nI've launch a new pgbench with much more client and transactions :\n\nServer :\n postgres$ pgbench -c 400 -t 100\n starting vacuum...end.\n transaction type: TPC-B (sort of)\n scaling factor: 1\n query mode: simple\n number of clients: 400\n number of transactions per client: 100\n number of transactions actually processed: 40000/40000\n tps = 115.054386 (including connections establishing)\n tps = 115.617186 (excluding connections establishing)\n\n real 5m47.706s\n user 0m27.054s\n sys 0m59.804s\n\nDesktop :\n postgres$ time pgbench -c 400 -t 100\n starting vacuum...end.\n transaction type: TPC-B (sort of)\n scaling factor: 1\n query mode: simple\n number of clients: 400\n number of transactions per client: 100\n number of transactions actually processed: 40000/40000\n tps = 299.456785 (including connections establishing)\n tps = 300.590503 (excluding connections establishing)\n\n real 2m13.604s\n user 0m5.304s\n sys 0m13.469s\n\n\n\n\n", "msg_date": "Thu, 19 Aug 2010 12:23:17 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "On 19/08/2010 12:23, Philippe Rimbault wrote:\n> On 19/08/2010 11:51, Scott Marlowe wrote:\n>> On Thu, Aug 19, 2010 at 2:07 AM, Philippe Rimbault<[email protected]> \n>> wrote:\n>>> Hi,\n>>>\n>>> I'm having a strange performance result on a new database server \n>>> compared to\n>>> my simple desktop.\n>>>\n>>> The configuration of the new server :\n>>> - OS : GNU/Linux Debian Etch x86_64\n>>> - kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 \n>>> 20:40:33 UTC\n>>> 2010 x86_64 GNU/Linux\n>>> (tests are on the \"real server\", not on a vserver)\n>>> - CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz\n>>> - RAM : 32 Go\n>>> The configuration of my desktop pc :\n>>> - OS : GNU/Linux Debian Testing i686\n>>> - kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 \n>>> i686\n>>> GNU/Linux\n>>> - CPU : Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz\n>>> - RAM : 2 Go\n>> PERFORMANCE STUFF DELETED FOR BREVITY\n>>\n>>> Do you think it's a 32bit/64bit difference ?\n>> No, it's likely that your desktop has much faster CPU cores than your\n>> server, and it has drives that may or may not be obeying fsync\n>> commands. Your server, OTOH, has more cores, so it's likely to do\n>> better under a real load. 
And assuming it has more disks on a better\n>> controller it will also do better under heavier loads.\n>>\n>> So how are the disks setup anyway?\n> Thanks for your reply !\n>\n> The server use a HP Smart Array P410 with a Raid 5 array on Sata 133 \n> disk.\n> My desktop only use one Sata 133 disk.\n> I was thinking that my simples queries didn't use disk but only memory.\n> I've launch a new pgbench with much more client and transactions :\n>\n> Server :\n> postgres$ pgbench -c 400 -t 100\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 400\n> number of transactions per client: 100\n> number of transactions actually processed: 40000/40000\n> tps = 115.054386 (including connections establishing)\n> tps = 115.617186 (excluding connections establishing)\n>\n> real 5m47.706s\n> user 0m27.054s\n> sys 0m59.804s\n>\n> Desktop :\n> postgres$ time pgbench -c 400 -t 100\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 400\n> number of transactions per client: 100\n> number of transactions actually processed: 40000/40000\n> tps = 299.456785 (including connections establishing)\n> tps = 300.590503 (excluding connections establishing)\n>\n> real 2m13.604s\n> user 0m5.304s\n> sys 0m13.469s\n>\n>\n>\n>\n>\nI've re-init the pgbench with -s 400 and now server work (very) better \nthan desktop.\nSo ... my desktop cpu is faster if i only work with small query but \nserver handle better heavier loads.\nI was just suprise about the difference on my small database.\n\nThx\n", "msg_date": "Thu, 19 Aug 2010 14:27:51 +0200", "msg_from": "Philippe Rimbault <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "On Thu, Aug 19, 2010 at 4:23 AM, Philippe Rimbault <[email protected]> wrote:\n>> So how are the disks setup anyway?\n>>\n>\n> Thanks for your reply !\n>\n> The server use a HP Smart Array P410 with a Raid 5 array on Sata 133 disk.\n\nIf you can change that to RAID-10 do so now. RAID-5 is notoriously\nslow for database use, unless you're only gonna do reporting type\nqueries with few updates.\n\n> My desktop only use one Sata 133 disk.\n> I was thinking that my simples queries didn't use disk but only memory.\n\nNo, butt pgbench has to write to the disk.\n\n> I've launch a new pgbench with much more client and transactions :\n>\n> Server :\n>    postgres$ pgbench -c 400 -t 100\n\n-c 400 is HUGE. 
(and as you mentioned in your later email, you need\nto -s -i 400 for -c 400 to make sense) Try values in the 4 to 40\nrange and the server should REALLY outshine your desktop as you pass\n12 or 16 or so.\n", "msg_date": "Thu, 19 Aug 2010 10:59:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Philippe Rimbault wrote:\n> I've run \"time pgbench -c 50\" :\n> server x64 :\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 50\n> number of transactions per client: 10\n> number of transactions actually processed: 500/500\n> tps = 523.034437 (including connections establishing)\n> tps = 663.511008 (excluding connections establishing)\n>\n\nAs mentioned already, most of the difference you're seeing is simply \nthat your desktop system has faster individual processor cores in it, so \njobs where only a single core are being used are going to be faster on it.\n\nThe above isn't going to work very well either because the database \nscale is too small, and you're not running the test for very long. The \nthings the bigger server is better at, you're not testing.\n\nSince your smaller system has 2GB of RAM and the larger one 32GB, try \nthis instead:\n\npgbench -i -s 2000\npgbench -c 24 -T 60 -S\npgbench -c 24 -T 300\n\nThat will create a much larger database, run some simple SELECT-only \ntests on it, and then run a write intensive one. Expect to see the \nserver system crush the results of the desktop here. Note that this \nwill take quite a while to run--the pgbench initialization step in \nparticular is going to take a good fraction of an hour or more, and then \nthe actual tests will run for 6 minutes after that. 
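A rough sketch of that sequence as a script, with the flag meanings spelled out; the size figure is an estimate and the comments are not from this thread:

# sketch only: size the data set past the desktop's RAM before comparing
pgbench -i -s 2000        # -i builds the test tables, -s 2000 is the scaling
                          # factor (roughly 30 GB at ~15 MB per scale unit)
pgbench -c 24 -T 60 -S    # 24 clients, 60 seconds, -S = SELECT-only
pgbench -c 24 -T 300      # 24 clients, 5 minutes of the default read/write mix

The reason the scale matters so much is the one already given: at -s 1 the whole data set is a few tens of megabytes and every TPC-B transaction updates the same single-row branches table, so hundreds of clients mostly measure lock contention rather than memory or disk.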
You can run more \ntests after that without doing the initialization step again, but if you \nrun a lot of the write-heavy tests eventually performance will start to \ndegrade.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 19 Aug 2010 14:25:02 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Greg Smith wrote:\n> Since your smaller system has 2GB of RAM and the larger one 32GB, try \n> this instead:\n>\n> pgbench -i -s 2000\n> pgbench -c 24 -T 60 -S\n> pgbench -c 24 -T 300\n\nOh, and to at least give a somewhat more normal postgresql.conf I'd \nrecommend you at least make the following two changes before doing the \nabove:\n\nshared_buffers=256MB\ncheckpoint_segments=32\n\nThose are the two parameters the pgbench test is most sensitive to, so \nsetting to higher values will give more realistic results.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 19 Aug 2010 14:31:39 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "\nOn Aug 19, 2010, at 11:25 AM, Greg Smith wrote:\n\n> Philippe Rimbault wrote:\n>> I've run \"time pgbench -c 50\" :\n>> server x64 :\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> query mode: simple\n>> number of clients: 50\n>> number of transactions per client: 10\n>> number of transactions actually processed: 500/500\n>> tps = 523.034437 (including connections establishing)\n>> tps = 663.511008 (excluding connections establishing)\n>> \n> \n> As mentioned already, most of the difference you're seeing is simply \n> that your desktop system has faster individual processor cores in it, so \n> jobs where only a single core are being used are going to be faster on it.\n> \n\nBut the select count(*) query, cached in RAM is 3x faster in one system than the other. The CPUs aren't 3x different performance wise. Something else may be wrong here.\n\nAn individual Core2 Duo 2.93Ghz should be at most 50% faster than a 2.2Ghz Opteron for such a query. Unless there are some compile options that are set wrong. I would check the compile options.\n\n\n", "msg_date": "Thu, 26 Aug 2010 21:30:58 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Scott Carey wrote:\n> But the select count(*) query, cached in RAM is 3x faster in one system than the other. The CPUs aren't 3x different performance wise. Something else may be wrong here.\n>\n> An individual Core2 Duo 2.93Ghz should be at most 50% faster than a 2.2Ghz Opteron for such a query. Unless there are some compile options that are set wrong. I would check the compile options.\n> \n\nSure, it might be. But I've seen RAM on an Intel chip like the E7500 \nhere (DDR3-1066 or better, around 10GB/s possible) run almost 3X as fast \nas what you'll find paired with an Opteron 2427 (DDR2-800, closer to \n3.5GB/s). 
Throw in the clock differences and there you go.\n\nI've been wandering around for years warning that the older Opterons on \nDDR2 running a single PostgreSQL process are dog slow compared to the \nsame thing on Intel. So that alone might actually be enough to account \nfor the difference. Ultimately the multi-processor stuff is what's more \nimportant to most apps, though, which is why I was hinting to properly \nrun that instead.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 27 Aug 2010 13:25:08 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Hi!\n\nOn Fri, Aug 27, 2010 at 12:55 PM, Greg Smith <[email protected]> wrote:\n> Scott Carey wrote:\n>>\n>> But the select count(*) query, cached in RAM is 3x faster in one system\n>> than the other.  The CPUs aren't 3x different performance wise.  Something\n>> else may be wrong here.\n>>\n>> An individual Core2 Duo 2.93Ghz should be at most 50% faster than a 2.2Ghz\n>> Opteron for such a query.   Unless there are some compile options that are\n>> set wrong.   I would check the compile options.\n>>\n>\n> Sure, it might be.  But I've seen RAM on an Intel chip like the E7500 here\n> (DDR3-1066 or better, around 10GB/s possible) run almost 3X as fast as what\n> you'll find paired with an Opteron 2427 (DDR2-800, closer to 3.5GB/s).\n>  Throw in the clock differences and there you go.\n\nPrecisely! CPU core clock is not all that matters, specially when it\ncomes to work with large datasets. CPU core clock will only make a\ndifference with relatively small (ie, that fits on cpu cache) code\nthat works with a relatively small (ie, that *also* fits on cpu cache)\ndataset, for example, a series PI calculation, or a simple prime\nnumber generation algorithm, but when it comes to large amounts of\ndata/code, the RAM starts to play a vital role, and not just \"raw\" RAM\nspeed, but latency!!! (a combination of them both) some people just go\nfor the \"fastest\" RAM around, but they don't pay attention to latency\nnumbers, you need to get the fastest RAM with the slowest latency.\n\nAlso, nowadays, Intel has better performance than AMD, at least when\ncomparing Athlon 64 vs Core2, I'm still saving to get a Phenom II\nsystem in order to benchmark them and see how it goes (does anyone\nhave one of these for testing?).\n\n>\n> I've been wandering around for years warning that the older Opterons on DDR2\n> running a single PostgreSQL process are dog slow compared to the same thing\n> on Intel.  So that alone might actually be enough to account for the\n> difference.  
Ultimately the multi-processor stuff is what's more important\n> to most apps, though, which is why I was hinting to properly run that\n> instead.\n>\n> --\n> Greg Smith  2ndQuadrant US  Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected]   www.2ndQuadrant.us\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 29 Aug 2010 10:37:20 -0430", "msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Jose Ildefonso Camargo Tolosa wrote:\n> Also, nowadays, Intel has better performance than AMD, at least when\n> comparing Athlon 64 vs Core2, I'm still saving to get a Phenom II\n> system in order to benchmark them and see how it goes (does anyone\n> have one of these for testing?).\n> \n\nThings even out again when you reach the large server line from AMD that \nuses DDR-3 RAM; they've finally solved this problem there. Scott \nMarlowe has been helping me out with some tests of a new system he's got \nrunning the AMD Opteron 6172, using the STREAM memory benchmark. Intro \nto that and some sample numbers at \nhttp://www.advancedclustering.com/company-blog/stream-benchmarking.html\n\nHe's been seeing >75GB/s of aggregate memory bandwidth out of that \nmonster--using gcc, so even at a disadvantage compared to the Intel one \nused for that report. If you're only using one or two cores Intel still \nseems to have a lead, I am still working out if that's true in every \nsituation.\n\nI haven't had a chance to test any of the Phenom II processors yet, from \nwhat I know of their design I expect them to still have the same \nfundamental design issues that kept all AMD processors from scaling very \nwell, memory speed wise, the last few years. You might be able to dig a \nsystem using one of them out of the list at \nhttp://www.cs.virginia.edu/stream/peecee/Bandwidth.html , I didn't \nnotice anything obvious that featured one.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 30 Aug 2010 00:40:04 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Greg Smith wrote:\n> He's been seeing >75GB/s of aggregate memory bandwidth out of that \n> monster--using gcc, so even at a disadvantage compared to the Intel \n> one used for that report.\n\nOn second read this was confusing. The best STREAM results from using \nthe Intel compiler on Linux. The ones I've been doing and that Scott \nhas been running are using regular gcc instead. 
So when the new AMD \nsystem is clearing 75MB/s in the little test set I'm trying to get \nautomated, that's actually a conservative figure, given that a compiler \nswap is almost guaranteed to boost results a bit too.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 30 Aug 2010 01:01:10 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Jose Ildefonso Camargo Tolosa wrote:\n> Also, nowadays, Intel has better performance than AMD, at least when\n> comparing Athlon 64 vs Core2, I'm still saving to get a Phenom II\n> system in order to benchmark them and see how it goes (does anyone\n> have one of these for testing?).\nroot@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# cat /proc/cpuinfo\nprocessor : 0\nvendor_id : AuthenticAMD\ncpu family : 16\nmodel : 4\nmodel name : AMD Phenom(tm) II X4 940 Processor\nstepping : 2\ncpu MHz : 3000.000\ncache size : 512 KB\nphysical id : 0\nsiblings : 4\ncore id : 0\ncpu cores : 4\napicid : 0\ninitial apicid : 0\nfpu : yes\nfpu_exception : yes\ncpuid level : 5\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge \nmca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext \nfxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good \nnonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm \nextapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt\nbogomips : 6020.46\nTLB size : 1024 4K pages\nclflush size : 64\ncache_alignment : 64\naddress sizes : 48 bits physical, 48 bits virtual\npower management: ts ttp tm stc 100mhzsteps hwpstate\n\n\nstream compiled with -O3\n\nroot@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# ./a.out\n-------------------------------------------------------------\nSTREAM version $Revision: 5.9 $\n-------------------------------------------------------------\nThis system uses 8 bytes per DOUBLE PRECISION word.\n-------------------------------------------------------------\nArray size = 2000000, Offset = 0\nTotal memory required = 45.8 MB.\nEach test is run 10 times, but only\nthe *best* time for each is used.\n-------------------------------------------------------------\nPrinting one line per active thread....\n-------------------------------------------------------------\nYour clock granularity/precision appears to be 1 microseconds.\nEach test below will take on the order of 5031 microseconds.\n (= 5031 clock ticks)\nIncrease the size of the arrays if this shows that\nyou are not getting at least 20 clock ticks per test.\n-------------------------------------------------------------\nWARNING -- The above is only a rough guideline.\nFor best results, please be sure you know the\nprecision of your system timer.\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 5056.0434 0.0064 0.0063 0.0064\nScale: 4950.4916 0.0065 0.0065 0.0065\nAdd: 5322.0173 0.0091 0.0090 0.0091\nTriad: 5395.1815 0.0089 0.0089 0.0089\n-------------------------------------------------------------\nSolution Validates\n-------------------------------------------------------------\n\ntwo parallel\nroot@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# ./a.out & ./a.out\n\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 2984.2741 0.0108 0.0107 
0.0108\nScale: 2945.8261 0.0109 0.0109 0.0110\nAdd: 3282.4631 0.0147 0.0146 0.0149\nTriad: 3321.2893 0.0146 0.0145 0.0148\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 2981.4898 0.0108 0.0107 0.0108\nScale: 2943.3067 0.0109 0.0109 0.0109\nAdd: 3283.8552 0.0147 0.0146 0.0149\nTriad: 3313.9634 0.0147 0.0145 0.0148\n\n\nfour parallel\nroot@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# ./a.out & ./a.out & \n./a.out & ./a.out\n\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 1567.4880 0.0208 0.0204 0.0210\nScale: 1525.3401 0.0211 0.0210 0.0213\nAdd: 1739.7735 0.0279 0.0276 0.0282\nTriad: 1763.4858 0.0274 0.0272 0.0276\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 1559.0759 0.0208 0.0205 0.0210\nScale: 1536.2520 0.0211 0.0208 0.0212\nAdd: 1740.4503 0.0279 0.0276 0.0283\nTriad: 1758.4951 0.0276 0.0273 0.0279\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 1552.7271 0.0208 0.0206 0.0210\nScale: 1527.5275 0.0211 0.0209 0.0212\nAdd: 1737.9263 0.0279 0.0276 0.0282\nTriad: 1757.3439 0.0276 0.0273 0.0278\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 1515.5912 0.0213 0.0211 0.0214\nScale: 1544.7033 0.0210 0.0207 0.0212\nAdd: 1754.4495 0.0278 0.0274 0.0281\nTriad: 1856.3659 0.0279 0.0259 0.0284\n\n\n", "msg_date": "Mon, 30 Aug 2010 09:58:16 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "\nOn Aug 27, 2010, at 10:25 AM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> But the select count(*) query, cached in RAM is 3x faster in one system than the other. The CPUs aren't 3x different performance wise. Something else may be wrong here.\n>> \n>> An individual Core2 Duo 2.93Ghz should be at most 50% faster than a 2.2Ghz Opteron for such a query. Unless there are some compile options that are set wrong. I would check the compile options.\n>> \n> \n> Sure, it might be. But I've seen RAM on an Intel chip like the E7500 \n> here (DDR3-1066 or better, around 10GB/s possible) run almost 3X as fast \n> as what you'll find paired with an Opteron 2427 (DDR2-800, closer to \n> 3.5GB/s). Throw in the clock differences and there you go.\n\nThe 2427 should do 12.8 GB/sec theoretical peak (dual channel 800Mhz DDR2) per processor socket (so 2x that if multithreaded and 2 Sockets).\n\nA Nehalem will do ~2x that (triple channel, 1066Mhz) and is also significantly faster clock for clock.\n\nBut a Core2 based Xeon on Socket 775 at 1066Mhz FSB? Nah... its theoretical peak bandwidth is 33% more and real world no more than 40% more.\n\nLatency and other factors might add up too. 3x just does not make sense here. \n\nNehalem would be another story, but Core2 was only slightly faster than Opterons of this generation and did not scale as well with more sockets.\n\n\n> \n> I've been wandering around for years warning that the older Opterons on \n> DDR2 running a single PostgreSQL process are dog slow compared to the \n> same thing on Intel. \n\nThis isn't an older Opteron, its 6 core, 6MB L3 cache \"Istanbul\". Its not the newer stuff either. 
The E7500 is basically the end of line Core2 before Nehalem based processors took over.\n\n> So that alone might actually be enough to account \n> for the difference. Ultimately the multi-processor stuff is what's more \n> important to most apps, though, which is why I was hinting to properly \n> run that instead.\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n", "msg_date": "Mon, 30 Aug 2010 01:29:04 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "On Mon, Aug 30, 2010 at 1:58 AM, Yeb Havinga <[email protected]> wrote:\n> four parallel\n> root@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# ./a.out & ./a.out & ./a.out\n> & ./a.out\n\nYou know you can just do \"stream 4\" to get 4 parallel streams right?\n", "msg_date": "Mon, 30 Aug 2010 04:54:42 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Scott Marlowe wrote:\n> On Mon, Aug 30, 2010 at 1:58 AM, Yeb Havinga <[email protected]> wrote:\n> \n>> four parallel\n>> root@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# ./a.out & ./a.out & ./a.out\n>> & ./a.out\n>> \n>\n> You know you can just do \"stream 4\" to get 4 parallel streams right?\n> \nWhich version is that? The stream.c source contains no argc/argv usage, \nthough Code/Versions/Experimental has a script called Parallel_jobs that \nspawns n processes.\n\n-- Yeb\n\n", "msg_date": "Mon, 30 Aug 2010 14:52:23 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Scott Carey wrote:\n> The 2427 should do 12.8 GB/sec theoretical peak (dual channel 800Mhz DDR2) per processor socket (so 2x that if multithreaded and 2 Sockets).\n>\n> A Nehalem will do ~2x that (triple channel, 1066Mhz) and is also significantly faster clock for clock.\n>\n> But a Core2 based Xeon on Socket 775 at 1066Mhz FSB? Nah... its theoretical peak bandwidth is 33% more and real world no more than 40% more.\n> The E7500 is basically the end of line Core2 before Nehalem based processors took over.\n> \n\nAh...from its use of DDR3, I thought that the E7500 was a low-end \nNehalem. Now I see that you're right, that it's actually a high-end \nWolfdale. So that does significantly decrease the margin between the \ntwo I'd expect. I agree with your figures, and that this may be back to \nlooking a little fishy.\n\nThe other thing I normally check is whether one of the two systems has \nmore aggressive power management turned on. Easiest way to tell on \nLinux is look at /proc/cpuinfo , and see if the displayed processor \nspeed is much lower than the actual one. Many systems default to \nsomething pretty conservative here, and don't return up to full speed \nnearly fast enough for some benchmark tests.\n\n\n> This isn't an older Opteron, its 6 core, 6MB L3 cache \"Istanbul\". Its not the newer stuff either. \n> \n\nEverything before Magny Cours is now an older Opteron from my \nperspective. They've caught up with Intel again with the release of \nthose. 
Everything from AMD that's come out ever since Intel Nehalem \nproducts started shipping in quantity (early 2009) have been marginal \nproducts until the new M-C, and their early Quad-core stuff was pretty \nterrible too. So in my head I'm lumping AMD's Budapest, Shanghai, and \nIstanbul product lines all into a giant \"slow compared to Intel during \nthe same period\" bin in my head. Fine for databases with lots of \nclients, not so good at executing single queries quickly.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 30 Aug 2010 11:15:31 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Yeb Havinga wrote:\n> model name : AMD Phenom(tm) II X4 940 Processor @ 3.00GHz\n> cpu cores : 4\n> stream compiled with -O3\n> Function Rate (MB/s) Avg time Min time Max time\n> Triad: 5395.1815 0.0089 0.0089 0.0089\n\nFor comparison sake, an only moderately expensive desktop Intel CPU \nusing DDR3-1600:\n\nmodel name : Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz\ncpu cores : 4\nsiblings : 8\nNumber of Threads requested = 4\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 13666.0986 0.0108 0.0107 0.0108\n\n8 hyper-threaded cores here. They work well for improving CPU-heavy \ntasks, but with 4 threads total is where the memory throughput maxes out at.\n\nI'm not sure if Yeb's stream was compiled to use MPI correctly though, \nbecause I'm not seeing \"Number of Threads\" in his results. Here's what \nworks for me:\n\n gcc -O3 -fopenmp stream.c -o stream\n\nAnd then you can set:\n\nexport OMP_NUM_THREADS=4\n\nOr whatever you want in order to control the number of threads it uses \ninside. Here's the way scaling works on my processor:\n\nNumber of Threads requested = 1\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 9806.2648 0.0150 0.0149 0.0151\n\nNumber of Threads requested = 2\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 12495.2113 0.0117 0.0117 0.0118\n\nNumber of Threads requested = 3\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 13388.7187 0.0111 0.0109 0.0126\n\nNumber of Threads requested = 4\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 13695.6611 0.0107 0.0107 0.0108\n\nNumber of Threads requested = 5\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 12651.7200 0.0116 0.0116 0.0116\n\nNumber of Threads requested = 6\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 12804.7192 0.0115 0.0114 0.0117\n\nNumber of Threads requested = 7\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 12670.2525 0.0116 0.0116 0.0117\n\nNumber of Threads requested = 8\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 12468.5739 0.0119 0.0117 0.0131\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 30 Aug 2010 11:15:58 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Hi!\n\nThanks you all for this great amount of information!\n\nWhat memory/motherboard (ie, chipset) is installed on the phenom ii one?\n\nit looks like it peaks to ~6.2GB/s with 4 threads.\n\nAlso, what kernel is on it? 
(uname -a would be nice).\n\nNow, this looks like sustained memory speed, what about random memory\naccess (where latency comes to play an important role):\nhttp://icl.cs.utk.edu/projectsfiles/hpcc/RandomAccess/\n\nI don't have any of these systems to test, but it would be interesting\nto get the random access benchmarks too, what do you think? will the\nresult be the same?\n\nOnce again, thanks!\n\nSincerely,\n\nIldefonso Camargo\n", "msg_date": "Mon, 30 Aug 2010 11:39:19 -0430", "msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Hi,\n\n>> This isn't an older Opteron, its 6 core, 6MB L3 cache \"Istanbul\".  Its not\n>> the newer stuff either.\n>\n> Everything before Magny Cours is now an older Opteron from my perspective.\n\nThe 6-cores are identical to Magny Cours (except that Magny Cours has\ntwo of those beast in one package).\n\n- Clemens\n", "msg_date": "Mon, 30 Aug 2010 22:41:59 +0200", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Clemens Eisserer wrote:\n> Hi,\n>\n> \n>>> This isn't an older Opteron, its 6 core, 6MB L3 cache \"Istanbul\". Its not\n>>> the newer stuff either.\n>>> \n>> Everything before Magny Cours is now an older Opteron from my perspective.\n>> \n>\n> The 6-cores are identical to Magny Cours (except that Magny Cours has\n> two of those beast in one package).\n> \n\nIn some ways, but not in regards to memory issues. \nhttp://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/2 \nhas a good intro. While the inside is like two 6-core models stuck \ntogether, the external memory interface was completely reworked.\n\nOriginal report here involved Opteron 2427, correctly idenitified as \nbeing from the 6-core \"Istanbul\" architecture. All Istanbul processors \nuse DDR2 and are quite slow at memory access compared to similar Intel \nNehalem systems. The \"Magny-Cours\" architecture is available in 8 and \n12 core variants, and the memory controller has been completely \nredesigned to take advantage of many banks of DDR3 at the same time; it \nis far faster than two of the older 6 cores working together.\n\nhttp://en.wikipedia.org/wiki/List_of_AMD_Opteron_microprocessors has a \ngood summary of the models; it's confusing. Quick chart showing the \nthree generations compared demonstrates what I just said above using the \nsame STREAM benchmarking that a few results have popped out here using \nalready:\n\nhttp://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/5\n\nIstanbul Opteron 2435 in this case, 21GB/s. The two Nehelam Intel \nXeons, >31GB/s. New Magny, 49MB/s.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\nClemens Eisserer wrote:\n\nHi,\n\n \n\n\nThis isn't an older Opteron, its 6 core, 6MB L3 cache \"Istanbul\".  Its not\nthe newer stuff either.\n \n\nEverything before Magny Cours is now an older Opteron from my perspective.\n \n\n\nThe 6-cores are identical to Magny Cours (except that Magny Cours has\ntwo of those beast in one package).\n \n\n\nIn some ways, but not in regards to memory issues. \nhttp://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/2\nhas a good intro.  
While the inside is like two 6-core models stuck\ntogether, the external memory interface was completely reworked.\n\nOriginal report here involved Opteron 2427, correctly idenitified as\nbeing from the 6-core \"Istanbul\" architecture.  All Istanbul processors\nuse DDR2 and are quite slow at memory access compared to similar Intel\nNehalem systems.  The \"Magny-Cours\" architecture is available in 8 and\n12 core variants, and the memory controller has been completely\nredesigned to take advantage of many banks of DDR3 at the same time; it\nis far faster than two of the older 6 cores working together. \n\nhttp://en.wikipedia.org/wiki/List_of_AMD_Opteron_microprocessors has a\ngood summary of the models; it's confusing.  Quick chart showing the\nthree generations compared demonstrates what I just said above using\nthe same STREAM benchmarking that a few results have popped out here\nusing already:\n\nhttp://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/5\n\nIstanbul Opteron 2435 in this case, 21GB/s.  The two Nehelam Intel\nXeons, >31GB/s.  New Magny, 49MB/s.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us", "msg_date": "Mon, 30 Aug 2010 18:31:53 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Hi!\n\nThanks for the review link!\n\nIldefonso.\n\nOn Mon, Aug 30, 2010 at 6:01 PM, Greg Smith <[email protected]> wrote:\n> Clemens Eisserer wrote:\n>\n> Hi,\n>\n>\n>\n> This isn't an older Opteron, its 6 core, 6MB L3 cache \"Istanbul\".  Its not\n> the newer stuff either.\n>\n>\n> Everything before Magny Cours is now an older Opteron from my perspective.\n>\n>\n> The 6-cores are identical to Magny Cours (except that Magny Cours has\n> two of those beast in one package).\n>\n>\n> In some ways, but not in regards to memory issues.\n> http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/2\n> has a good intro.  While the inside is like two 6-core models stuck\n> together, the external memory interface was completely reworked.\n>\n> Original report here involved Opteron 2427, correctly idenitified as being\n> from the 6-core \"Istanbul\" architecture.  All Istanbul processors use DDR2\n> and are quite slow at memory access compared to similar Intel Nehalem\n> systems.  The \"Magny-Cours\" architecture is available in 8 and 12 core\n> variants, and the memory controller has been completely redesigned to take\n> advantage of many banks of DDR3 at the same time; it is far faster than two\n> of the older 6 cores working together.\n>\n> http://en.wikipedia.org/wiki/List_of_AMD_Opteron_microprocessors has a good\n> summary of the models; it's confusing.  Quick chart showing the three\n> generations compared demonstrates what I just said above using the same\n> STREAM benchmarking that a few results have popped out here using already:\n>\n> http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/5\n>\n> Istanbul Opteron 2435 in this case, 21GB/s.  The two Nehelam Intel Xeons,\n>>31GB/s.  
New Magny, 49MB/s.\n>\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n", "msg_date": "Mon, 30 Aug 2010 20:59:35 -0430", "msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Greg Smith wrote:\n> Yeb Havinga wrote:\n>> model name : AMD Phenom(tm) II X4 940 Processor @ 3.00GHz\n>> cpu cores : 4\n>> stream compiled with -O3\n>> Function Rate (MB/s) Avg time Min time Max time\n>> Triad: 5395.1815 0.0089 0.0089 0.0089\n> I'm not sure if Yeb's stream was compiled to use MPI correctly though, \n> because I'm not seeing \"Number of Threads\" in his results. Here's \n> what works for me:\n>\n> gcc -O3 -fopenmp stream.c -o stream\n>\n> And then you can set:\n>\n> export OMP_NUM_THREADS=4\nThen I get the following. The rather wierd dip at 5 threads is \nconsistent over multiple tries:\n\nNumber of Threads requested = 1\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 5378.7495 0.0089 0.0089 0.0090\n\nNumber of Threads requested = 2\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 6596.1140 0.0073 0.0073 0.0073\n\nNumber of Threads requested = 3\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 7033.9806 0.0069 0.0068 0.0069\n\nNumber of Threads requested = 4\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 7007.2950 0.0069 0.0069 0.0069\n\nNumber of Threads requested = 5\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 6553.8133 0.0074 0.0073 0.0074\n\nNumber of Threads requested = 6\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 6803.6427 0.0071 0.0071 0.0071\n\nNumber of Threads requested = 7\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 6895.6909 0.0070 0.0070 0.0071\n\nNumber of Threads requested = 8\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 6931.3018 0.0069 0.0069 0.0070\n\nOther info: DDR2 800MHz ECC memory\nMB: 790FX chipset (Asus m4a78-e)\n\nregards,\nYeb Havinga\n\n", "msg_date": "Tue, 31 Aug 2010 14:41:42 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Yeb Havinga wrote:\n> The rather wierd dip at 5 threads is consistent over multiple tries\n\nI've seen that twice on 4 core systems now. The spot where there's just \none more thread than cores seems to be the worst case for cache \nthrashing on a lot of these servers.\n\nHow much total RAM is in this server? Are all the slots filled? Just \nfilling in a spreadsheet I have here with sample configs of various \nhardware.\n\nYeb's results look right to me now. That's what an AMD Phenom II X4 940 \n@ 3.00GHz should look like. It's a little faster, memory-wise, than my \nolder Intel Q6600 @ 2.4GHz. So they've finally caught up with that \ngeneration of Intel's stuff. But my current desktop quad-core i860 with \nhyperthreading is nearly twice as fast in terms of memory access at \nevery thread size. 
That's why I own one of them instead of a Phenom II X4.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 31 Aug 2010 11:43:41 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Hi!\n\nOn Tue, Aug 31, 2010 at 8:11 AM, Yeb Havinga <[email protected]> wrote:\n> Greg Smith wrote:\n>>\n>> Yeb Havinga wrote:\n>>>\n>>> model name      : AMD Phenom(tm) II X4 940 Processor @ 3.00GHz\n>>> cpu cores         : 4\n>>> stream compiled with -O3\n>>> Function      Rate (MB/s)   Avg time     Min time     Max time\n>>> Triad:       5395.1815       0.0089       0.0089       0.0089\n>>\n>> I'm not sure if Yeb's stream was compiled to use MPI correctly though,\n>> because I'm not seeing \"Number of Threads\" in his results.  Here's what\n>> works for me:\n>>\n>>  gcc -O3 -fopenmp stream.c -o stream\n>>\n>> And then you can set:\n>>\n>> export OMP_NUM_THREADS=4\n>\n> Then I get the following. The rather wierd dip at 5 threads is consistent\n> over multiple tries:\n>\n> Number of Threads requested = 1\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       5378.7495       0.0089       0.0089       0.0090\n>\n> Number of Threads requested = 2\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       6596.1140       0.0073       0.0073       0.0073\n>\n> Number of Threads requested = 3\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       7033.9806       0.0069       0.0068       0.0069\n>\n> Number of Threads requested = 4\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       7007.2950       0.0069       0.0069       0.0069\n>\n> Number of Threads requested = 5\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       6553.8133       0.0074       0.0073       0.0074\n>\n> Number of Threads requested = 6\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       6803.6427       0.0071       0.0071       0.0071\n>\n> Number of Threads requested = 7\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       6895.6909       0.0070       0.0070       0.0071\n>\n> Number of Threads requested = 8\n> Function      Rate (MB/s)   Avg time     Min time     Max time\n> Triad:       6931.3018       0.0069       0.0069       0.0070\n>\n> Other info: DDR2 800MHz ECC memory\n\nOk, this could explain the huge difference. I was planing on getting\nGigaByte GA-890GPA-UD3H, with a Phenom II X6 and that ram: Crucial\nCT2KIT25664BA13​39, Crucial BL2KIT25664FN1608, or something better I\nfind when I get enough money (depending on my budget at the moment).\n\n> MB: 790FX chipset (Asus m4a78-e)\n>\n> regards,\n> Yeb Havinga\n>\n>\n\nThanks for the extra info!\n\nIldefonso.\n", "msg_date": "Tue, 31 Aug 2010 11:23:24 -0430", "msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Hi!\n\nOn Tue, Aug 31, 2010 at 11:13 AM, Greg Smith <[email protected]> wrote:\n> Yeb Havinga wrote:\n>>\n>> The rather wierd dip at 5 threads is consistent over multiple tries\n>\n> I've seen that twice on 4 core systems now.  
The spot where there's just one\n> more thread than cores seems to be the worst case for cache thrashing on a\n> lot of these servers.\n>\n> How much total RAM is in this server?  Are all the slots filled?  Just\n> filling in a spreadsheet I have here with sample configs of various\n> hardware.\n>\n> Yeb's results look right to me now.  That's what an AMD Phenom II X4 940 @\n> 3.00GHz should look like.  It's a little faster, memory-wise, than my older\n> Intel Q6600 @ 2.4GHz.  So they've finally caught up with that generation of\n> Intel's stuff.  But my current desktop quad-core i860 with hyperthreading is\n> nearly twice as fast in terms of memory access at every thread size.  That's\n> why I own one of them instead of a Phenom II X4.\n\nyour i860? http://en.wikipedia.org/wiki/Intel_i860 wow!. :D\n\nNow, seriously: what memory (brand/model) does the Q6600 and your\nnewer desktop have?\n\nI'm just too curious, last time I was able to run benchmarks myself\nwas with a core2duo and a athlon 64 x2, back then: core2due beated\nathlon at almost anything.\n\nNowadays, it looks like amd is playing the \"more cores for the money\"\ngame, but I think that sooner or later they will catchup again, and\nwhen that happen: Intel will just get another ET chip, and put on\nmarked,and so on! :D\n\nThis is a game where the winners are: us!\n", "msg_date": "Tue, 31 Aug 2010 11:36:04 -0430", "msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Jose Ildefonso Camargo Tolosa wrote:\n> Ok, this could explain the huge difference. I was planing on getting\n> GigaByte GA-890GPA-UD3H, with a Phenom II X6 and that ram: Crucial\n> CT2KIT25664BA13​39, Crucial BL2KIT25664FN1608, or something better I\n> find when I get enough money (depending on my budget at the moment).\n> \nWhy not pair a 8-core magny cours ($280,- at newegg \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16819105266) with a \nsupermicro ATX board \nhttp://www.supermicro.com/Aplus/motherboard/Opteron6100/SR56x0/H8SGL-F.cfm \n($264 at newegg \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16813182230&Tpk=H8SGL-F) \nand some memory?\n\nregards,\nYeb Havinga\n\n", "msg_date": "Tue, 31 Aug 2010 18:08:32 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Note that in that graph, the odd dips are happening every 8 cores on a\nsystem with 4 12 core processors. I don't know why, I would expect it\nto be every 6 or something.\n", "msg_date": "Tue, 31 Aug 2010 11:03:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "And, I have zone reclaim set to off because it makes the linux kernel\non large cpu machines make pathologically unsound decisions during\nlarge file transfers.\n", "msg_date": "Tue, 31 Aug 2010 11:04:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" }, { "msg_contents": "Jose Ildefonso Camargo Tolosa wrote:\n> your i860? http://en.wikipedia.org/wiki/Intel_i860 wow!. 
:D\n> \n\nThat's supposed to be i7-860: \nhttp://en.wikipedia.org/wiki/List_of_Intel_Core_i7_microprocessors\n\nIt was a whole $199, so not an expensive processor.\n\n> Now, seriously: what memory (brand/model) does the Q6600 and your\n> newer desktop have?\n> \n\nQ6600 is running Corsair DDR2-800 (5-5-5-18): \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16820145176\n\ni7-860 has Corsair DDR3-1600 C8 (8-8-8-24): \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16820145265\n\nBoth systems have 4 2GB modules in them for 8GB total.\n\nI've been both happy with the performance of the Corsair stuff, and with \nhow their head spreader design keeps my grubby fingers off the sensitive \nparts of the chips. This is all desktop memory though; the registered \nand ECC stuff for servers tends to be a bit slower, but for good reasons.\n\n> I'm just too curious, last time I was able to run benchmarks myself\n> was with a core2duo and a athlon 64 x2, back then: core2due beated\n> athlon at almost anything.\n> \n\nYes. The point I've made a couple of times here already is that Intel \npulled ahead around the Core 2 time, and AMD has been anywhere from a \nlittle to way behind ever since. And in the last 18 months that's \nmainly been related to the memory controller design, not the CPUs \nthemselves. Until these new Magny Cours designs, where AMD finally \ncaught back up, particularly on big servers with lots of banks of RAM. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 31 Aug 2010 13:19:52 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Aug 31, 2010 at 6:41 AM, Yeb Havinga <[email protected]> wrote:\n> \n>>> export OMP_NUM_THREADS=4\n>>> \n>> Then I get the following. The rather wierd dip at 5 threads is consistent\n>> over multiple tries:\n>>\n>> \n>\n> I get similar dips on my server. Especially as you make the stream\n> test write a large enough chunk of data to outrun its caches.\n>\n> See attached png.\n> \nInteresting graph, especially since the overall feeling is a linear like \nincrease in memory bandwidth when more cores are active.\n\nJust curious, what is the 8-core cpu?\n\n-- Yeb\n\n", "msg_date": "Tue, 31 Aug 2010 20:55:29 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit\n desktop" }, { "msg_contents": "On Tue, Aug 31, 2010 at 12:55 PM, Yeb Havinga <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> On Tue, Aug 31, 2010 at 6:41 AM, Yeb Havinga <[email protected]> wrote:\n>>\n>>>>\n>>>> export OMP_NUM_THREADS=4\n>>>>\n>>>\n>>> Then I get the following. The rather wierd dip at 5 threads is consistent\n>>> over multiple tries:\n>>>\n>>>\n>>\n>> I get similar dips on my server.  
Especially as you make the stream\n>> test write a large enough chunk of data to outrun its caches.\n>>\n>> See attached png.\n>>\n>\n> Interesting graph, especially since the overall feeling is a linear like\n> increase in memory bandwidth when more cores are active.\n>\n> Just curious, what is the 8-core cpu?\n\n8 core = dual 2352 cpus (2x4) 2.1 GHz\n12 core = dual 2427 cpus (2x6) 2.2 GHz\n48 core = quad 6127 cpus (4x12) 2.1 GHz\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Tue, 31 Aug 2010 12:56:53 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on new 64bit server compared to my 32bit desktop" } ]
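For anyone repeating the memory-bandwidth comparison from this thread, the pieces given above (the gcc -O3 -fopenmp build, the OMP_NUM_THREADS variable, and stream.c version 5.9 from www.cs.virginia.edu/stream/FTP/Code) combine into a short sweep; the loop and the grep filter are an added sketch, not something posted in the thread:

# sketch: build the OpenMP version of stream.c and sweep the thread count
gcc -O3 -fopenmp stream.c -o stream
for t in 1 2 3 4 5 6 7 8; do
    OMP_NUM_THREADS=$t ./stream | grep -E 'Threads requested|Triad'
done

Running the same sweep on both machines puts the single-thread and many-thread Triad numbers side by side, which is the comparison the later messages keep coming back to.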
[ { "msg_contents": "I have two tables:\nA: ItemID (PK), IsssueID (Indexed)\nB: ItemID (FK), IndexNumber : PK(ItemID, IndexNumber)\n\nBoth tables have several million columns, but B has much more than A.\n\nNow if I run\n\nSELECT A.ItemID FROM A, B WHERE A.ItemID = B.itemID AND A.issueID =\n<some id>\n\nThe query takes extremely long (several hours). I ran EXPLAIN and got:\n\n\"Hash Join (cost=516.66..17710110.47 rows=8358225 width=16)\"\n\" Hash Cond: ((b.itemid)::bpchar = a.itemid)\"\n\" -> Seq Scan on b (cost=0.00..15110856.68 rows=670707968 width=16)\"\n\" -> Hash (cost=504.12..504.12 rows=1003 width=16)\"\n\" -> Index Scan using idx_issueid on a (cost=0.00..504.12\nrows=1003 width=16)\"\n\" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nNow we see the problem is the seq scan on B. However there are only a\nhandful of rows in A that match a specific issueID. So my question is\nwhy doesn't it just check for each of the ItemIDs that have the correct\nIssueID in A if there is a matching itemID in B. This should be really\nfast because ItemID in B is indexed since it is part of the primary key.\n\nWhat is the reason for postgres not doing this, is there a way I can\nmake it do that? I'm using postgresql 8.4.4 and yes, I did run ANALYZE\non the entire DB.\n\nI have\nwork_mem = 10MB\nshared_buffer = 256MB\neffective_cache_size = 768MB\n\nThe database is basically for a single user.\n\nThanks a lot,\nJann\n\n", "msg_date": "Mon, 23 Aug 2010 06:23:38 +0200", "msg_from": "=?UTF-8?B?SmFubiBSw7ZkZXI=?= <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficient query plan" }, { "msg_contents": "On Sun, Aug 22, 2010 at 10:23 PM, Jann Röder <[email protected]> wrote:\n> I have two tables:\n> A: ItemID (PK), IsssueID (Indexed)\n> B: ItemID (FK), IndexNumber : PK(ItemID, IndexNumber)\n>\n> Both tables have several million columns, but B has much more than A.\n>\n> Now if I run\n>\n> SELECT A.ItemID FROM A, B WHERE A.ItemID = B.itemID AND A.issueID =\n> <some id>\n>\n> The query takes extremely long (several hours). I ran EXPLAIN and got:\n>\n> \"Hash Join  (cost=516.66..17710110.47 rows=8358225 width=16)\"\n> \"  Hash Cond: ((b.itemid)::bpchar = a.itemid)\"\n> \"  ->  Seq Scan on b  (cost=0.00..15110856.68 rows=670707968 width=16)\"\n> \"  ->  Hash  (cost=504.12..504.12 rows=1003 width=16)\"\n> \"        ->  Index Scan using idx_issueid on a  (cost=0.00..504.12\n> rows=1003 width=16)\"\n> \"              Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nHave you tried adding an index on b.indexid?\n", "msg_date": "Sun, 22 Aug 2010 23:51:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Also are a.indexid and b.indexid the same type?\n", "msg_date": "Sun, 22 Aug 2010 23:52:50 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Am 23.08.10 07:52, schrieb Scott Marlowe:\n> Also are a.indexid and b.indexid the same type?\n> \n\nYou mean ItemID? Fields of the same name are of the same type - so yes.\nAccording to the documentation pgsql adds indexes for primary keys\nautomatically so (b.itemID, b.indexNumber) is indexed. Or do you think\nadding an extra idnex for b.itemID alone will help? If I understand the\ndocumentation correctly, pqSQL can use the first column of a\nmulti-column index as if it was indexed individually... 
but maybe I'm\nwrong here.\n\n>> I have two tables:\n>> A: ItemID (PK), IsssueID (Indexed)\n>> B: ItemID (FK), IndexNumber : PK(ItemID, IndexNumber)\n\n\n", "msg_date": "Mon, 23 Aug 2010 12:15:43 +0200", "msg_from": "=?UTF-8?B?SmFubiBSw7ZkZXI=?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "On Mon, Aug 23, 2010 at 4:15 AM, Jann Röder <[email protected]> wrote:\n> Am 23.08.10 07:52, schrieb Scott Marlowe:\n>> Also are a.indexid and b.indexid the same type?\n>>\n>\n> You mean ItemID? Fields of the same name are of the same type - so yes.\n> According to the documentation pgsql adds indexes for primary keys\n> automatically so (b.itemID, b.indexNumber) is indexed. Or do you think\n> adding an extra idnex for b.itemID alone will help? If I understand the\n> documentation correctly, pqSQL can use the first column of a\n> multi-column index as if it was indexed individually... but maybe I'm\n> wrong here.\n\nIt can but that doesn't mean it will. A multi-column index is often\nquite a bit bigger than a single column one.\n\nWhat happens if you try\n\nset enable_seqscan=off;\n(your query here)\n", "msg_date": "Mon, 23 Aug 2010 04:18:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Am 23.08.10 12:18, schrieb Scott Marlowe:\n> On Mon, Aug 23, 2010 at 4:15 AM, Jann Röder <[email protected]> wrote:\n>> Am 23.08.10 07:52, schrieb Scott Marlowe:\n>>> Also are a.indexid and b.indexid the same type?\n>>>\n>>\n>> You mean ItemID? Fields of the same name are of the same type - so yes.\n>> According to the documentation pgsql adds indexes for primary keys\n>> automatically so (b.itemID, b.indexNumber) is indexed. Or do you think\n>> adding an extra idnex for b.itemID alone will help? If I understand the\n>> documentation correctly, pqSQL can use the first column of a\n>> multi-column index as if it was indexed individually... but maybe I'm\n>> wrong here.\n> \n> It can but that doesn't mean it will. A multi-column index is often\n> quite a bit bigger than a single column one.\n> \n> What happens if you try\n> \n> set enable_seqscan=off;\n> (your query here)\n> \nTried that already. The query plan is exactly the same.\n\n", "msg_date": "Mon, 23 Aug 2010 12:20:02 +0200", "msg_from": "=?UTF-8?B?SmFubiBSw7ZkZXI=?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Excerpts from Jann Röder's message of lun ago 23 00:23:38 -0400 2010:\n\n> \"Hash Join (cost=516.66..17710110.47 rows=8358225 width=16)\"\n> \" Hash Cond: ((b.itemid)::bpchar = a.itemid)\"\n> \" -> Seq Scan on b (cost=0.00..15110856.68 rows=670707968 width=16)\"\n> \" -> Hash (cost=504.12..504.12 rows=1003 width=16)\"\n> \" -> Index Scan using idx_issueid on a (cost=0.00..504.12\n> rows=1003 width=16)\"\n> \" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nHmm, I'm placing bets on the bpchar weirdness. I'd try getting rid of\nthat and using plain varchar for all the columns.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 23 Aug 2010 10:38:35 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Hmm, I'm placing bets on the bpchar weirdness. 
I'd try getting rid of\n> that and using plain varchar for all the columns.\n\nThat's certainly what's inhibiting it from considering an indexscan\non the larger table. I'm not as convinced as the OP that a nestloop\nindexscan is really going to win compared to the hash plan, but if\nthe comparison value is varchar then an index on a bpchar column\nis simply not useful --- at least not unless you stick an explicit\ncast into the query, so that the comparison will have bpchar rather\nthan varchar semantics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Aug 2010 11:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan " } ]
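Alvaro's suggestion above (plain varchar everywhere) can be turned into a concrete step once the column names show up below; a sketch only, using the database, table, and query that appear later in the thread, and assuming the stored keys do not rely on the blank padding that a character(n) to varchar cast strips off:

# sketch, not a tested recipe: make the two join keys the same type
# (the ALTER rewrites the table and touches the foreign keys that point at
# papers.itemid, so it will not be quick at several million rows; the other
# char(15) columns that reference it may deserve the same treatment)
psql wos-db <<'SQL'
ALTER TABLE papers ALTER COLUMN itemid TYPE varchar(15);
ANALYZE papers;
EXPLAIN SELECT p.itemid FROM papers p, paperreferences r
 WHERE p.itemid = r.itemid AND p.issueid = 'A1983PW823';
SQL

The other route Tom mentions, writing an explicit cast into the query itself, avoids the table rewrite but has to be carried along in every query that joins the two columns.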
[ { "msg_contents": "Jann Röder wrote:\nAm 23.08.10 12:18, schrieb Scott Marlowe:\n \n>> What happens if you try\n>>\n>> set enable_seqscan=off;\n>> (your query here)\n>>\n> Tried that already. The query plan is exactly the same.\n \nExactly? Not even the cost shown for the seq scan changed?\n \nYou are almost certainly omitting some crucial piece of information\nin your report. Please look over this page and post a more complete\nreport. In particular, please show the results of \\d for both tables\n(or of pg_dump -s -t 'tablename'), your complete postgresql.conf file\nstripped of comments, and a description of your hardware and OS.\n \n-Kevin\n", "msg_date": "Mon, 23 Aug 2010 07:08:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Thanks for your help,\nhere is the information you requested:\n\nTable information: A = Papers, B = PaperReferences\n\nwos-db=> \\d Papers\n Table \"public.papers\"\n Column | Type | Modifiers\n------------------+-------------------------+-----------\n itemid | character(15) | not null\n t9id | integer |\n artn | character varying |\n doi | character varying |\n pii | character varying |\n unsp | character varying |\n issueid | character(10) | not null\n title | character varying(1500) | not null\n titleenhancement | character varying(500) |\n beginningpage | character varying(19) |\n pagecount | integer | not null\n documenttype | character(1) | not null\n abstract | text |\nIndexes:\n \"papers_pkey\" PRIMARY KEY, btree (itemid)\n \"idx_papers_issueid\" btree (issueid)\nForeign-key constraints:\n \"papers_issueid_fkey\" FOREIGN KEY (issueid) REFERENCES\njournals(issueid) ON DELETE CASCADE\nReferenced by:\n TABLE \"authorkeywords\" CONSTRAINT \"authorkeywords_itemid_fkey\"\nFOREIGN KEY (itemid) REFERENCES papers(itemid) ON DELETE CASCADE\n TABLE \"authors\" CONSTRAINT \"authors_itemid_fkey\" FOREIGN KEY\n(itemid) REFERENCES papers(itemid) ON DELETE CASCADE\n TABLE \"grantnumbers\" CONSTRAINT \"grantnumbers_itemid_fkey\" FOREIGN\nKEY (itemid) REFERENCES papers(itemid) ON DELETE CASCADE\n TABLE \"keywordsplus\" CONSTRAINT \"keywordsplus_itemid_fkey\" FOREIGN\nKEY (itemid) REFERENCES papers(itemid) ON DELETE CASCADE\n TABLE \"languages\" CONSTRAINT \"languages_itemid_fkey\" FOREIGN KEY\n(itemid) REFERENCES papers(itemid) ON DELETE CASCADE\n TABLE \"paperreferences\" CONSTRAINT \"paperreferences_fromitemid_fkey\"\nFOREIGN KEY (itemid) REFERENCES papers(itemid) ON DELETE CASCADE\n\nwos-db=> \\d PaperReferences\n Table \"public.paperreferences\"\n Column | Type | Modifiers\n--------------------+-----------------------+-----------\n itemid | character varying(15) | not null\n t9id | integer |\n citedauthor | character varying(75) |\n citedartn | character varying |\n citeddoi | character varying |\n citedpii | character varying |\n citedunsp | character varying |\n citedreferenceyear | integer |\n citedtitle | character varying(20) | not null\n citedvolume | character varying(4) |\n citedpage | character varying(5) |\n referenceindex | integer | not null\nIndexes:\n \"paperreferences_pkey\" PRIMARY KEY, btree (itemid, referenceindex)\nForeign-key constraints:\n \"paperreferences_fromitemid_fkey\" FOREIGN KEY (itemid) REFERENCES\npapers(itemid) ON DELETE CASCADE\n\nI just noticed that PaperReferences uses character varying (15) and\nPapers uses character(15). Stupid mistake of mine. Do you think this\nmight cause the bad query planning? 
I will alter the table to use\ncharacter(15) in both cases and see if that helps.\n\npostgresql.conf:\nmax_connections = 20\t\t\t\nshared_buffers = 256MB\t\t\t\nwork_mem = 10MB\t\t\t\t\nmaintenance_work_mem = 128MB\t\t\nmax_stack_depth = 4MB\t\t\t\nsynchronous_commit = off\t\t\nwal_buffers = 1MB\t\t\t\ncheckpoint_segments = 10\t\t\neffective_cache_size = 768MB\ndefault_statistics_target = 200\t\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\t\t\t\nlc_monetary = 'C'\t\t\t\nlc_numeric = 'C'\t\t\t\nlc_time = 'C'\t\t\t\t\ndefault_text_search_config = 'pg_catalog.simple'\n\nThe query I run:\nSELECT p.ItemID FROM Papers AS p, PaperReferences AS r WHERE p.itemID =\nr.ItemID AND p.issueID = 'A1983PW823'\n\nQuery plan with seqscan enabled:\n\n\"Hash Join (cost=512.71..17709356.53 rows=8283226 width=16)\"\n\" Hash Cond: ((r.itemid)::bpchar = p.itemid)\"\n\" -> Seq Scan on paperreferences r (cost=0.00..15110856.68\nrows=670707968 width=16)\"\n\" -> Hash (cost=500.29..500.29 rows=994 width=16)\"\n\" -> Index Scan using idx_papers_issueid on papers p\n(cost=0.00..500.29 rows=994 width=16)\"\n\" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nQuery plan with seqscan disbaled\n\n\"Hash Join (cost=10000000280.88..10017668625.22 rows=4233278 width=16)\"\n\" Hash Cond: ((r.itemid)::bpchar = p.itemid)\"\n\" -> Seq Scan on paperreferences r\n(cost=10000000000.00..10015110856.68 rows=670707968 width=16)\"\n\" -> Hash (cost=274.53..274.53 rows=508 width=16)\"\n\" -> Index Scan using idx_papers_issueid on papers p\n(cost=0.00..274.53 rows=508 width=16)\"\n\" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nDo you need an EXPLAIN ANALYZE output? Since it takes so long I can't\neasily post one right now. But maybe I can get one over night.\n\nMy Hardware is an iMac running OS X 10.6.4 with 1.5 GB RAM and a 2.1 GHz\n(or so) core 2 Duo processor.\n\nJann\n\nAm 23.08.10 14:08, schrieb Kevin Grittner:\n> Jann Röder wrote:\n> Am 23.08.10 12:18, schrieb Scott Marlowe:\n> \n>>> What happens if you try\n>>>\n>>> set enable_seqscan=off;\n>>> (your query here)\n>>>\n>> Tried that already. The query plan is exactly the same.\n> \n> Exactly? Not even the cost shown for the seq scan changed?\n> \n> You are almost certainly omitting some crucial piece of information\n> in your report. Please look over this page and post a more complete\n> report. In particular, please show the results of \\d for both tables\n> (or of pg_dump -s -t 'tablename'), your complete postgresql.conf file\n> stripped of comments, and a description of your hardware and OS.\n> \n> -Kevin\n> \n\n\n", "msg_date": "Mon, 23 Aug 2010 15:19:20 +0200", "msg_from": "=?UTF-8?B?SmFubiBSw7ZkZXI=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "joining on varchars is always going to be very expensive. Longer the\nvalue is, more expensive it will be. 
Consider going for surrogate\nkeys.\n", "msg_date": "Mon, 23 Aug 2010 14:28:51 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Jann Rᅵder<[email protected]> wrote:\n \n> Table \"public.papers\"\n> Column | Type | Modifiers\n> ------------------+-------------------------+-----------\n> itemid | character(15) | not null\n \n> wos-db=> \\d PaperReferences\n> Table \"public.paperreferences\"\n> Column | Type | Modifiers\n> --------------------+-----------------------+-----------\n> itemid | character varying(15) | not null\n \n> I just noticed that PaperReferences uses character varying (15)\n> and Papers uses character(15). Stupid mistake of mine. Do you\n> think this might cause the bad query planning?\n \nAbsolutely. These are *not* the same type and don't compare all\nthat well.\n \n> I will alter the table to use character(15) in both cases and see\n> if that helps.\n \nI suspect that making them the same will cure the problem, but I\nwould recommend you make any character(n) columns character\nvarying(n) instead of the other way around. The the character(n)\ndata type has many surprising behaviors and tends to perform worse. \nAvoid using it if possible.\n \n> postgresql.conf:\n> max_connections = 20\t\t\t\n> shared_buffers = 256MB\t\t\t\n> work_mem = 10MB\t\t\t\t\n> maintenance_work_mem = 128MB\t\t\n> max_stack_depth = 4MB\t\t\t\n> synchronous_commit = off\t\t\n> wal_buffers = 1MB\t\t\t\n> checkpoint_segments = 10\t\t\n> effective_cache_size = 768MB\n> default_statistics_target = 200\t\n> datestyle = 'iso, mdy'\n> lc_messages = 'C'\t\t\t\n> lc_monetary = 'C'\t\t\t\n> lc_numeric = 'C'\t\t\t\n> lc_time = 'C'\t\t\t\t\n> default_text_search_config = 'pg_catalog.simple'\n \n> Do you need an EXPLAIN ANALYZE output? Since it takes so long I\n> can't easily post one right now. But maybe I can get one over\n> night.\n \nNot necessary; you've already identified the cause and the fix.\n \n> My Hardware is an iMac running OS X 10.6.4 with 1.5 GB RAM and a\n> 2.1 GHz (or so) core 2 Duo processor.\n \nOK. If you still don't get a good plan, you might want to try\nedging up effective_cache_size, if the sum of your shared_buffers\nand OS cache is larger than 768MB (which I would expect it might\nbe). If the active part of your database (the part which is\nfrequently referenced) fits within cache space, or even a\nsignificant portion of it fits, you might need to adjust\nrandom_page_cost and perhaps seq_page_cost to reflect the lower\naverage cost of fetching from cache rather than disk -- but you want\nto fix your big problem (the type mismatch) first, and then see if\nyou need further adjustments.\n \n-Kevin\n", "msg_date": "Mon, 23 Aug 2010 08:33:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Grzegorz Jaᅵkiewicz<[email protected]> wrote:\n \n> joining on varchars is always going to be very expensive. Longer\n> the value is, more expensive it will be. Consider going for\n> surrogate keys.\n \nSurrogate keys come with their own set of costs and introduce quite\na few problems of their own. 
I don't want to start a flame war or\ngo into an overly long diatribe on the evils of surrogate keys on\nthis thread; suffice it to say that it's not the first thing to try\nhere.\n \nAs an example of the performance we get using natural keys, with\ncompound keys on almost every table, check out this 1.3TB database,\nbeing updated live by 3000 users as you view it:\n \nhttp://wcca.wicourts.gov/\n \nSome tables have hundreds of millions of rows. No partitioning.\n \n-Kevin\n", "msg_date": "Mon, 23 Aug 2010 08:47:25 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "On Mon, Aug 23, 2010 at 2:47 PM, Kevin Grittner\n<[email protected]> wrote:\n> Grzegorz Jaœkiewicz<[email protected]> wrote:\n>\n>> joining on varchars is always going to be very expensive. Longer\n>> the value is, more expensive it will be. Consider going for\n>> surrogate keys.\n>\n> Surrogate keys come with their own set of costs and introduce quite\n> a few problems of their own.  I don't want to start a flame war or\n> go into an overly long diatribe on the evils of surrogate keys on\n> this thread; suffice it to say that it's not the first thing to try\n> here.\n>\n> As an example of the performance we get using natural keys, with\n> compound keys on almost every table, check out this 1.3TB database,\n> being updated live by 3000 users as you view it:\n>\n> http://wcca.wicourts.gov/\n>\n> Some tables have hundreds of millions of rows.  No partitioning.\n>\n\nTrue, but as far as joining is concerned, joining on single column\nfixed length fields is always going to be a win. Hence why surrogate\nkeys make sens in this particular example, or the guy here should at\nleast test it to see, rather than believe in one or the other.\n\n\n-- \nGJ\n", "msg_date": "Mon, 23 Aug 2010 15:11:55 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Grzegorz Jaᅵkiewicz<[email protected]> wrote:\n \n> True, but as far as joining is concerned, joining on single column\n> fixed length fields is always going to be a win. Hence why\n> surrogate keys make sens in this particular example, or the guy\n> here should at least test it to see, rather than believe in one or\n> the other.\n \nHow about we start by just having him use the same data type in both\ntables?\n \nIf you insist on getting into a discussion of the merits of\nsurrogate keys, you need to look at not just this one query and its\nresponse time, where surrogate keys might give a percentage point or\ntwo increase in performance, but at the integrity challenges they\nintroduce, and at what happens when you've got dozens of other\ntables which would be containing the natural data, but which now\nneed to navigate through particular linkage paths to get to it to\ngenerate summary reports and such. It's easy to construct a narrow\ncase where a surrogate key is a short-term marginal win; it's just\nabout as easy to show data corruption vulnerabilities and huge\nperformance hits on complex queries when surrogate keys are used.\nThey have a place, but it's a pretty narrow set of use-cases in my\nbook. 
For every place they're not used where they should be, there\nare at least 100 places they are used where they shouldn't be.\n \n-Kevin\n", "msg_date": "Mon, 23 Aug 2010 09:21:09 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "I am not a fan of 'do this - this is best' response to queries like that.\nRather: this is what you should try, and choose whichever one suits you better.\nSo, rather than 'natural keys ftw', I am giving him another option to\nchoose from.\n\nYou see, in my world, I was able to improve some large dbs performance\n10+ times fold, by going for surrogate keys. But in them cases, joins\nwere performed on 2+ varchar PK fields, and the whole thing was\ncrawwwling.\n\nSo, don't narrow down to one solution because it worked for you. Keep\nan open book.\n", "msg_date": "Mon, 23 Aug 2010 15:40:25 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Oh, and I second using same types in joins especially, very much so :)\n", "msg_date": "Mon, 23 Aug 2010 15:41:01 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Grzegorz Jaᅵkiewicz<[email protected]> wrote:\n \n> So, don't narrow down to one solution because it worked for you.\n> Keep an open book.\n \nWhat I was trying to do was advise on what would most directly fix\nthe problem. Adding surrogate keys goes way beyond adding the\ncolumns and using them as keys, as I'm sure you're aware if you've\ndone this on a large scale. I wouldn't tell someone not to ever use\nthem; I would advise not to try them when there is a natural key\nunless there are problems which are not solved without them, as\nappears to have been the case with your database.\n \nI may be a little bit over-sensitive on the topic, because I've seen\nso many people who consider it \"wrong\" to use natural keys on any\ntable *ever*. About one out of every four or five programmers who\ngets hired here feels compelled to argue that we should add\nsurrogate keys to all our tables for no reason beyond \"it's the\nthing to do\". 
I've been at this for 38 years, most of that as a\nconsultant to a wide variety of businesses, government agencies, and\nNPOs; and in my experience it usually is *not* the right thing to\ndo.\n \nDon't worry -- when I see evidence that surrogate keys will solve a\nproblem which has not yielded to more conservative solutions, I'll\nsuggest using them.\n \n-Kevin\n", "msg_date": "Mon, 23 Aug 2010 10:03:11 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "On Mon, Aug 23, 2010 at 7:19 AM, Jann Röder <[email protected]> wrote:\n> Thanks for your help,\n> here is the information you requested:\n>\n> Table information: A = Papers, B = PaperReferences\n>\n> wos-db=> \\d Papers\n>                 Table \"public.papers\"\n>      Column      |          Type           | Modifiers\n> ------------------+-------------------------+-----------\n>  itemid           | character(15)           | not null\n\n> wos-db=> \\d PaperReferences\n>             Table \"public.paperreferences\"\n>       Column       |         Type          | Modifiers\n> --------------------+-----------------------+-----------\n>  itemid             | character varying(15) | not null\n>\n> I just noticed that PaperReferences uses character varying (15) and\n> Papers uses character(15). Stupid mistake of mine. Do you think this\n> might cause the bad query planning? I will alter the table to use\n> character(15) in both cases and see if that helps.\n\nAlmost certainly it's not helping. If the planner doesn't choose an\nindexed lookup when you turn seq scans off, then either an index plan\nis WAY expensive (the planner is tricked to turning off seq scan by\nsetting the value of them to something very high) or you don't have a\nuseful index.\n\nWhen I asked if they were the same and if you'd tried with seqscan off\nthat's what I was getting at, that the types might not match.\n\nNow, it may or may not be much faster with an index scan, depending on\nyour data distribution and the number of rows to be returned, but at\nleast if they're the same type the planner has a choice. If they're\nnot, it has no choice, it has to go with the seq scan.\n\nLet us know how it runs when you've got the types matched up. BTW,\nI'd generally go with text over char or varchar, but that's just me.\n", "msg_date": "Mon, 23 Aug 2010 10:20:34 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "So that took a while... 
I'm currently running ANALYZE on the\nPaperReferences table again (the one where I changed the data type).\n\nThe plan however is still the same:\n\"Hash Join (cost=280.88..24330800.08 rows=670602240 width=16)\"\n\" Hash Cond: (r.itemid = p.itemid)\"\n\" -> Seq Scan on paperreferences r (cost=0.00..15109738.40\nrows=670602240 width=64)\"\n\" -> Hash (cost=274.53..274.53 rows=508 width=16)\"\n\" -> Index Scan using idx_papers_issueid on papers p\n(cost=0.00..274.53 rows=508 width=16)\"\n\" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nBut I can now force it to use an index scan instead of a seqScan:\n\"Merge Join (cost=0.00..2716711476.57 rows=670602240 width=16)\"\n\" Merge Cond: (p.itemid = r.itemid)\"\n\" -> Index Scan using papers_pkey on papers p (cost=0.00..21335008.47\nrows=508 width=16)\"\n\" Filter: (issueid = 'A1983PW823'::bpchar)\"\n\" -> Index Scan using paperreferences_pkey on paperreferences r\n(cost=0.00..2686993938.83 rows=670602240 width=64)\"\n\nUnfortunately this is not faster than the other one. I did not wait\nuntil it returned because I want this query to take less than 5 seconds\nor so.\n\nHere is my query again:\nSELECT p.ItemID FROM Papers AS p, PaperReferences AS r WHERE p.itemID =\nr.ItemID AND p.issueID = 'A1983PW823';\n\nI can also write it as:\nSELECT ItemID FROM PaperReferences WHERE ItemID IN (SELECT ItemID FROM\nPapers WHERE IssueID = 'A1983PW823')\n\nWhich is more what I would do if I was the database. Unfortunately this\nis not fast either:\n\n\"Hash Semi Join (cost=280.88..24330800.08 rows=670602240 width=64)\"\n\" Hash Cond: (paperreferences.itemid = papers.itemid)\"\n\" -> Seq Scan on paperreferences (cost=0.00..15109738.40\nrows=670602240 width=64)\"\n\" -> Hash (cost=274.53..274.53 rows=508 width=16)\"\n\" -> Index Scan using idx_papers_issueid on papers\n(cost=0.00..274.53 rows=508 width=16)\"\n\" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\nThe sub-query SELECT ItemID FROM Papers WHERE IssueID = 'A1983PW823' is\nreally fast, though and returns 16 rows. If I unroll the query by hand\nlike this:\nSELECT ItemID FROM PaperReferences WHERE\n(ItemID = 'A1983PW82300001' OR\nItemID = 'A1983PW82300002' OR\nItemID = 'A1983PW82300003' OR\nItemID = 'A1983PW82300004' OR\nItemID = 'A1983PW82300005' OR\nItemID = 'A1983PW82300006' OR\n...)\n\n(All the ORed stuff is the result of the sub-query) I get my result\nreally fast. So what I need now is a way to tell postgres to do it that\nway automatically. If everything else fails I will have to put that\nlogic into my application in java code, which I don't want to do because\nthen I will also have to remove my constraints so I can delete stuff at\na reasonable speed.\n\nThanks,\nJann\n\n\nAm 23.08.10 15:33, schrieb Kevin Grittner:\n> Jann Röder<[email protected]> wrote:\n> \n>> Table \"public.papers\"\n>> Column | Type | Modifiers\n>> ------------------+-------------------------+-----------\n>> itemid | character(15) | not null\n> \n>> wos-db=> \\d PaperReferences\n>> Table \"public.paperreferences\"\n>> Column | Type | Modifiers\n>> --------------------+-----------------------+-----------\n>> itemid | character varying(15) | not null\n> \n>> I just noticed that PaperReferences uses character varying (15)\n>> and Papers uses character(15). Stupid mistake of mine. Do you\n>> think this might cause the bad query planning?\n> \n> Absolutely. 
These are *not* the same type and don't compare all\n> that well.\n> \n>> I will alter the table to use character(15) in both cases and see\n>> if that helps.\n> \n> I suspect that making them the same will cure the problem, but I\n> would recommend you make any character(n) columns character\n> varying(n) instead of the other way around. The the character(n)\n> data type has many surprising behaviors and tends to perform worse. \n> Avoid using it if possible.\n> \n>> postgresql.conf:\n>> max_connections = 20\t\t\t\n>> shared_buffers = 256MB\t\t\t\n>> work_mem = 10MB\t\t\t\t\n>> maintenance_work_mem = 128MB\t\t\n>> max_stack_depth = 4MB\t\t\t\n>> synchronous_commit = off\t\t\n>> wal_buffers = 1MB\t\t\t\n>> checkpoint_segments = 10\t\t\n>> effective_cache_size = 768MB\n>> default_statistics_target = 200\t\n>> datestyle = 'iso, mdy'\n>> lc_messages = 'C'\t\t\t\n>> lc_monetary = 'C'\t\t\t\n>> lc_numeric = 'C'\t\t\t\n>> lc_time = 'C'\t\t\t\t\n>> default_text_search_config = 'pg_catalog.simple'\n> \n>> Do you need an EXPLAIN ANALYZE output? Since it takes so long I\n>> can't easily post one right now. But maybe I can get one over\n>> night.\n> \n> Not necessary; you've already identified the cause and the fix.\n> \n>> My Hardware is an iMac running OS X 10.6.4 with 1.5 GB RAM and a\n>> 2.1 GHz (or so) core 2 Duo processor.\n> \n> OK. If you still don't get a good plan, you might want to try\n> edging up effective_cache_size, if the sum of your shared_buffers\n> and OS cache is larger than 768MB (which I would expect it might\n> be). If the active part of your database (the part which is\n> frequently referenced) fits within cache space, or even a\n> significant portion of it fits, you might need to adjust\n> random_page_cost and perhaps seq_page_cost to reflect the lower\n> average cost of fetching from cache rather than disk -- but you want\n> to fix your big problem (the type mismatch) first, and then see if\n> you need further adjustments.\n> \n> -Kevin\n> \n\n\n", "msg_date": "Tue, 24 Aug 2010 15:03:16 +0200", "msg_from": "=?UTF-8?B?SmFubiBSw7ZkZXI=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" }, { "msg_contents": "Thanks everyone,\nthe problem just solved itself. After the ANALYZE had finished, postgres\nstarted doing what I wanted it to do all along:\nEXPLAIN SELECT p.ItemID FROM Papers AS p, PaperReferences AS r WHERE\np.itemID = r.ItemID AND p.issueID = 'A1983PW823';\n\n\"Nested Loop (cost=0.00..4515980.97 rows=2071811 width=16)\"\n\" -> Index Scan using idx_papers_issueid on papers p\n(cost=0.00..274.53 rows=508 width=16)\"\n\" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n\" -> Index Scan using paperreferences_pkey on paperreferences r\n(cost=0.00..8838.21 rows=4078 width=16)\"\n\" Index Cond: (r.itemid = p.itemid)\"\n\nSo thanks again. I'm starting to grasp the postgres quirks :)\n\nJann\n\nAm 24.08.10 15:03, schrieb Jann Röder:\n> So that took a while... 
I'm currently running ANALYZE on the\n> PaperReferences table again (the one where I changed the data type).\n> \n> The plan however is still the same:\n> \"Hash Join (cost=280.88..24330800.08 rows=670602240 width=16)\"\n> \" Hash Cond: (r.itemid = p.itemid)\"\n> \" -> Seq Scan on paperreferences r (cost=0.00..15109738.40\n> rows=670602240 width=64)\"\n> \" -> Hash (cost=274.53..274.53 rows=508 width=16)\"\n> \" -> Index Scan using idx_papers_issueid on papers p\n> (cost=0.00..274.53 rows=508 width=16)\"\n> \" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n> \n> But I can now force it to use an index scan instead of a seqScan:\n> \"Merge Join (cost=0.00..2716711476.57 rows=670602240 width=16)\"\n> \" Merge Cond: (p.itemid = r.itemid)\"\n> \" -> Index Scan using papers_pkey on papers p (cost=0.00..21335008.47\n> rows=508 width=16)\"\n> \" Filter: (issueid = 'A1983PW823'::bpchar)\"\n> \" -> Index Scan using paperreferences_pkey on paperreferences r\n> (cost=0.00..2686993938.83 rows=670602240 width=64)\"\n> \n> Unfortunately this is not faster than the other one. I did not wait\n> until it returned because I want this query to take less than 5 seconds\n> or so.\n> \n> Here is my query again:\n> SELECT p.ItemID FROM Papers AS p, PaperReferences AS r WHERE p.itemID =\n> r.ItemID AND p.issueID = 'A1983PW823';\n> \n> I can also write it as:\n> SELECT ItemID FROM PaperReferences WHERE ItemID IN (SELECT ItemID FROM\n> Papers WHERE IssueID = 'A1983PW823')\n> \n> Which is more what I would do if I was the database. Unfortunately this\n> is not fast either:\n> \n> \"Hash Semi Join (cost=280.88..24330800.08 rows=670602240 width=64)\"\n> \" Hash Cond: (paperreferences.itemid = papers.itemid)\"\n> \" -> Seq Scan on paperreferences (cost=0.00..15109738.40\n> rows=670602240 width=64)\"\n> \" -> Hash (cost=274.53..274.53 rows=508 width=16)\"\n> \" -> Index Scan using idx_papers_issueid on papers\n> (cost=0.00..274.53 rows=508 width=16)\"\n> \" Index Cond: (issueid = 'A1983PW823'::bpchar)\"\n> \n> The sub-query SELECT ItemID FROM Papers WHERE IssueID = 'A1983PW823' is\n> really fast, though and returns 16 rows. If I unroll the query by hand\n> like this:\n> SELECT ItemID FROM PaperReferences WHERE\n> (ItemID = 'A1983PW82300001' OR\n> ItemID = 'A1983PW82300002' OR\n> ItemID = 'A1983PW82300003' OR\n> ItemID = 'A1983PW82300004' OR\n> ItemID = 'A1983PW82300005' OR\n> ItemID = 'A1983PW82300006' OR\n> ...)\n> \n> (All the ORed stuff is the result of the sub-query) I get my result\n> really fast. So what I need now is a way to tell postgres to do it that\n> way automatically. If everything else fails I will have to put that\n> logic into my application in java code, which I don't want to do because\n> then I will also have to remove my constraints so I can delete stuff at\n> a reasonable speed.\n> \n> Thanks,\n> Jann\n> \n> \n> Am 23.08.10 15:33, schrieb Kevin Grittner:\n>> Jann Röder<[email protected]> wrote:\n>> \n>>> Table \"public.papers\"\n>>> Column | Type | Modifiers\n>>> ------------------+-------------------------+-----------\n>>> itemid | character(15) | not null\n>> \n>>> wos-db=> \\d PaperReferences\n>>> Table \"public.paperreferences\"\n>>> Column | Type | Modifiers\n>>> --------------------+-----------------------+-----------\n>>> itemid | character varying(15) | not null\n>> \n>>> I just noticed that PaperReferences uses character varying (15)\n>>> and Papers uses character(15). Stupid mistake of mine. Do you\n>>> think this might cause the bad query planning?\n>> \n>> Absolutely. 
These are *not* the same type and don't compare all\n>> that well.\n>> \n>>> I will alter the table to use character(15) in both cases and see\n>>> if that helps.\n>> \n>> I suspect that making them the same will cure the problem, but I\n>> would recommend you make any character(n) columns character\n>> varying(n) instead of the other way around. The the character(n)\n>> data type has many surprising behaviors and tends to perform worse. \n>> Avoid using it if possible.\n>> \n>>> postgresql.conf:\n>>> max_connections = 20\t\t\t\n>>> shared_buffers = 256MB\t\t\t\n>>> work_mem = 10MB\t\t\t\t\n>>> maintenance_work_mem = 128MB\t\t\n>>> max_stack_depth = 4MB\t\t\t\n>>> synchronous_commit = off\t\t\n>>> wal_buffers = 1MB\t\t\t\n>>> checkpoint_segments = 10\t\t\n>>> effective_cache_size = 768MB\n>>> default_statistics_target = 200\t\n>>> datestyle = 'iso, mdy'\n>>> lc_messages = 'C'\t\t\t\n>>> lc_monetary = 'C'\t\t\t\n>>> lc_numeric = 'C'\t\t\t\n>>> lc_time = 'C'\t\t\t\t\n>>> default_text_search_config = 'pg_catalog.simple'\n>> \n>>> Do you need an EXPLAIN ANALYZE output? Since it takes so long I\n>>> can't easily post one right now. But maybe I can get one over\n>>> night.\n>> \n>> Not necessary; you've already identified the cause and the fix.\n>> \n>>> My Hardware is an iMac running OS X 10.6.4 with 1.5 GB RAM and a\n>>> 2.1 GHz (or so) core 2 Duo processor.\n>> \n>> OK. If you still don't get a good plan, you might want to try\n>> edging up effective_cache_size, if the sum of your shared_buffers\n>> and OS cache is larger than 768MB (which I would expect it might\n>> be). If the active part of your database (the part which is\n>> frequently referenced) fits within cache space, or even a\n>> significant portion of it fits, you might need to adjust\n>> random_page_cost and perhaps seq_page_cost to reflect the lower\n>> average cost of fetching from cache rather than disk -- but you want\n>> to fix your big problem (the type mismatch) first, and then see if\n>> you need further adjustments.\n>> \n>> -Kevin\n\n\n", "msg_date": "Tue, 24 Aug 2010 15:24:30 +0200", "msg_from": "=?UTF-8?B?SmFubiBSw7ZkZXI=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient query plan" } ]
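To make the fix in the thread above easy to reproduce: the join only became index-friendly once both ItemID columns had the same type. A minimal sketch of the change Jann applied (table and column names as in the thread; Kevin's advice would be to standardise on character varying on both sides instead, and note that rewriting a table this size and re-checking its foreign key takes a long time):

ALTER TABLE PaperReferences
    ALTER COLUMN itemid TYPE character(15);
ANALYZE PaperReferences;

-- with matching types the planner is free to choose the nested-loop index scan
EXPLAIN
SELECT p.ItemID
FROM   Papers p
JOIN   PaperReferences r ON r.ItemID = p.ItemID
WHERE  p.issueID = 'A1983PW823';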
[ { "msg_contents": "I forgot to paste link:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n\n", "msg_date": "Mon, 23 Aug 2010 07:10:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" } ]
[ { "msg_contents": ">I may be a little bit over-sensitive on the topic, because I've seen\n>so many people who consider it \"wrong\" to use natural keys on any\n>table *ever*. About one out of every four or five programmers who\n>gets hired here feels compelled to argue that we should add\n>surrogate keys to all our tables for no reason beyond \"it's the\n>thing to do\". I've been at this for 38 years, most of that as a\n>consultant to a wide variety of businesses, government agencies, and\n>NPOs; and in my experience it usually is *not* the right thing to\n>do.\n> \n>Don't worry -- when I see evidence that surrogate keys will solve a\n>problem which has not yielded to more conservative solutions, I'll\n>suggest using them.\n> \n>-Kevin\n>\n\nAh feeel your pain, brother. Been there, done that. In almost every case, those who make the \"thou shalt always, and only, use a surrogate key\" cite folks like Ambler as authoritative rather than folks like Date. The Kiddie Koder Krew are woefully uninformed about the history and development of RDBMS, and why some approaches are more intelligent than others. Ambler, et al coders all, have poisoned more minds than can be counted.\n\nDijkstra called out BASIC and COBOL for polluting the minds of young coders. Allowing Ambler and the like to be \"thought leaders\" in RDBMS is just as polluting.\n\nThere, I said.\n\n-- Robert\n", "msg_date": "Mon, 23 Aug 2010 11:22:46 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient query plan" } ]
[ { "msg_contents": "Hello There,\n\nI have a table x and a history table x_hist, whats the best way to update\nthe history table.\n\nshould i need to use triggers or embed a code in my script to update the\nhistory table?\n\nwhat is the performance impact of a trigger versus embedding the code in the\nscript?\n\nthanks for your time.\n\n- Deepak\n\nHello There,I have a table x and a history table x_hist, whats the best way to update the history table.should i need to use triggers or embed a code in my script to update the history table?what is the performance impact of a trigger versus embedding the code in the script?\nthanks for your time.- Deepak", "msg_date": "Mon, 23 Aug 2010 11:42:21 -0700", "msg_from": "DM <[email protected]>", "msg_from_op": true, "msg_subject": "Triggers or code?" }, { "msg_contents": "Trigger is the way to go.\n\nAndré.\n\nDate: Mon, 23 Aug 2010 11:42:21 -0700\nSubject: [PERFORM] Triggers or code?\nFrom: [email protected]\nTo: [email protected]\n\nHello There,\n\nI have a table x and a history table x_hist, whats the best way to update the history table.\n\nshould i need to use triggers or embed a code in my script to update the history table?\n\nwhat is the performance impact of a trigger versus embedding the code in the script?\n\n\nthanks for your time.\n\n- Deepak\n \t\t \t \t\t \n\n\n\n\n\nTrigger is the way to go.André.Date: Mon, 23 Aug 2010 11:42:21 -0700Subject: [PERFORM] Triggers or code?From: [email protected]: [email protected] There,I have a table x and a history table x_hist, whats the best way to update the history table.should i need to use triggers or embed a code in my script to update the history table?what is the performance impact of a trigger versus embedding the code in the script?\nthanks for your time.- Deepak", "msg_date": "Mon, 23 Aug 2010 22:13:51 +0300", "msg_from": "=?iso-8859-1?B?QW5kcukgRmVybmFuZGVz?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Triggers or code?" }, { "msg_contents": "That depends on your application's requirements.  If a transaction on table X fails, do you still want the history (noting the failure)?  If so, go with embedding the code in your script.  If you only want history for successful transactions, a trigger will take care of that for you automatically.\nBob Lunney \n\n--- On Mon, 8/23/10, DM <[email protected]> wrote:\n\nFrom: DM <[email protected]>\nSubject: [PERFORM] Triggers or code?\nTo: [email protected]\nDate: Monday, August 23, 2010, 2:42 PM\n\nHello There,\n\nI have a table x and a history table x_hist, whats the best way to update the history table.\n\nshould i need to use triggers or embed a code in my script to update the history table?\n\nwhat is the performance impact of a trigger versus embedding the code in the script?\n\n\nthanks for your time.\n\n- Deepak\n\n\n\n\n \nThat depends on your application's requirements.  If a transaction on table X fails, do you still want the history (noting the failure)?  If so, go with embedding the code in your script.  
If you only want history for successful transactions, a trigger will take care of that for you automatically.Bob Lunney --- On Mon, 8/23/10, DM <[email protected]> wrote:From: DM <[email protected]>Subject: [PERFORM] Triggers or code?To: [email protected]: Monday, August 23, 2010, 2:42 PMHello There,I have a table x and a history table x_hist, whats the best way to update the history table.should i need to use triggers or\n embed a code in my script to update the history table?what is the performance impact of a trigger versus embedding the code in the script?\nthanks for your time.- Deepak", "msg_date": "Wed, 25 Aug 2010 07:30:31 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Triggers or code?" } ]
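As a rough illustration of the trigger route -- only a sketch, assuming x_hist has exactly the same column layout as x and that the pre-change row should be recorded on UPDATE and DELETE (the function and trigger names are arbitrary):

CREATE OR REPLACE FUNCTION x_to_hist() RETURNS trigger AS $$
BEGIN
    -- copy the old version of the row into the history table
    INSERT INTO x_hist VALUES (OLD.*);
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER x_hist_trg
    AFTER UPDATE OR DELETE ON x
    FOR EACH ROW EXECUTE PROCEDURE x_to_hist();

The per-row overhead of a simple trigger like this is normally small compared with the extra statement the application would otherwise issue, and it fires for every code path that touches x -- which matches Bob's point: only changes from transactions that actually commit end up in the history.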
[ { "msg_contents": "Howdy all,\n\nWe're doing some performance testing, and when we scaled it our app up to about 250 concurrent users\nwe started seeing a bunch of processes sititng in \"PARSE WAITING\" state.\n\nCan anyone give me insite on what this means? what's the parse waiting for?\n\nThanks\n\nDave\n", "msg_date": "Mon, 23 Aug 2010 15:15:56 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "PARSE WAITING" }, { "msg_contents": "probably waiting on the xlog directory that's filled up... <sigh> \n<shotgun>-><foot>->blamo\n\nmove along, nothing to see here =)\n\n\nDave\n\n\nOn Mon, Aug 23, 2010 at 03:15:56PM -0700, David Kerr wrote:\n- Howdy all,\n- \n- We're doing some performance testing, and when we scaled it our app up to about 250 concurrent users\n- we started seeing a bunch of processes sititng in \"PARSE WAITING\" state.\n- \n- Can anyone give me insite on what this means? what's the parse waiting for?\n- \n- Thanks\n- \n- Dave\n- \n- -- \n- Sent via pgsql-performance mailing list ([email protected])\n- To make changes to your subscription:\n- http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 23 Aug 2010 15:19:31 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PARSE WAITING" }, { "msg_contents": "Excerpts from David Kerr's message of lun ago 23 18:15:56 -0400 2010:\n> Howdy all,\n> \n> We're doing some performance testing, and when we scaled it our app up to about 250 concurrent users\n> we started seeing a bunch of processes sititng in \"PARSE WAITING\" state.\n> \n> Can anyone give me insite on what this means? what's the parse waiting for?\n\nIt means the parse phase is waiting for a lock. You can see exactly\nwhat it's waiting for by looking at pg_locks \"WHERE NOT GRANTED\".\n\nHave you got lots of partitions, or something?\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 23 Aug 2010 18:23:25 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PARSE WAITING" }, { "msg_contents": "On Mon, Aug 23, 2010 at 06:23:25PM -0400, Alvaro Herrera wrote:\n- Excerpts from David Kerr's message of lun ago 23 18:15:56 -0400 2010:\n- > Howdy all,\n- > \n- > We're doing some performance testing, and when we scaled it our app up to about 250 concurrent users\n- > we started seeing a bunch of processes sititng in \"PARSE WAITING\" state.\n- > \n- > Can anyone give me insite on what this means? what's the parse waiting for?\n- \n- It means the parse phase is waiting for a lock. You can see exactly\n- what it's waiting for by looking at pg_locks \"WHERE NOT GRANTED\".\n- \n- Have you got lots of partitions, or something?\n\nno, the xlog directory filled up due to me being an idiot.\n\nonce concern i have though, is that after i freed up space in the pg_xlog directory\nthe processess didn't start moving again.. is that normal? this is 8.3.9)\n\n\nalso, as a result i had to crash the DB.. 
it's now going through and doing this\nat startup:\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.15779\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.25919\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.13352\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.16276\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.27857\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.34652\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.6804\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.4270\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.26926\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.29281\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.16689\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.36355\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.5502\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.5874\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.19594\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.11514\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.11865\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.20944\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.35733\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.8401\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.3767\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.2101\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.31776\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.15686\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.10364\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.12593\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.6041\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.3030\") = 0\nunlink(\"base/pgsql_tmp/pgsql_tmp28335.14737\") = 0\n\nwhich isn't the fastest operation.. just for my info, can anyone tell me what\npgsql_tmp is, and why the engine is wacking each file individually?\n\nThanks!\n\nDave\n", "msg_date": "Mon, 23 Aug 2010 15:47:02 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PARSE WAITING" }, { "msg_contents": "Excerpts from David Kerr's message of lun ago 23 18:47:02 -0400 2010:\n\n> unlink(\"base/pgsql_tmp/pgsql_tmp28335.12593\") = 0\n> unlink(\"base/pgsql_tmp/pgsql_tmp28335.6041\") = 0\n> unlink(\"base/pgsql_tmp/pgsql_tmp28335.3030\") = 0\n> unlink(\"base/pgsql_tmp/pgsql_tmp28335.14737\") = 0\n> \n> which isn't the fastest operation.. just for my info, can anyone tell me what\n> pgsql_tmp is, and why the engine is wacking each file individually?\n\nThese are temp files, which you can remove without concern if the server\nis down.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 23 Aug 2010 21:17:35 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PARSE WAITING" } ]
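For anyone who lands in the same "PARSE waiting" state, Alvaro's pg_locks suggestion spelled out in full looks roughly like this (column names as of 8.3/8.4, where pg_stat_activity still calls the backend PID procpid):

SELECT a.procpid, a.datname, a.usename,
       l.locktype, l.mode, c.relname,
       a.current_query
FROM   pg_locks l
JOIN   pg_stat_activity a ON a.procpid = l.pid
LEFT   JOIN pg_class c ON c.oid = l.relation
WHERE  NOT l.granted;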
[ { "msg_contents": "Hello!\n\nWe've just installed a couple new servers that will be our new database \nservers. The basic hardware stats of each are:\n\n2 x Xeon E5620 (2.4GHz quad-core)\n32GB DDR3 1333 RAM\n6 x 600GB 15krpm SAS drives - RAID 10\nPerc 6/i RAID controller with battery backup\n\nI've lurked on this list for a short while and gotten a small (tiny, I'm \nsure) set of ideas about parameters that have commonly been mentioned \nwhen performance tuning questions come up, but haven't had the freedom \nto do much tuning to our existing setups. Now that we've got new \nservers (and thus my tuning won't potentially impact production \nperformance until we roll these out), I've begun researching how I \nshould be tuning these to keep the speeds up in an environment where \nwe're regularly dealing with queries handling multiple tables with 20 \nmillion or more rows, and tables that grow by half a million records per \nday.\n\nI've found a great many resources suggesting settings to tune and even \nsome that suggest ways to determine the values to set for them, but I'm \nhaving trouble tracking down anything CURRENT in terms of what options \nto set and what values to give them. The most comprehensive lists of \nsuggestions (as provided by a few, though not a great many minutes \nperusing Google results) list options that don't seem to exist anymore \n(eg- sort_mem) in the version of 8.4 that Ubuntu currently has for 10.04 \nserver (8.4.4).\n\nIs there a reference I can look at that will give me a description of \nhow to determine sensible values for settings like shared_buffers, \neffective_cache_size, etc. that I see discussed on this list and \nelsewhere? If there isn't a really good reference, would any of you \nmind suggesting sensible starting points for these options or methods \nfor determining what those would be, as well as other options I might \nbenefit from changing the values of?\n\nI'm limited a bit in my tuning by the disks available - I can keep the \nWAL on a separate partition in the array, but I wasn't able to allocate \nan entire disk or separate array to it due to the amount of the \navaliable space I'll need for the databases these servers will house. \nI've helped it a bit by using ext2 on the partition the WAL is on (the \ndatabases themselves are on ext4), but I'm not sure what else I can do \nin terms of tuning the WAL without re-thinking the storage setup.\n\nThanks in advance for any help you might be able to provide.\n\nBob Branch\n", "msg_date": "Wed, 25 Aug 2010 11:58:16 -0400", "msg_from": "Bob Branch <[email protected]>", "msg_from_op": true, "msg_subject": "New servers, need suggestions for sensible tuning settings" }, { "msg_contents": "Bob Branch <[email protected]> wrote:\n \n> Is there a reference I can look at that will give me a description\n> of how to determine sensible values for settings like\n> shared_buffers, effective_cache_size, etc. that I see discussed on\n> this list and elsewhere?\n \nYou might start with these links:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nhttp://pgfoundry.org/projects/pgtune/\n \n-Kevin\n", "msg_date": "Wed, 25 Aug 2010 11:24:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New servers, need suggestions for sensible\n\t tuning settings" } ]
[ { "msg_contents": "Hi All,\n\nI have a poor performance SQL as following. The table has about 200M\nrecords, each employee have average 100 records. The query lasts about\n3 hours. All I want is to update the flag for highest version of each\nclient's record. Any suggestion is welcome!\n\nThanks,\n\nMike\n\n\n====SQL===========\nupdate empTbl A\nset flag=1\nwhere\nrec_ver =\n( select max(rec_ver)\nfrom empTbl\nwhere empNo = A.empNo)\n\n\n\n===Table empTbl=====\n\nempTbl\n{\nint empNo;\nint flag;\nchar[256] empDesc;\nint rec_ver;\n}\n", "msg_date": "Wed, 25 Aug 2010 18:18:43 -0700 (PDT)", "msg_from": "mike <[email protected]>", "msg_from_op": true, "msg_subject": "SubQuery Performance" }, { "msg_contents": "In response to mike :\n> Hi All,\n> \n> I have a poor performance SQL as following. The table has about 200M\n> records, each employee have average 100 records. The query lasts about\n> 3 hours. All I want is to update the flag for highest version of each\n> client's record. Any suggestion is welcome!\n> \n> Thanks,\n> \n> Mike\n> \n> \n> ====SQL===========\n> update empTbl A\n> set flag=1\n> where\n> rec_ver =\n> ( select max(rec_ver)\n> from empTbl\n> where empNo = A.empNo)\n> \n> \n> \n> ===Table empTbl=====\n> \n> empTbl\n> {\n> int empNo;\n> int flag;\n> char[256] empDesc;\n> int rec_ver;\n> }\n\nTry this:\n\nupdate empTbl A set flag=1 from (select empno, max(rec_ver) as rec_ver from empTbl group by empno) foo where (a.empno,a.rec_ver) = (foo.empno, foo.rec_ver);\n\nYou should create an index on empTbl(empNo,rec_ver).\n\nPlease show us the EXPLAIN ANALYSE <query> for both selects.\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Thu, 26 Aug 2010 07:24:20 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubQuery Performance" } ]
[ { "msg_contents": "Hi,\n\nI have a colleague that is convinced that the website is faster if\nenable_seqscan is turned OFF.\nI'm convinced of the opposite (better to leave it ON), but i would like to\nshow it, prove it to him.\nNow the first query we tried, would do a bitmap heap scan instead of a\nseqscan when the latter were disabled, to exclude about 50% of the records\n(18K of 37K records).\nThe bitmap heap scan is 3% faster, so that didn't really plea my case.\nThe thing is that by the time we tried it, the data had been cached, so\nthere is no penalty for the use of the index (HDD retention on random\naccess). So it's logical that the index lookup is faster, it looks up less\nrecords.\n\nNow i'm looking for a way to turn off the caching, so that we'll have a fair\ntest.\n\nIt makes no sense to me to set shared_buffers really low. Any tips?\n\nCheers,\n\nWBL\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,I have a colleague that is convinced that the website is faster if enable_seqscan is turned OFF.I'm convinced of the opposite (better to leave it ON), but i would like to show it, prove it to him.Now the first query we tried, would do a bitmap heap scan instead of a seqscan when the latter were disabled, to exclude about 50% of the records (18K of 37K records).\n\nThe bitmap heap scan is 3% faster, so that didn't really plea my case.The thing is that by the time we tried it, the data had been cached, so there is no penalty for the use of the index (HDD retention on random access). So it's logical that the index lookup is faster, it looks up less records.\nNow i'm looking for a way to turn off the caching, so that we'll have a fair test.It makes no sense to me to set shared_buffers really low. Any tips?Cheers,WBL-- \n\n\"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw", "msg_date": "Thu, 26 Aug 2010 12:32:57 +0200", "msg_from": "Willy-Bas Loos <[email protected]>", "msg_from_op": true, "msg_subject": "turn off caching for performance test" }, { "msg_contents": "Isn't it more fair to just flush the cache before doing each of the \nqueries? In real-life, you'll also have disk caching... Flushing the \nbuffer pool is easy, just restart PostgreSQL (or perhaps there is a \nadmin command for it too?). Flushing the OS-disk cache is obviously \nOS-dependent, for linux its trivial: http://linux-mm.org/Drop_Caches\n\nBest regards,\n\nArjen\n\nOn 26-8-2010 12:32 Willy-Bas Loos wrote:\n> Hi,\n>\n> I have a colleague that is convinced that the website is faster if\n> enable_seqscan is turned OFF.\n> I'm convinced of the opposite (better to leave it ON), but i would like\n> to show it, prove it to him.\n> Now the first query we tried, would do a bitmap heap scan instead of a\n> seqscan when the latter were disabled, to exclude about 50% of the\n> records (18K of 37K records).\n> The bitmap heap scan is 3% faster, so that didn't really plea my case.\n> The thing is that by the time we tried it, the data had been cached, so\n> there is no penalty for the use of the index (HDD retention on random\n> access). So it's logical that the index lookup is faster, it looks up\n> less records.\n>\n> Now i'm looking for a way to turn off the caching, so that we'll have a\n> fair test.\n>\n> It makes no sense to me to set shared_buffers really low. 
Any tips?\n>\n> Cheers,\n>\n> WBL\n>\n>\n> --\n> \"Patriotism is the conviction that your country is superior to all\n> others because you were born in it.\" -- George Bernard Shaw\n", "msg_date": "Thu, 26 Aug 2010 18:50:11 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "\n> The bitmap heap scan is 3% faster,\n\n3% isn't really significant. Especially if the new setting makes one query \n100 times slower... Like a query which will, by bad luck, get turned into \na nested loop index scan for a lot of rows, on a huge table which isn't in \ncache...\n", "msg_date": "Thu, 26 Aug 2010 19:37:21 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "@Pierre: i know.. but first i'd have to find such a query from real-life.\nAnd also, i'm convinced that this query would be faster with a seqscan if\nthe data wenen't cached.\n\n\n@Arjen: thanks, that helps.\nBut that's only the OS cache. There's also the shared_buffers, which are a\npostgres specific thing.\nI've found DISCARD in the\nmanual<http://www.postgresql.org/docs/8.3/interactive/sql-discard.html>,\nbut that only influences a single session, not the shared buffers.\n\nI reckon restarting the cluster should help, would it wipe out the cache?\n(pg_ctlcluster 8.3 main restart)\nOr is there a more graceful way?\n\nCheers,\n\nWBL\n\n\nOn Thu, Aug 26, 2010 at 7:37 PM, Pierre C <[email protected]> wrote:\n\n>\n> The bitmap heap scan is 3% faster,\n>>\n>\n> 3% isn't really significant. Especially if the new setting makes one query\n> 100 times slower... Like a query which will, by bad luck, get turned into a\n> nested loop index scan for a lot of rows, on a huge table which isn't in\n> cache...\n>\n\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\n@Pierre: i know.. but first i'd have to find such a query from real-life. And also, i'm convinced that this query would be faster with a seqscan if the data wenen't cached.@Arjen: thanks, that helps. \nBut that's only the OS cache. There's also the shared_buffers, which are a postgres specific thing.I've found DISCARD in the manual, but that only influences a single session, not the shared buffers.\nI reckon restarting the cluster should help, would it wipe out the cache? (pg_ctlcluster 8.3 main restart)Or is there a more graceful way?Cheers,WBLOn Thu, Aug 26, 2010 at 7:37 PM, Pierre C <[email protected]> wrote:\n\n\nThe bitmap heap scan is 3% faster,\n\n\n3% isn't really significant. Especially if the new setting makes one query 100 times slower... Like a query which will, by bad luck, get turned into a nested loop index scan for a lot of rows, on a huge table which isn't in cache...\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw", "msg_date": "Fri, 27 Aug 2010 11:51:21 +0200", "msg_from": "Willy-Bas Loos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "Willy-Bas Loos wrote:\n> But that's only the OS cache. 
There's also the shared_buffers, which \n> are a postgres specific thing.\n> I've found DISCARD in the manual \n> <http://www.postgresql.org/docs/8.3/interactive/sql-discard.html>, but \n> that only influences a single session, not the shared buffers.\n>\n> I reckon restarting the cluster should help, would it wipe out the \n> cache? (pg_ctlcluster 8.3 main restart)\n> Or is there a more graceful way?\n\nStop the cluster; flush the OS cache; start the cluster again. Now you \nhave a clean cache to retest again. No easier way that's reliable. If \nyou try to clear out the database by doing things like scanning large \ntables not involved in the query, you'll discover features in PostgreSQL \nwill specifically defeat that from using more than a small portion of \nthe cache. Better to just do a full shutdown.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\nWilly-Bas Loos wrote:\nBut that's only the OS cache. There's also the\nshared_buffers, which are a postgres specific thing.\nI've found DISCARD in the manual,\nbut that only influences a single session, not the shared buffers.\n\nI reckon restarting the cluster should help, would it wipe out the\ncache? (pg_ctlcluster 8.3 main restart)\nOr is there a more graceful way?\n\n\nStop the cluster; flush the OS cache; start the cluster again.  Now you\nhave a clean cache to retest again.  No easier way that's reliable.  If\nyou try to clear out the database by doing things like scanning large\ntables not involved in the query, you'll discover features in\nPostgreSQL will specifically defeat that from using more than a small\nportion of the cache.  Better to just do a full shutdown.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us", "msg_date": "Fri, 27 Aug 2010 13:28:30 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "On Thu, Aug 26, 2010 at 4:32 AM, Willy-Bas Loos <[email protected]> wrote:\n> Hi,\n>\n> I have a colleague that is convinced that the website is faster if\n> enable_seqscan is turned OFF.\n> I'm convinced of the opposite (better to leave it ON), but i would like to\n> show it, prove it to him.\n\nStop, you're both doing it wrong. 
The issue isn't whether or not\nturning off seq scans will make a few things faster here and there,\nit's why is the query planner choosing sequential scans when it should\nbe choosing index scans.\n\nSo, what are your non-default settings in postgresql.conf?\nHave you increased effective_cache_size yet?\nLowered random_page_cost?\nRaised default stats target and re-analyzed?\n\nHave you been looking at the problem queries with explain analyze?\nWhat does it have to say about the planners choices?\n", "msg_date": "Fri, 27 Aug 2010 11:57:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "On Thu, Aug 26, 2010 at 6:32 AM, Willy-Bas Loos <[email protected]> wrote:\n> I have a colleague that is convinced that the website is faster if\n> enable_seqscan is turned OFF.\n> I'm convinced of the opposite (better to leave it ON), but i would like to\n> show it, prove it to him.\n> Now the first query we tried, would do a bitmap heap scan instead of a\n> seqscan when the latter were disabled, to exclude about 50% of the records\n> (18K of 37K records).\n> The bitmap heap scan is 3% faster, so that didn't really plea my case.\n> The thing is that by the time we tried it, the data had been cached, so\n> there is no penalty for the use of the index (HDD retention on random\n> access). So it's logical that the index lookup is faster, it looks up less\n> records.\n>\n> Now i'm looking for a way to turn off the caching, so that we'll have a fair\n> test.\n>\n> It makes no sense to me to set shared_buffers really low. Any tips?\n\nsetting shared_buffers low or high is not going to flush the cache. it\nonly controls whether the o/s cache or the pg buffer cache is used.\n\nDisabling sequential scans is going to un-optimize a large class of\noperations where a sequential scan is really the best choice of\naction. In the old days of postgres, where the planner wasn't as\nsmart as it is today and some of the plan invalidation mechanics\nweren't there, it wasn't that uncommon to disable them. Today, it's\nreally not a good idea unless you have a very specific reason to, and\neven then I'd advise temporarily setting it and then setting it back\nwhen your operation is done.\n\nmerlin\n", "msg_date": "Wed, 15 Sep 2010 17:10:32 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "On Fri, Aug 27, 2010 at 1:57 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Aug 26, 2010 at 4:32 AM, Willy-Bas Loos <[email protected]> wrote:\n>> Hi,\n>>\n>> I have a colleague that is convinced that the website is faster if\n>> enable_seqscan is turned OFF.\n>> I'm convinced of the opposite (better to leave it ON), but i would like to\n>> show it, prove it to him.\n>\n> Stop, you're both doing it wrong.  The issue isn't whether or not\n> turning off seq scans will make a few things faster here and there,\n> it's why is the query planner choosing sequential scans when it should\n> be choosing index scans.\n>\n> So, what are your non-default settings in postgresql.conf?\n> Have you increased effective_cache_size yet?\n> Lowered random_page_cost?\n> Raised default stats target and re-analyzed?\n>\n> Have you been looking at the problem queries with explain analyze?\n> What does it have to say about the planners choices?\n\n[a bit behind on my email]\n\nThis was exactly my thought on first reading this post. 
If the\nindexes are faster and PG thinks they are slower, it's a good bet that\nthere are some parameters that need tuning. Specifically,\neffective_cache_size may be too low, and random_page_cost and\nseq_page_cost are almost certainly too high.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Sat, 25 Sep 2010 23:36:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "Hi,\n\nSorry for the late answer.\nI found the query i was looking for in the log (duration) and could\nprove that the seqscan is faster if the data were not cached.\nThis particular one was 22% faster.\nIt is \"a query which will get turned into a nested loop index scan for\na lot of rows, on a huge table\", but it's only 22% slower without a\nseqscan.\n(there's no advantage with seqscans off, as long as the cache is empty)\n\nI found few queries that did sequential scans in the normal mode on\ntables that matter.\nI found one query that did a seqscan anyway(with enable_seqscan off),\nbecause doing an index scan would be more than 1M points more\nexpensive (to the planner).\n\n$ grep ^[^#] /etc/postgresql/8.3/main/postgresql.conf|grep -e ^[^[:space:]]\ndata_directory =<blah>\t\t# use data in another directory\nhba_file = <blah>\t# host-based authentication file\nident_file = <blah>\t# ident configuration file\nexternal_pid_file = <blah>\t\t# write an extra PID file\nlisten_addresses = '*'\t\t# what IP address(es) to listen on;\nport = 5432\t\t\t\t# (change requires restart)\nmax_connections = 200\t\t\t# (change requires restart)\nunix_socket_directory = '/var/run/postgresql'\t\t# (change requires restart)\ntcp_keepalives_idle = 120\t\t# TCP_KEEPIDLE, in seconds;\ntcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\ntcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\nshared_buffers = 2GB\t\t\t# min 128kB or max_connections*16kB\ntemp_buffers = 24MB\t\t\t# min 800kB\nwork_mem = 100MB\t\t\t\t# min 64kB\nmaintenance_work_mem = 256MB\t\t# min 1MB\nmax_fsm_pages = 600000\t\t\t# min max_fsm_relations*16, 6 bytes each\nsynchronous_commit = off\t\t# immediate fsync at commit\ncheckpoint_segments = 16\t\t# in logfile segments, min 1, 16MB each\neffective_cache_size = 4GB\nlog_min_duration_statement = 2000\t# -1 is disabled, 0 logs all\nstatements --> milliseconds\nlog_line_prefix = '%t '\t\t\t# special values:\nautovacuum = on \t\t\t# Enable autovacuum subprocess? 
'on'\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\t\t\t# locale for system error message\nlc_monetary = 'en_US.UTF-8'\t\t\t# locale for monetary formatting\nlc_numeric = 'en_US.UTF-8'\t\t\t# locale for number formatting\nlc_time = 'en_US.UTF-8'\t\t\t\t# locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\nmax_locks_per_transaction = 128\t\t# min 10\n\nWe have 15K rpm SAS disks in RAID10.\nWe have 16 GB of RAM and 4 modern processor cores (i think xeons,\nmight also be opteron)\nWe run Debian Lenny.\nIt's a dedicated DB server, there is one other cluster on it without\nvery much data and with few connections to it daily.\ndf -h on the data dir gives me 143G\nwe're growing\nthere are many queries that should be optimized\nthe seqscan option is in the connection string, not in the postgresql.conf\n\nCheers,\n\nOn Fri, Aug 27, 2010 at 7:57 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Aug 26, 2010 at 4:32 AM, Willy-Bas Loos <[email protected]> wrote:\n>> Hi,\n>>\n>> I have a colleague that is convinced that the website is faster if\n>> enable_seqscan is turned OFF.\n>> I'm convinced of the opposite (better to leave it ON), but i would like to\n>> show it, prove it to him.\n>\n> Stop, you're both doing it wrong.  The issue isn't whether or not\n> turning off seq scans will make a few things faster here and there,\n> it's why is the query planner choosing sequential scans when it should\n> be choosing index scans.\n>\n> So, what are your non-default settings in postgresql.conf?\n> Have you increased effective_cache_size yet?\n> Lowered random_page_cost?\n> Raised default stats target and re-analyzed?\n>\n> Have you been looking at the problem queries with explain analyze?\n> What does it have to say about the planners choices?\n>\n\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all\nothers because you were born in it.\" -- George Bernard Shaw\n", "msg_date": "Thu, 30 Sep 2010 18:41:55 +0200", "msg_from": "Willy-Bas Loos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: turn off caching for performance test" }, { "msg_contents": "> I found one query that did a seqscan anyway(with enable_seqscan off),\n> because doing an index scan would be more than 1M points more\n> expensive (to the planner).\n\nHmm, i guess that says it all :)\n-- \n\"Patriotism is the conviction that your country is superior to all\nothers because you were born in it.\" -- George Bernard Shaw\n", "msg_date": "Fri, 1 Oct 2010 19:51:26 +0200", "msg_from": "Willy-Bas Loos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: turn off caching for performance test" } ]
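Putting Arjen's and Greg's advice together, a repeatable cold-cache test run on the Debian box from this thread would look roughly like this (drop_caches needs root; the cluster name matches the pg_ctlcluster call already mentioned):

pg_ctlcluster 8.3 main stop        # empties shared_buffers
sync                               # write out dirty pages first
echo 3 > /proc/sys/vm/drop_caches  # drop the Linux page cache
pg_ctlcluster 8.3 main start
# then run the query under test once, against a genuinely cold cache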
[ { "msg_contents": "-------- Original Message --------\nSubject: \tpostgres 8.4.1 number of connections\nDate: \tThu, 26 Aug 2010 14:25:47 -0500\nFrom: \tMaria L. Wilson <[email protected]>\nReply-To: \tWilson, Maria Louise (LARC-E301)[SCIENCE SYSTEMS \nAPPLICATIONS] <[email protected]>\nTo: \[email protected] \n<[email protected]>\n\n\n\nwe have this application (using jboss/java/hibernate) on linux accessing \ndata on 3 postgres database servers using 8.4.1. \n\nOne of our many concerns has been the way we handle connections to the \ndatabase. java/hibernate handle their own pooling so I understand that \nusing anything else is out of the question. Our jboss configuration \ncurrently defaults to 5 connections per database. On our main database \nserver, we handle 7 of the databases that this application uses. On \nthis one server, we usually average around 300 - 500 connections to the \nserver. In our postgres conf file on this particular machine we set the \nmax_connection parameter to 1000. If we set it to much lower we end up \nwith connection errors. Any comments or better way to handle this?\n\nthanks, Maria Wilson\nNASA, Langley Research Center\nHampton, Virginia 23666\n\n\n\n\n\n\n\n\n\n-------- Original Message --------\n\n\n\nSubject: \npostgres 8.4.1 number of connections\n\n\nDate: \nThu, 26 Aug 2010 14:25:47 -0500\n\n\nFrom: \nMaria L. Wilson <[email protected]>\n\n\nReply-To: \nWilson, Maria Louise (LARC-E301)[SCIENCE SYSTEMS\nAPPLICATIONS] <[email protected]>\n\n\nTo: \[email protected]\n<[email protected]>\n\n\n\n\n\nwe have this application (using jboss/java/hibernate) on linux accessing \ndata on 3 postgres database servers using 8.4.1. \n\nOne of our many concerns has been the way we handle connections to the \ndatabase. java/hibernate handle their own pooling so I understand that \nusing anything else is out of the question. Our jboss configuration \ncurrently defaults to 5 connections per database. On our main database \nserver, we handle 7 of the databases that this application uses. On \nthis one server, we usually average around 300 - 500 connections to the \nserver. In our postgres conf file on this particular machine we set the \nmax_connection parameter to 1000. If we set it to much lower we end up \nwith connection errors. Any comments or better way to handle this?\n\nthanks, Maria Wilson\nNASA, Langley Research Center\nHampton, Virginia 23666", "msg_date": "Thu, 26 Aug 2010 16:29:31 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: postgres 8.4.1 number of connections]" }, { "msg_contents": "On Thu, Aug 26, 2010 at 2:29 PM, Maria L. Wilson\n<[email protected]> wrote:\n>\n>\n> we have this application (using jboss/java/hibernate) on linux accessing\n> data on 3 postgres database servers using 8.4.1.\n>\n> One of our many concerns has been the way we handle connections to the\n> database. java/hibernate handle their own pooling so I understand that\n> using anything else is out of the question.\n\nIt's not impossible to put another connection pooler between hibernate\nand your database server. May not be useful or better, but it's\npossible.\n\n> Our jboss configuration\n> currently defaults to 5 connections per database.\n\nI assume this is for each app server. How many app servers do you have?\n\n> On our main database\n> server, we handle 7 of the databases that this application uses. On\n> this one server, we usually average around 300 - 500 connections to the\n> server. 
In our postgres conf file on this particular machine we set the\n> max_connection parameter to 1000. If we set it to much lower we end up\n> with connection errors. Any comments or better way to handle this?\n\nOther than breaking out the databases onto their own server, or trying\nto pool to fewer connections, not really. OTOH, a db server with\nenough memory can handle having that many connections as long as you\ndon't run into any thundering herd issues.\n\nOh, and you should really update your pgsql to 8.4.4. 8.4.1 had some\nissues with heavy load I ran into last year about this time. Which is\nwhy I'm still on 8.3.11\n\n--\nTo understand recursion, one must first understand recursion.\n", "msg_date": "Thu, 26 Aug 2010 15:13:44 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: postgres 8.4.1 number of connections]" }, { "msg_contents": "\n> One of our many concerns has been the way we handle connections to the \n> database. java/hibernate handle their own pooling so I understand that \n> using anything else is out of the question. \n\nIt's not out of the question, but it's probably not necessary.\n\n> Our jboss configuration \n> currently defaults to 5 connections per database. On our main database \n> server, we handle 7 of the databases that this application uses. On \n> this one server, we usually average around 300 - 500 connections to the \n> server. \n\nDo you have 10 to 20 application servers? Otherwise I don't understand\nhow you're getting 300 to 500 connections. Do you have any stats about\nhow many connections for each database each JBOSS server actually needs?\n It seems unusual that it would need more than one or two.\n\nAlso, why does the application use 7 different databases?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 26 Aug 2010 18:05:57 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: postgres 8.4.1 number of connections]" }, { "msg_contents": "On Thu, Aug 26, 2010 at 23:13, Scott Marlowe <[email protected]> wrote:\n> On Thu, Aug 26, 2010 at 2:29 PM, Maria L. Wilson\n> <[email protected]> wrote:\n>>\n>>\n>> we have this application (using jboss/java/hibernate) on linux accessing\n>> data on 3 postgres database servers using 8.4.1.\n>>\n>> One of our many concerns has been the way we handle connections to the\n>> database.  java/hibernate handle their own pooling so I understand that\n>> using anything else is out of the question.\n>\n> It's not impossible to put another connection pooler between hibernate\n> and your database server.  May not be useful or better, but it's\n> possible.\n\nI've been using pgbouncer in combination with the jboss connection\npooler several times, mainly for the reason that reconfiguring the\njboss connection poolers (particularly if you have lots of them) can\ncause quite a bit of downtime. Just sticking a 1-1 mapping pgbouncer\nin between with support for SUSPEND makes a lot of difference if you\nswitch master/slave on your replication /ha. 
It'll still break the\nconnections for jboss, but it'll recover from that a *lot* faster than\na reconfig.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Fri, 27 Aug 2010 09:54:46 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: postgres 8.4.1 number of connections]" }, { "msg_contents": "thanks for your response....\nwe have 3 app servers that attach to this one particular database server.\nWhat kind of load issues did you find on 8.4.1? I'd be interested in \nanything documented on it - this might make an upgrade a higher priority!\n\nthanks, Maria\n\nScott Marlowe wrote:\n> On Thu, Aug 26, 2010 at 2:29 PM, Maria L. Wilson\n> <[email protected]> wrote:\n> \n>> we have this application (using jboss/java/hibernate) on linux accessing\n>> data on 3 postgres database servers using 8.4.1.\n>>\n>> One of our many concerns has been the way we handle connections to the\n>> database. java/hibernate handle their own pooling so I understand that\n>> using anything else is out of the question.\n>> \n>\n> It's not impossible to put another connection pooler between hibernate\n> and your database server. May not be useful or better, but it's\n> possible.\n>\n> \n>> Our jboss configuration\n>> currently defaults to 5 connections per database.\n>> \n>\n> I assume this is for each app server. How many app servers do you have?\n>\n> \n>> On our main database\n>> server, we handle 7 of the databases that this application uses. On\n>> this one server, we usually average around 300 - 500 connections to the\n>> server. In our postgres conf file on this particular machine we set the\n>> max_connection parameter to 1000. If we set it to much lower we end up\n>> with connection errors. Any comments or better way to handle this?\n>> \n>\n> Other than breaking out the databases onto their own server, or trying\n> to pool to fewer connections, not really. OTOH, a db server with\n> enough memory can handle having that many connections as long as you\n> don't run into any thundering herd issues.\n>\n> Oh, and you should really update your pgsql to 8.4.4. 8.4.1 had some\n> issues with heavy load I ran into last year about this time. Which is\n> why I'm still on 8.3.11\n>\n> --\n> To understand recursion, one must first understand recursion.\n> \n\n\n\n\n\n\nthanks for your response....\nwe have 3 app servers that attach to this one particular database\nserver.\nWhat kind of load issues did you find on 8.4.1?  I'd be interested in\nanything documented on it - this might make an upgrade a higher\npriority!\n\nthanks,  Maria\n\nScott Marlowe wrote:\n\nOn Thu, Aug 26, 2010 at 2:29 PM, Maria L. Wilson\n<[email protected]> wrote:\n \n\n\nwe have this application (using jboss/java/hibernate) on linux accessing\ndata on 3 postgres database servers using 8.4.1.\n\nOne of our many concerns has been the way we handle connections to the\ndatabase. java/hibernate handle their own pooling so I understand that\nusing anything else is out of the question.\n \n\n\nIt's not impossible to put another connection pooler between hibernate\nand your database server. May not be useful or better, but it's\npossible.\n\n \n\n Our jboss configuration\ncurrently defaults to 5 connections per database.\n \n\n\nI assume this is for each app server. How many app servers do you have?\n\n \n\n On our main database\nserver, we handle 7 of the databases that this application uses. 
On\nthis one server, we usually average around 300 - 500 connections to the\nserver. In our postgres conf file on this particular machine we set the\nmax_connection parameter to 1000. If we set it to much lower we end up\nwith connection errors. Any comments or better way to handle this?\n \n\n\nOther than breaking out the databases onto their own server, or trying\nto pool to fewer connections, not really. OTOH, a db server with\nenough memory can handle having that many connections as long as you\ndon't run into any thundering herd issues.\n\nOh, and you should really update your pgsql to 8.4.4. 8.4.1 had some\nissues with heavy load I ran into last year about this time. Which is\nwhy I'm still on 8.3.11\n\n--\nTo understand recursion, one must first understand recursion.", "msg_date": "Mon, 30 Aug 2010 09:53:20 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: postgres 8.4.1 number of connections]" } ]
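As a hedged illustration of the open question in the thread above (how many of the 300 to 500 connections each database and app server actually holds), a quick check against the standard pg_stat_activity view; nothing here is specific to the poster's schema.

SELECT datname,
       client_addr,
       count(*) AS connections
FROM pg_stat_activity
GROUP BY datname, client_addr
ORDER BY connections DESC;

Sampling this during peak load shows whether those connections are genuinely busy or mostly idle pool slots, which is the number to size max_connections, the JBoss pools, or a pgbouncer layer against.
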
[ { "msg_contents": "I am new to Postgres and I am trying to understand the Explain Analyze\nso I can tune the following query. I run the same query using mysql and\nit takes less than 50ms. I run it on postgres and it takes 10 seconds.\nI feel like I am missing something very obvious. (VehicleUsed is a big\ntable over 750,000records) and datasetgroupyearmakemodel has 150000\nrecords.\n\n \n\nIt looks like the cost is highest in the Hash Join on Postalcode. Am\nI reading this correctly.? I do have indexes on the lower(postalcode)\nin both tables. Why wouldn't be using the index? Thanks in advance for\nany help.\n\n \n\nHere is my query:\n\n \n\nselect distinct VehicleMake.VehicleMake\n\nfrom VehicleUsed\n\ninner join PostalCodeRegionCountyCity on ( lower (\nVehicleUsed.PostalCode ) = lower ( PostalCodeRegionCountyCity.PostalCode\n) )\n\n INNER JOIN DATASETGROUPYEARMAKEMODEL ON ( VEHICLEUSED.VEHICLEYEAR =\nDATASETGROUPYEARMAKEMODEL.VEHICLEYEAR ) \n\n AND ( VEHICLEUSED.VEHICLEMAKEID =\nDATASETGROUPYEARMAKEMODEL.VEHICLEMAKEID ) \n\n AND ( VEHICLEUSED.VEHICLEMODELID =\nDATASETGROUPYEARMAKEMODEL.VEHICLEMODELID )\n\n inner join VehicleMake on ( VehicleUsed.VehicleMakeId =\nVehicleMake.VehicleMakeId )\n\nwhere \n\n( DatasetGroupYearMakeModel.DatasetGroupId = 3 ) and\n\n ( VehicleUsed.DatasetId <> 113 ) \n\n and ( VehicleUsed.ProductGroupId <> 13 ) \n\n and ( PostalCodeRegionCountyCity.RegionId = 36 )\n\norder by VehicleMake.VehicleMake\n\n limit 500000\n\n \n\nHere is the explain analyze\n\n \n\n\"Limit (cost=38292.53..38293.19 rows=261 width=8) (actual\ntime=10675.857..10675.892 rows=42 loops=1)\"\n\n\" -> Sort (cost=38292.53..38293.19 rows=261 width=8) (actual\ntime=10675.855..10675.868 rows=42 loops=1)\"\n\n\" Sort Key: vehiclemake.vehiclemake\"\n\n\" Sort Method: quicksort Memory: 18kB\"\n\n\" -> HashAggregate (cost=38279.45..38282.06 rows=261 width=8)\n(actual time=10675.710..10675.728 rows=42 loops=1)\"\n\n\" -> Hash Join (cost=436.31..38270.51 rows=3576 width=8)\n(actual time=4.471..10658.291 rows=10425 loops=1)\"\n\n\" Hash Cond: (vehicleused.vehiclemakeid =\nvehiclemake.vehiclemakeid)\"\n\n\" -> Hash Join (cost=428.43..38213.47 rows=3576\nwidth=4) (actual time=4.152..10639.742 rows=10425 loops=1)\"\n\n\" Hash Cond:\n(lower((vehicleused.postalcode)::text) =\nlower((postalcoderegioncountycity.postalcode)::text))\"\n\n\" -> Nested Loop (cost=101.81..37776.78\nrows=11887 width=10) (actual time=1.172..9876.586 rows=382528 loops=1)\"\n\n\" -> Bitmap Heap Scan on\ndatasetgroupyearmakemodel (cost=101.81..948.81 rows=5360 width=6)\n(actual time=0.988..17.800 rows=5377 loops=1)\"\n\n\" Recheck Cond: (datasetgroupid =\n3)\"\n\n\" -> Bitmap Index Scan on\ndatasetgroupyearmakemodel_i04 (cost=0.00..100.47 rows=5360 width=0)\n(actual time=0.830..0.830 rows=5377 loops=1)\"\n\n\" Index Cond: (datasetgroupid\n= 3)\"\n\n\" -> Index Scan using vehicleused_i10 on\nvehicleused (cost=0.00..6.85 rows=1 width=12) (actual time=0.049..1.775\nrows=71 loops=5377)\"\n\n\" Index Cond:\n((vehicleused.vehiclemodelid = datasetgroupyearmakemodel.vehiclemodelid)\nAND (vehicleused.vehiclemakeid =\ndatasetgroupyearmakemodel.vehiclemakeid) AND (vehicleused.vehicleyear =\ndatasetgroupyearmakemodel.vehicleyear))\"\n\n\" Filter: ((vehicleused.datasetid\n<> 113) AND (vehicleused.productgroupid <> 13))\"\n\n\" -> Hash (cost=308.93..308.93 rows=1416\nwidth=6) (actual time=2.738..2.738 rows=1435 loops=1)\"\n\n\" -> Bitmap Heap Scan on\npostalcoderegioncountycity (cost=27.23..308.93 rows=1416 width=6)\n(actual time=0.222..0.955 
rows=1435 loops=1)\"\n\n\" Recheck Cond: (regionid = 36)\"\n\n\" -> Bitmap Index Scan on\npostalcoderegioncountycity_i05 (cost=0.00..26.87 rows=1416 width=0)\n(actual time=0.202..0.202 rows=1435 loops=1)\"\n\n\" Index Cond: (regionid =\n36)\"\n\n\" -> Hash (cost=4.61..4.61 rows=261 width=10)\n(actual time=0.307..0.307 rows=261 loops=1)\"\n\n\" -> Seq Scan on vehiclemake (cost=0.00..4.61\nrows=261 width=10) (actual time=0.033..0.154 rows=261 loops=1)\"\n\n\"Total runtime: 10676.058 ms\"\n\n \n\nPam Ozer\n\nData Architect\n\[email protected] <mailto:[email protected]> \n\ntel. 949.705.3468\n\n \n\n \n\nSource Interlink Media\n\n1733 Alton Pkwy Suite 100, Irvine, CA 92606\n\nwww.simautomotive.com <http://www.simautomotive.com> \n\nConfidentiality Notice- This electronic communication, and all\ninformation herein, including files attached hereto, is private, and is\nthe property of the sender. This communication is intended only for the\nuse of the individual or entity named above. If you are not the intended\nrecipient, you are hereby notified that any disclosure of; dissemination\nof; distribution of; copying of; or, taking any action in reliance upon\nthis communication, is strictly prohibited. If you have received this\ncommunication in error, please immediately notify us by telephone,\n(949)-705-3000, and destroy all copies of this communication. Thank you.", "msg_date": "Thu, 26 Aug 2010 17:03:27 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Query " }, { "msg_contents": "On Thu, Aug 26, 2010 at 6:03 PM, Ozer, Pam <[email protected]> wrote:\n>\n> I am new to Postgres and I am trying to understand the Explain Analyze so I can tune the following query.  I run the same query using mysql and it takes less than 50ms.  I run it on postgres and it takes 10 seconds. I feel like I am missing something very obvious. (VehicleUsed is a big table over 750,000records) and datasetgroupyearmakemodel has 150000 records.\n>\n> It looks like the cost is highest in the Hash Join  on Postalcode.   Am I reading this correctly.?  I do have indexes on the lower(postalcode) in both tables.  Why wouldn’t be using the index?\n\nNo, it's spending most of its time here:\n\n\n\n> \"                          ->  Nested Loop  (cost=101.81..37776.78 rows=11887 width=10) (actual time=1.172..9876.586 rows=382528 loops=1)\"\n\nNote that it expects 11,887 rows but gets 382k rows.\n\nTry turning up default stats target and running analyze again and see\nhow it runs.\n", "msg_date": "Thu, 26 Aug 2010 18:18:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "We need more information than that, like:\n\nWhat version of PostgreSQL?\nWhat does the hardware look like?\nWhat does the disk and tablespace layout look like?\nHow are your configuration variables set?\n\nOther than that, are the statistics up to date on the VehicleMake table?\n\nBob Lunney\n\n--- On Thu, 8/26/10, Ozer, Pam <[email protected]> wrote:\n\nFrom: Ozer, Pam <[email protected]>\nSubject: [PERFORM] Slow Query\nTo: [email protected]\nDate: Thursday, August 26, 2010, 8:03 PM\n\n\n\n\n \n \n\n\n\n\n\n\n\nI am new to Postgres and I am trying to understand the\nExplain Analyze so I can tune the following query.  I run the same query using\nmysql and it takes less than 50ms.  I run it on postgres and it takes 10\nseconds. I feel like I am missing something very obvious. 
(VehicleUsed is a big\ntable over 750,000records) and datasetgroupyearmakemodel has 150000 records. \n\n   \n\nIt looks like the cost is highest in the Hash Join  on\nPostalcode.   Am I reading this correctly.?  I do have indexes on the\nlower(postalcode) in both tables.  Why wouldn’t be using the index?  Thanks\nin advance for any help. \n\n   \n\nHere is my query: \n\n   \n\nselect  distinct VehicleMake.VehicleMake \n\nfrom VehicleUsed \n\ninner join PostalCodeRegionCountyCity on ( lower (\nVehicleUsed.PostalCode ) = lower ( PostalCodeRegionCountyCity.PostalCode ) ) \n\n INNER JOIN DATASETGROUPYEARMAKEMODEL ON (\nVEHICLEUSED.VEHICLEYEAR = DATASETGROUPYEARMAKEMODEL.VEHICLEYEAR ) \n\n AND ( VEHICLEUSED.VEHICLEMAKEID =\nDATASETGROUPYEARMAKEMODEL.VEHICLEMAKEID ) \n\n AND ( VEHICLEUSED.VEHICLEMODELID =\nDATASETGROUPYEARMAKEMODEL.VEHICLEMODELID ) \n\n inner join VehicleMake on ( VehicleUsed.VehicleMakeId =\nVehicleMake.VehicleMakeId ) \n\nwhere \n\n( DatasetGroupYearMakeModel.DatasetGroupId = 3 )  and \n\n ( VehicleUsed.DatasetId <> 113 ) \n\n and ( VehicleUsed.ProductGroupId <> 13 ) \n\n and ( PostalCodeRegionCountyCity.RegionId = 36 ) \n\norder by VehicleMake.VehicleMake \n\n limit 500000 \n\n   \n\nHere is the explain analyze \n\n   \n\n\"Limit  (cost=38292.53..38293.19 rows=261 width=8)\n(actual time=10675.857..10675.892 rows=42 loops=1)\" \n\n\"  ->  Sort  (cost=38292.53..38293.19 rows=261\nwidth=8) (actual time=10675.855..10675.868 rows=42 loops=1)\" \n\n\"        Sort Key: vehiclemake.vehiclemake\" \n\n\"        Sort Method:  quicksort  Memory: 18kB\" \n\n\"        ->  HashAggregate  (cost=38279.45..38282.06\nrows=261 width=8) (actual time=10675.710..10675.728 rows=42 loops=1)\" \n\n\"              ->  Hash Join  (cost=436.31..38270.51\nrows=3576 width=8) (actual time=4.471..10658.291 rows=10425 loops=1)\" \n\n\"                    Hash Cond:\n(vehicleused.vehiclemakeid = vehiclemake.vehiclemakeid)\" \n\n\"                    ->  Hash Join \n(cost=428.43..38213.47 rows=3576 width=4) (actual time=4.152..10639.742\nrows=10425 loops=1)\" \n\n\"                          Hash Cond:\n(lower((vehicleused.postalcode)::text) =\nlower((postalcoderegioncountycity.postalcode)::text))\" \n\n\"                          ->  Nested Loop \n(cost=101.81..37776.78 rows=11887 width=10) (actual time=1.172..9876.586\nrows=382528 loops=1)\" \n\n\"                                ->  Bitmap Heap\nScan on datasetgroupyearmakemodel  (cost=101.81..948.81 rows=5360 width=6)\n(actual time=0.988..17.800 rows=5377 loops=1)\" \n\n\"                                      Recheck Cond:\n(datasetgroupid = 3)\" \n\n\"                                      ->  Bitmap\nIndex Scan on datasetgroupyearmakemodel_i04  (cost=0.00..100.47 rows=5360\nwidth=0) (actual time=0.830..0.830 rows=5377 loops=1)\" \n\n\"                                            Index\nCond: (datasetgroupid = 3)\" \n\n\"                                ->  Index Scan\nusing vehicleused_i10 on vehicleused  (cost=0.00..6.85 rows=1 width=12) (actual\ntime=0.049..1.775 rows=71 loops=5377)\" \n\n\"                                      Index Cond:\n((vehicleused.vehiclemodelid = datasetgroupyearmakemodel.vehiclemodelid) AND\n(vehicleused.vehiclemakeid = datasetgroupyearmakemodel.vehiclemakeid) AND\n(vehicleused.vehicleyear = datasetgroupyearmakemodel.vehicleyear))\" \n\n\"                                      Filter:\n((vehicleused.datasetid <> 113) AND (vehicleused.productgroupid <>\n13))\" \n\n\"                          ->  Hash \n(cost=308.93..308.93 rows=1416 
width=6) (actual time=2.738..2.738 rows=1435\nloops=1)\" \n\n\"                                ->  Bitmap Heap\nScan on postalcoderegioncountycity  (cost=27.23..308.93 rows=1416 width=6)\n(actual time=0.222..0.955 rows=1435 loops=1)\" \n\n\"                                      Recheck Cond:\n(regionid = 36)\" \n\n\"                                      ->  Bitmap\nIndex Scan on postalcoderegioncountycity_i05  (cost=0.00..26.87 rows=1416\nwidth=0) (actual time=0.202..0.202 rows=1435 loops=1)\" \n\n\"                                            Index\nCond: (regionid = 36)\" \n\n\"                    ->  Hash  (cost=4.61..4.61\nrows=261 width=10) (actual time=0.307..0.307 rows=261 loops=1)\" \n\n\"                          ->  Seq Scan on\nvehiclemake  (cost=0.00..4.61 rows=261 width=10) (actual time=0.033..0.154\nrows=261 loops=1)\" \n\n\"Total runtime: 10676.058 ms\" \n\n   \n\n\n\nPam Ozer \n\nData Architect \n\[email protected]\n \n\n\n \n \n tel. 949.705.3468 \n \n \n\n\n\n\n\n \n \n \n \n \n \n \n \n Source Interlink Media \n 1733 Alton Pkwy Suite 100, Irvine, CA 92606 \n www.simautomotive.com\n \n \n \n\n\n\n\nConfidentiality Notice- This electronic communication, and all\ninformation herein, including files attached hereto, is private, and is the\nproperty of the sender. This communication is intended only for the use of the\nindividual or entity named above. If you are not the intended recipient, you\nare hereby notified that any disclosure of; dissemination of; distribution of;\ncopying of; or, taking any action in reliance upon this communication, is\nstrictly prohibited. If you have received this communication in error, please\nimmediately notify us by telephone, (949)-705-3000, and destroy all copies of\nthis communication. Thank you.", "msg_date": "Thu, 26 Aug 2010 17:19:37 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "\"Ozer, Pam\" <[email protected]> wrote:\n> I am new to Postgres and I am trying to understand the Explain\n> Analyze so I can tune the following query.\n \nFair enough. If you wanted advice from others on how to tune the\nquery, you would need to provide additional information, as\ndescribed here:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> It looks like the cost is highest in the Hash Join on Postalcode.\n> Am I reading this correctly.?\n \nNo.\n \nHere's how I read it:\n \nIt's scanning the datasetgroupyearmakemodel_i04 index for\ndatasetgroupid = 3, and finding 5377 index entries in 1.775 ms. \nThen it read the related tuples (in the order they exist in the\ntable's heap space). Added to the index scan, that brings us to\n17.8 ms.\n \nFor each of the preceding rows, it uses the vehicleused_i10 index to\nfind matching vehicleused rows. On average it finds 71 rows in\n1.775 ms, but it does this 5377 times, which brings us to 382528\nrows in 9876.586 ms.\n \nIndependently of the above, it uses the\npostalcoderegioncountycity_i05 index to find\npostalcoderegioncountycity rows where regionid = 36, and puts them\ninto a hash table. It takes 2.738 ms to hash the 1435 matching\nrows. Probing this hash for each of the 382528 rows leaves us with\n10425 rows, and brings the run time to 10639.742 ms.\n \nIndependently of the above, the vehiclemake table is sequentially\nscanned to create a hash table with 261 rows. That takes 0.307 ms. 
\nProbing that for each of the 10425 rows doesn't eliminate anything\nand bring the run time to 10658.291 ms.\n \nSince DISTINCT was specified, the results of the preceding are fed\ninto a hash table, leaving 42 distinct values. Run time has now\nreached 10675.728 ms.\n \nThese 42 rows are sorted, based on the ORDER BY clause, bringing us\nto 10675.868 ms.\n \nThe output of the sort goes through a filter which will cut off\noutput after 500000 rows, per the LIMIT clause, bringing run time to\n10675.892 ms.\n \n> I do have indexes on the lower(postalcode)\n> in both tables. Why wouldn't be using the index?\n \nEither it doesn't think it can use that index for some reason (e.g.,\none column is char(n) and the other is varchar(n)), or it thinks\nthat using that index will be slower. The latter could be due to\nbad statistics or improper configuration. Without more information,\nit's hard to guess why it thinks that or whether it's right.\n \nI suggest you read the SlowQueryQuestions link and post with more\ninformation.\n \nI will offer the observation that the numbers above suggest a fair\namount of the data for this query came from cache, and unless you\nhave tuned your database's costing factors to allow it to expect\nthat, it's not surprising that it makes bad choices. Just as an\nexperiment, try running these statements in a connection before your\nquery on that same connection -- if they work out well, it might\nmake sense to make similar changes to the postgresql.conf file:\n \nset effective_cache_size = '3GB'; -- or 1GB less than total RAM\nset seq_page_cost = '0.1';\nset random_page_cost = '0.1';\n \n-Kevin\n", "msg_date": "Fri, 27 Aug 2010 09:17:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "Ozer, Pam wrote:\n>\n> I am new to Postgres and I am trying to understand the Explain Analyze \n> so I can tune the following query.\n>\n\nYou should try http://explain.depesz.com/ , where you can post query \nplans like this and see where the time is going in a very easy to work \nwith form. It will highlight row estimation mistakes for you too.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\nOzer, Pam wrote:\n\n\n\n\n\n\nI am new to Postgres and I am trying to\nunderstand the\nExplain Analyze so I can tune the following query. \n\n\n\n\nYou should try http://explain.depesz.com/ , where you can post query\nplans like this and see where the time is going in a very easy to work\nwith form.  It will highlight row estimation mistakes for you too.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us", "msg_date": "Fri, 27 Aug 2010 13:30:50 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "Thank you all for your help. I will also get the rest of the\ninformation so it will be more clear.\n\n \n\nFrom: Greg Smith [mailto:[email protected]] \nSent: Friday, August 27, 2010 10:31 AM\nTo: Ozer, Pam\nCc: [email protected]\nSubject: Re: [PERFORM] Slow Query\n\n \n\nOzer, Pam wrote: \n\nI am new to Postgres and I am trying to understand the Explain Analyze\nso I can tune the following query. \n\n\nYou should try http://explain.depesz.com/ , where you can post query\nplans like this and see where the time is going in a very easy to work\nwith form. 
It will highlight row estimation mistakes for you too.\n\n\n\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\n\n\n\nThank you all for your help.  I\nwill also get the rest of the information so it will be more clear.\n \n\n\nFrom: Greg Smith\n[mailto:[email protected]] \nSent: Friday, August 27, 2010 10:31 AM\nTo: Ozer, Pam\nCc: [email protected]\nSubject: Re: [PERFORM] Slow Query\n\n\n \nOzer, Pam wrote: \nI am new to Postgres and I am trying to understand the\nExplain Analyze so I can tune the following query. \n\nYou should try http://explain.depesz.com/\n, where you can post query plans like this and see where the time is going in a\nvery easy to work with form.  It will highlight row estimation mistakes\nfor you too.\n\n\n\n-- Greg Smith  2ndQuadrant US  Baltimore, MDPostgreSQL Training, Services and [email protected]   www.2ndQuadrant.us", "msg_date": "Fri, 27 Aug 2010 10:44:19 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" } ]
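A short sketch combining the two suggestions made in the thread above: raise the statistics target on the columns behind the misestimated nested loop (11,887 rows expected, 382,528 returned), then try Kevin Grittner's session-level cost settings before touching postgresql.conf. The value 1000 is an assumption to experiment with, not a figure from the posts.

-- Per-column alternative to raising default_statistics_target globally.
ALTER TABLE vehicleused ALTER COLUMN vehicleyear    SET STATISTICS 1000;
ALTER TABLE vehicleused ALTER COLUMN vehiclemakeid  SET STATISTICS 1000;
ALTER TABLE vehicleused ALTER COLUMN vehiclemodelid SET STATISTICS 1000;
ANALYZE vehicleused;

-- Kevin's experiment, for one session only.
SET effective_cache_size = '3GB';   -- roughly 1GB less than total RAM
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
-- then re-run the EXPLAIN ANALYZE from the original post and compare plans

If the row estimate on that nested loop improves, the join order and the choice between the hash join and the lower(postalcode) indexes usually sort themselves out.
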
[ { "msg_contents": "I have a query that\n\n \n\nSelect Distinct VehicleId\n\n>From Vehicle\n\nWhere VehicleMileage between 0 and 15000.\n\n \n\nI have an index on VehicleMileage. Is there another way to put an index\non a between? The index is not being picked up. It does get picked up\nwhen I run \n\n \n\nSelect Distinct VehicleId\n\n>From Vehicle\n\nWhere VehicleMileage = 15000.\n\n \n\n \n\nI just want to make sure that there is not a special index I should be\nusing.\n\n \n\nThanks\n\n \n\nPam Ozer\n\n\n\n\n\n\n\n\n\n\n\nI have a query that\n \nSelect  Distinct VehicleId\nFrom Vehicle\nWhere VehicleMileage between 0 and 15000.\n \nI have an index on VehicleMileage.  Is there another way to\nput an index on a between?  The index is not being picked up.  It does get\npicked up when I run \n \nSelect  Distinct VehicleId\nFrom Vehicle\nWhere VehicleMileage = 15000.\n \n \nI just want to make sure that there is not a special index I\nshould be using.\n \nThanks\n \n\nPam Ozer", "msg_date": "Fri, 27 Aug 2010 17:21:48 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Using Between" }, { "msg_contents": "On 8/27/10 5:21 PM, Ozer, Pam wrote:\n> I have a query that\n>\n> Select Distinct VehicleId\n>\n> From Vehicle\n>\n> Where VehicleMileage between 0 and 15000.\n>\n> I have an index on VehicleMileage. Is there another way to put an index on a between? The index is not being picked up. It does get picked up when I run\n>\n> Select Distinct VehicleId\n>\n> From Vehicle\n>\n> Where VehicleMileage = 15000.\n>\n> I just want to make sure that there is not a special index I should be using.\n\nYou need to post EXPLAIN ANALYZE of your query. It could be that an index scan is actually not a good plan (for example, a sequential scan might be faster if most of your vehicles have low mileage). Without the EXPLAIN ANALYZE, there's no way to say what's going on.\n\nDid you ANALYZE your database after you loaded the data?\n\nCraig\n\n> Thanks\n>\n> *Pam Ozer*\n>\n\n", "msg_date": "Fri, 27 Aug 2010 17:41:53 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Between" }, { "msg_contents": "Yes. ANALYZE was run after we loaded the data. 
Thanks for your\nassistance\nHere is the full Query.\n\nselect distinct VehicleUsed.VehicleUsedId as VehicleUsedId ,\n VehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority ,\n VehicleUsed.HasVehicleUsedThumbnail as HasVehicleUsedThumbnail ,\n VehicleUsed.HasVehicleUsedPrice as HasVehicleUsedPrice ,\n VehicleUsed.VehicleUsedPrice as VehicleUsedPrice ,\n VehicleUsed.HasVehicleUsedMileage as HasVehicleUsedMileage ,\n VehicleUsed.VehicleUsedMileage as VehicleUsedMileage ,\n VehicleUsed.IsCPO as IsCPO ,\n VehicleUsed.IsMTCA as IsMTCA\nfrom VehicleUsed\ninner join PostalCodeRegionCountyCity on ( lower (\nVehicleUsed.PostalCode ) = lower ( PostalCodeRegionCountyCity.PostalCode\n) )\nwhere \n( VehicleUsed.VehicleUsedPriceRangeFloor between 0 and 15000)\n and \n( PostalCodeRegionCountyCity.RegionId = 26 )\n\norder by VehicleUsed.VehicleUsedDisplayPriority ,\n VehicleUsed.HasVehicleUsedThumbnail desc ,\n VehicleUsed.HasVehicleUsedPrice desc ,\n VehicleUsed.VehicleUsedPrice ,\n VehicleUsed.HasVehicleUsedMileage desc ,\n VehicleUsed.VehicleUsedMileage ,\n VehicleUsed.IsCPO desc ,\n VehicleUsed.IsMTCA desc\nlimit 500000\n\n\n\n\nHere is the explain Analyze\n\nLimit (cost=59732.41..60849.24 rows=44673 width=39) (actual\ntime=1940.274..1944.312 rows=2363 loops=1)\n Output: vehicleused.vehicleusedid,\nvehicleused.vehicleuseddisplaypriority,\nvehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\nvehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\nvehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n -> Unique (cost=59732.41..60849.24 rows=44673 width=39) (actual\ntime=1940.272..1943.011 rows=2363 loops=1)\n Output: vehicleused.vehicleusedid,\nvehicleused.vehicleuseddisplaypriority,\nvehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\nvehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\nvehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n -> Sort (cost=59732.41..59844.10 rows=44673 width=39) (actual\ntime=1940.270..1941.101 rows=2363 loops=1)\n Output: vehicleused.vehicleusedid,\nvehicleused.vehicleuseddisplaypriority,\nvehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\nvehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\nvehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n Sort Key: vehicleused.vehicleuseddisplaypriority,\nvehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\nvehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\nvehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca,\nvehicleused.vehicleusedid\n Sort Method: quicksort Memory: 231kB\n -> Hash Join (cost=289.85..55057.07 rows=44673 width=39)\n(actual time=3.799..1923.958 rows=2363 loops=1)\n Output: vehicleused.vehicleusedid,\nvehicleused.vehicleuseddisplaypriority,\nvehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\nvehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\nvehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n Hash Cond: (lower((vehicleused.postalcode)::text) =\nlower((postalcoderegioncountycity.postalcode)::text))\n -> Seq Scan on vehicleused (cost=0.00..51807.63\nrows=402058 width=45) (actual time=0.016..1454.616 rows=398495 loops=1)\n Output: vehicleused.vehicleusedid,\nvehicleused.datasetid, vehicleused.vehicleusedproductid,\nvehicleused.sellernodeid, vehicleused.vehicleyear,\nvehicleused.vehiclemakeid, vehicleused.vehiclemodelid,\nvehicleused.vehiclesubmodelid, 
vehicleused.vehiclebodystyleid,\nvehicleused.vehicledoors, vehicleused.vehicleenginetypeid,\nvehicleused.vehicledrivetrainid, vehicleused.vehicleexteriorcolorid,\nvehicleused.hasvehicleusedthumbnail, vehicleused.postalcode,\nvehicleused.vehicleusedprice, vehicleused.vehicleusedmileage,\nvehicleused.buyerguid, vehicleused.vehicletransmissiontypeid,\nvehicleused.vehicleusedintid, vehicleused.vehicleuseddisplaypriority,\nvehicleused.vehicleusedsearchradius, vehicleused.vehiclejatoimagepath,\nvehicleused.vehiclebodystylegroupid, vehicleused.productid,\nvehicleused.productgroupid, vehicleused.vehiclevin,\nvehicleused.vehicleclassgroupid,\nvehicleused.vehiclegenericexteriorcolorid, vehicleused.highlight,\nvehicleused.buyerid, vehicleused.dealerid,\nvehicleused.hasvehicleusedprice, vehicleused.dealerstockid,\nvehicleused.datesold, vehicleused.hasthumbnailimagepath,\nvehicleused.vehicleinteriorcolorid, vehicleused.vehicleconditionid,\nvehicleused.vehicletitletypeid, vehicleused.warranty,\nvehicleused.thumbnailimagepath, vehicleused.fullsizeimagepath,\nvehicleused.description, vehicleused.inserteddate,\nvehicleused.feeddealerid, vehicleused.vehicleusedpricerangefloor,\nvehicleused.vehicleusedmileagerangefloor,\nvehicleused.hasvehicleusedmileage,\nvehicleused.VehicleUsedIntId.distinct_count,\nvehicleused.VehicleUsedPrice.average,\nvehicleused.VehicleUsedId.distinct_count, vehicleused.iscpo,\nvehicleused.ismtca, vehicleused.cpoprogramoemid,\nvehicleused.cpoprogram3rdpartyid\n Filter: ((vehicleusedpricerangefloor >= 0) AND\n(vehicleusedpricerangefloor <= 15000))\n -> Hash (cost=283.32..283.32 rows=522 width=6)\n(actual time=1.084..1.084 rows=532 loops=1)\n Output: postalcoderegioncountycity.postalcode\n -> Bitmap Heap Scan on\npostalcoderegioncountycity (cost=12.30..283.32 rows=522 width=6)\n(actual time=0.092..0.361 rows=532 loops=1)\n Output:\npostalcoderegioncountycity.postalcode\n Recheck Cond: (regionid = 26)\n -> Bitmap Index Scan on\npostalcoderegioncountycity_i05 (cost=0.00..12.17 rows=522 width=0)\n(actual time=0.082..0.082 rows=532 loops=1)\n Index Cond: (regionid = 26)\nTotal runtime: 1945.244 ms\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Craig James\nSent: Friday, August 27, 2010 5:42 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Using Between\n\nOn 8/27/10 5:21 PM, Ozer, Pam wrote:\n> I have a query that\n>\n> Select Distinct VehicleId\n>\n> From Vehicle\n>\n> Where VehicleMileage between 0 and 15000.\n>\n> I have an index on VehicleMileage. Is there another way to put an\nindex on a between? The index is not being picked up. It does get picked\nup when I run\n>\n> Select Distinct VehicleId\n>\n> From Vehicle\n>\n> Where VehicleMileage = 15000.\n>\n> I just want to make sure that there is not a special index I should be\nusing.\n\nYou need to post EXPLAIN ANALYZE of your query. It could be that an\nindex scan is actually not a good plan (for example, a sequential scan\nmight be faster if most of your vehicles have low mileage). 
Without the\nEXPLAIN ANALYZE, there's no way to say what's going on.\n\nDid you ANALYZE your database after you loaded the data?\n\nCraig\n\n> Thanks\n>\n> *Pam Ozer*\n>\n\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Aug 2010 09:51:21 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using Between" }, { "msg_contents": "On Mon, Aug 30, 2010 at 12:51 PM, Ozer, Pam <[email protected]> wrote:\n> Yes.  ANALYZE was run after we loaded the data.  Thanks for your\n> assistance\n> Here is the full Query.\n>\n> select distinct VehicleUsed.VehicleUsedId as VehicleUsedId ,\n>  VehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority ,\n>  VehicleUsed.HasVehicleUsedThumbnail as HasVehicleUsedThumbnail ,\n>  VehicleUsed.HasVehicleUsedPrice as HasVehicleUsedPrice ,\n>  VehicleUsed.VehicleUsedPrice as VehicleUsedPrice ,\n>  VehicleUsed.HasVehicleUsedMileage as HasVehicleUsedMileage ,\n>  VehicleUsed.VehicleUsedMileage as VehicleUsedMileage ,\n>  VehicleUsed.IsCPO as IsCPO ,\n>  VehicleUsed.IsMTCA as IsMTCA\n> from VehicleUsed\n> inner join PostalCodeRegionCountyCity on ( lower (\n> VehicleUsed.PostalCode ) = lower ( PostalCodeRegionCountyCity.PostalCode\n> ) )\n> where\n> ( VehicleUsed.VehicleUsedPriceRangeFloor between 0 and 15000)\n>  and\n> ( PostalCodeRegionCountyCity.RegionId = 26 )\n>\n> order by VehicleUsed.VehicleUsedDisplayPriority ,\n>  VehicleUsed.HasVehicleUsedThumbnail desc ,\n>  VehicleUsed.HasVehicleUsedPrice desc ,\n>  VehicleUsed.VehicleUsedPrice ,\n>  VehicleUsed.HasVehicleUsedMileage desc ,\n>  VehicleUsed.VehicleUsedMileage ,\n>  VehicleUsed.IsCPO desc ,\n>  VehicleUsed.IsMTCA desc\n> limit 500000\n>\n>\n>\n>\n> Here is the explain Analyze\n>\n> Limit  (cost=59732.41..60849.24 rows=44673 width=39) (actual\n> time=1940.274..1944.312 rows=2363 loops=1)\n>  Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>  ->  Unique  (cost=59732.41..60849.24 rows=44673 width=39) (actual\n> time=1940.272..1943.011 rows=2363 loops=1)\n>        Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>        ->  Sort  (cost=59732.41..59844.10 rows=44673 width=39) (actual\n> time=1940.270..1941.101 rows=2363 loops=1)\n>              Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>              Sort Key: vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca,\n> vehicleused.vehicleusedid\n>              Sort Method:  quicksort  Memory: 231kB\n>              ->  Hash Join  (cost=289.85..55057.07 
rows=44673 width=39)\n> (actual time=3.799..1923.958 rows=2363 loops=1)\n>                    Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>                    Hash Cond: (lower((vehicleused.postalcode)::text) =\n> lower((postalcoderegioncountycity.postalcode)::text))\n>                    ->  Seq Scan on vehicleused  (cost=0.00..51807.63\n> rows=402058 width=45) (actual time=0.016..1454.616 rows=398495 loops=1)\n>                          Output: vehicleused.vehicleusedid,\n> vehicleused.datasetid, vehicleused.vehicleusedproductid,\n> vehicleused.sellernodeid, vehicleused.vehicleyear,\n> vehicleused.vehiclemakeid, vehicleused.vehiclemodelid,\n> vehicleused.vehiclesubmodelid, vehicleused.vehiclebodystyleid,\n> vehicleused.vehicledoors, vehicleused.vehicleenginetypeid,\n> vehicleused.vehicledrivetrainid, vehicleused.vehicleexteriorcolorid,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.postalcode,\n> vehicleused.vehicleusedprice, vehicleused.vehicleusedmileage,\n> vehicleused.buyerguid, vehicleused.vehicletransmissiontypeid,\n> vehicleused.vehicleusedintid, vehicleused.vehicleuseddisplaypriority,\n> vehicleused.vehicleusedsearchradius, vehicleused.vehiclejatoimagepath,\n> vehicleused.vehiclebodystylegroupid, vehicleused.productid,\n> vehicleused.productgroupid, vehicleused.vehiclevin,\n> vehicleused.vehicleclassgroupid,\n> vehicleused.vehiclegenericexteriorcolorid, vehicleused.highlight,\n> vehicleused.buyerid, vehicleused.dealerid,\n> vehicleused.hasvehicleusedprice, vehicleused.dealerstockid,\n> vehicleused.datesold, vehicleused.hasthumbnailimagepath,\n> vehicleused.vehicleinteriorcolorid, vehicleused.vehicleconditionid,\n> vehicleused.vehicletitletypeid, vehicleused.warranty,\n> vehicleused.thumbnailimagepath, vehicleused.fullsizeimagepath,\n> vehicleused.description, vehicleused.inserteddate,\n> vehicleused.feeddealerid, vehicleused.vehicleusedpricerangefloor,\n> vehicleused.vehicleusedmileagerangefloor,\n> vehicleused.hasvehicleusedmileage,\n> vehicleused.VehicleUsedIntId.distinct_count,\n> vehicleused.VehicleUsedPrice.average,\n> vehicleused.VehicleUsedId.distinct_count, vehicleused.iscpo,\n> vehicleused.ismtca, vehicleused.cpoprogramoemid,\n> vehicleused.cpoprogram3rdpartyid\n>                          Filter: ((vehicleusedpricerangefloor >= 0) AND\n> (vehicleusedpricerangefloor <= 15000))\n>                    ->  Hash  (cost=283.32..283.32 rows=522 width=6)\n> (actual time=1.084..1.084 rows=532 loops=1)\n>                          Output: postalcoderegioncountycity.postalcode\n>                          ->  Bitmap Heap Scan on\n> postalcoderegioncountycity  (cost=12.30..283.32 rows=522 width=6)\n> (actual time=0.092..0.361 rows=532 loops=1)\n>                                Output:\n> postalcoderegioncountycity.postalcode\n>                                Recheck Cond: (regionid = 26)\n>                                ->  Bitmap Index Scan on\n> postalcoderegioncountycity_i05  (cost=0.00..12.17 rows=522 width=0)\n> (actual time=0.082..0.082 rows=532 loops=1)\n>                                      Index Cond: (regionid = 26)\n> Total runtime: 1945.244 ms\n\nHow many rows are in the vehicleused table in total?\n\nIs your database small enough to fit in memory?\n\nDo you have any non-default settings in postgresql.conf?\n\n-- \nRobert 
Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 21 Sep 2010 15:35:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Between" }, { "msg_contents": "There are 850,000 records in vehicleused. And the database is too big to be kept in memory.\n\nHere are our config settings.\n\nlisten_addresses = '*' # what IP address(es) to listen on;\n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all\n # (change requires restart)\nport = 5432 # (change requires restart)\nmax_connections = 100 # (change requires restart)\n # (change requires restart)\nbonjour_name = 'colapcnt1d' # defaults to the computer name\n # (change requires restart)\n \nshared_buffers = 500MB # min 128kB\neffective_cache_size = 1000MB\n \nlog_destination = 'stderr' # Valid values are combinations of\nlogging_collector = on # Enable capturing of stderr and csvlog\n \n \ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\n # strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n \n# default configuration for text search\ndefault_text_search_config = 'pg_catalog.english'\n \nmax_connections = 100\ntemp_buffers = 100MB\nwork_mem = 100MB\nmaintenance_work_mem = 500MB\nmax_files_per_process = 10000\nseq_page_cost = 1.0\nrandom_page_cost = 1.1\ncpu_tuple_cost = 0.1\ncpu_index_tuple_cost = 0.05\ncpu_operator_cost = 0.01\ndefault_statistics_target = 1000\nautovacuum_max_workers = 1\n \n#log_min_messages = DEBUG1\n#log_min_duration_statement = 1000\n#log_statement = all\n#log_temp_files = 128\n#log_lock_waits = on\n#log_line_prefix = '%m %u %d %h %p %i %c %l %s'\n#log_duration = on\n#debug_print_plan = on\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: Tuesday, September 21, 2010 12:35 PM\nTo: Ozer, Pam\nCc: Craig James; [email protected]\nSubject: Re: [PERFORM] Using Between\n\nOn Mon, Aug 30, 2010 at 12:51 PM, Ozer, Pam <[email protected]> wrote:\n> Yes.  ANALYZE was run after we loaded the data.  
Thanks for your\n> assistance\n> Here is the full Query.\n>\n> select distinct VehicleUsed.VehicleUsedId as VehicleUsedId ,\n>  VehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority ,\n>  VehicleUsed.HasVehicleUsedThumbnail as HasVehicleUsedThumbnail ,\n>  VehicleUsed.HasVehicleUsedPrice as HasVehicleUsedPrice ,\n>  VehicleUsed.VehicleUsedPrice as VehicleUsedPrice ,\n>  VehicleUsed.HasVehicleUsedMileage as HasVehicleUsedMileage ,\n>  VehicleUsed.VehicleUsedMileage as VehicleUsedMileage ,\n>  VehicleUsed.IsCPO as IsCPO ,\n>  VehicleUsed.IsMTCA as IsMTCA\n> from VehicleUsed\n> inner join PostalCodeRegionCountyCity on ( lower (\n> VehicleUsed.PostalCode ) = lower ( PostalCodeRegionCountyCity.PostalCode\n> ) )\n> where\n> ( VehicleUsed.VehicleUsedPriceRangeFloor between 0 and 15000)\n>  and\n> ( PostalCodeRegionCountyCity.RegionId = 26 )\n>\n> order by VehicleUsed.VehicleUsedDisplayPriority ,\n>  VehicleUsed.HasVehicleUsedThumbnail desc ,\n>  VehicleUsed.HasVehicleUsedPrice desc ,\n>  VehicleUsed.VehicleUsedPrice ,\n>  VehicleUsed.HasVehicleUsedMileage desc ,\n>  VehicleUsed.VehicleUsedMileage ,\n>  VehicleUsed.IsCPO desc ,\n>  VehicleUsed.IsMTCA desc\n> limit 500000\n>\n>\n>\n>\n> Here is the explain Analyze\n>\n> Limit  (cost=59732.41..60849.24 rows=44673 width=39) (actual\n> time=1940.274..1944.312 rows=2363 loops=1)\n>  Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>  ->  Unique  (cost=59732.41..60849.24 rows=44673 width=39) (actual\n> time=1940.272..1943.011 rows=2363 loops=1)\n>        Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>        ->  Sort  (cost=59732.41..59844.10 rows=44673 width=39) (actual\n> time=1940.270..1941.101 rows=2363 loops=1)\n>              Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>              Sort Key: vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca,\n> vehicleused.vehicleusedid\n>              Sort Method:  quicksort  Memory: 231kB\n>              ->  Hash Join  (cost=289.85..55057.07 rows=44673 width=39)\n> (actual time=3.799..1923.958 rows=2363 loops=1)\n>                    Output: vehicleused.vehicleusedid,\n> vehicleused.vehicleuseddisplaypriority,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.hasvehicleusedprice,\n> vehicleused.vehicleusedprice, vehicleused.hasvehicleusedmileage,\n> vehicleused.vehicleusedmileage, vehicleused.iscpo, vehicleused.ismtca\n>                    Hash Cond: (lower((vehicleused.postalcode)::text) =\n> lower((postalcoderegioncountycity.postalcode)::text))\n>                    ->  Seq Scan on vehicleused  (cost=0.00..51807.63\n> rows=402058 width=45) (actual 
time=0.016..1454.616 rows=398495 loops=1)\n>                          Output: vehicleused.vehicleusedid,\n> vehicleused.datasetid, vehicleused.vehicleusedproductid,\n> vehicleused.sellernodeid, vehicleused.vehicleyear,\n> vehicleused.vehiclemakeid, vehicleused.vehiclemodelid,\n> vehicleused.vehiclesubmodelid, vehicleused.vehiclebodystyleid,\n> vehicleused.vehicledoors, vehicleused.vehicleenginetypeid,\n> vehicleused.vehicledrivetrainid, vehicleused.vehicleexteriorcolorid,\n> vehicleused.hasvehicleusedthumbnail, vehicleused.postalcode,\n> vehicleused.vehicleusedprice, vehicleused.vehicleusedmileage,\n> vehicleused.buyerguid, vehicleused.vehicletransmissiontypeid,\n> vehicleused.vehicleusedintid, vehicleused.vehicleuseddisplaypriority,\n> vehicleused.vehicleusedsearchradius, vehicleused.vehiclejatoimagepath,\n> vehicleused.vehiclebodystylegroupid, vehicleused.productid,\n> vehicleused.productgroupid, vehicleused.vehiclevin,\n> vehicleused.vehicleclassgroupid,\n> vehicleused.vehiclegenericexteriorcolorid, vehicleused.highlight,\n> vehicleused.buyerid, vehicleused.dealerid,\n> vehicleused.hasvehicleusedprice, vehicleused.dealerstockid,\n> vehicleused.datesold, vehicleused.hasthumbnailimagepath,\n> vehicleused.vehicleinteriorcolorid, vehicleused.vehicleconditionid,\n> vehicleused.vehicletitletypeid, vehicleused.warranty,\n> vehicleused.thumbnailimagepath, vehicleused.fullsizeimagepath,\n> vehicleused.description, vehicleused.inserteddate,\n> vehicleused.feeddealerid, vehicleused.vehicleusedpricerangefloor,\n> vehicleused.vehicleusedmileagerangefloor,\n> vehicleused.hasvehicleusedmileage,\n> vehicleused.VehicleUsedIntId.distinct_count,\n> vehicleused.VehicleUsedPrice.average,\n> vehicleused.VehicleUsedId.distinct_count, vehicleused.iscpo,\n> vehicleused.ismtca, vehicleused.cpoprogramoemid,\n> vehicleused.cpoprogram3rdpartyid\n>                          Filter: ((vehicleusedpricerangefloor >= 0) AND\n> (vehicleusedpricerangefloor <= 15000))\n>                    ->  Hash  (cost=283.32..283.32 rows=522 width=6)\n> (actual time=1.084..1.084 rows=532 loops=1)\n>                          Output: postalcoderegioncountycity.postalcode\n>                          ->  Bitmap Heap Scan on\n> postalcoderegioncountycity  (cost=12.30..283.32 rows=522 width=6)\n> (actual time=0.092..0.361 rows=532 loops=1)\n>                                Output:\n> postalcoderegioncountycity.postalcode\n>                                Recheck Cond: (regionid = 26)\n>                                ->  Bitmap Index Scan on\n> postalcoderegioncountycity_i05  (cost=0.00..12.17 rows=522 width=0)\n> (actual time=0.082..0.082 rows=532 loops=1)\n>                                      Index Cond: (regionid = 26)\n> Total runtime: 1945.244 ms\n\nHow many rows are in the vehicleused table in total?\n\nIs your database small enough to fit in memory?\n\nDo you have any non-default settings in postgresql.conf?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 21 Sep 2010 13:04:01 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using Between" }, { "msg_contents": "On Tue, Sep 21, 2010 at 4:04 PM, Ozer, Pam <[email protected]> wrote:\n> There are 850,000 records in vehicleused.  And the database is too big to be kept in memory.\n\nAh. So in other words, you are retrieving about half the rows in that\ntable. 
For those kinds of queries, using the index tends to actually\nbe slower, because (1) you read the index in addition to reading the\ntable, which has CPU and I/O cost, and (2) instead of reading the\ntable sequentially, you end up jumping around and reading it out of\norder, which tends to result in more disk seeks and defeats the OS\nprefetch logic. The query planner is usually pretty smart about\nmaking good decisions about this kind of thing. As a debugging aid\n(but never in production), you can try disabling enable_seqscan and\nsee what plan you get that way. If it's slower, well then the query\nplanner did the right thing. If it's faster, then probably you need\nto adjust seq_page_cost and random_page_cost a bit. But my guess is\nthat it will be somewhere between a lot slower and only very slightly\nfaster.\n\nA whole different line of inquiry is ask the more general question\n\"how can I make this query faster?\", but I'm not sure whether you're\nunhappy with how the query is running or just curious about why the\nindex isn't being used.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 22 Sep 2010 06:51:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Between" }, { "msg_contents": "The question is how can we make it faster. \n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: Wednesday, September 22, 2010 3:52 AM\nTo: Ozer, Pam\nCc: Craig James; [email protected]\nSubject: Re: [PERFORM] Using Between\n\nOn Tue, Sep 21, 2010 at 4:04 PM, Ozer, Pam <[email protected]> wrote:\n> There are 850,000 records in vehicleused.  And the database is too big to be kept in memory.\n\nAh. So in other words, you are retrieving about half the rows in that\ntable. For those kinds of queries, using the index tends to actually\nbe slower, because (1) you read the index in addition to reading the\ntable, which has CPU and I/O cost, and (2) instead of reading the\ntable sequentially, you end up jumping around and reading it out of\norder, which tends to result in more disk seeks and defeats the OS\nprefetch logic. The query planner is usually pretty smart about\nmaking good decisions about this kind of thing. As a debugging aid\n(but never in production), you can try disabling enable_seqscan and\nsee what plan you get that way. If it's slower, well then the query\nplanner did the right thing. If it's faster, then probably you need\nto adjust seq_page_cost and random_page_cost a bit. But my guess is\nthat it will be somewhere between a lot slower and only very slightly\nfaster.\n\nA whole different line of inquiry is ask the more general question\n\"how can I make this query faster?\", but I'm not sure whether you're\nunhappy with how the query is running or just curious about why the\nindex isn't being used.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 22 Sep 2010 08:18:55 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using Between" }, { "msg_contents": "On Wed, Sep 22, 2010 at 11:18 AM, Ozer, Pam <[email protected]> wrote:\n> The question is how can we make it faster.\n\nIf there's just one region ID for any given postal code, you might try\nadding a column to vehicleused and storing the postal codes there.\nYou could possibly populate that column using a trigger; probably it\ndoesn't change unless the postalcode changes. 
Then you could index\nthat column and query against it directly, rather than joining to\nPostalCodeRegionCountyCity. Short of that, I don't see any obvious\nway to avoid reading most of the vehicleused table. There may or may\nnot be an index that can speed that up slightly and of course you can\nalways throw hardware at the problem, but fundamentally reading half a\nmillion or more rows isn't going to be instantaneous.\n\nIncidentally, it would probably simplify things to store postal codes\nin the same case throughout the system. If you can avoid the need to\nwrite lower(x) = lower(y) and just write x = y you may get better\nplans. I'm not sure that's the case in this particular example but\nit's something to think about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 22 Sep 2010 12:27:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Between" }, { "msg_contents": "Thank you. I will take a look at those suggestions.\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: Wednesday, September 22, 2010 9:27 AM\nTo: Ozer, Pam\nCc: Craig James; [email protected]\nSubject: Re: [PERFORM] Using Between\n\nOn Wed, Sep 22, 2010 at 11:18 AM, Ozer, Pam <[email protected]>\nwrote:\n> The question is how can we make it faster.\n\nIf there's just one region ID for any given postal code, you might try\nadding a column to vehicleused and storing the postal codes there.\nYou could possibly populate that column using a trigger; probably it\ndoesn't change unless the postalcode changes. Then you could index\nthat column and query against it directly, rather than joining to\nPostalCodeRegionCountyCity. Short of that, I don't see any obvious\nway to avoid reading most of the vehicleused table. There may or may\nnot be an index that can speed that up slightly and of course you can\nalways throw hardware at the problem, but fundamentally reading half a\nmillion or more rows isn't going to be instantaneous.\n\nIncidentally, it would probably simplify things to store postal codes\nin the same case throughout the system. If you can avoid the need to\nwrite lower(x) = lower(y) and just write x = y you may get better\nplans. I'm not sure that's the case in this particular example but\nit's something to think about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 22 Sep 2010 09:28:43 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using Between" }, { "msg_contents": ">>> The question is how can we make it faster.\n\n>>If there's just one region ID for any given postal code, you might try\n>>adding a column to vehicleused and storing the postal codes there.\n>>You could possibly populate that column using a trigger; probably it\n>>doesn't change unless the postalcode changes. Then you could index\n>>that column and query against it directly, rather than joining to\n>>PostalCodeRegionCountyCity. Short of that, I don't see any obvious\n>>way to avoid reading most of the vehicleused table. There may or may\n>>not be an index that can speed that up slightly and of course you can\n>>always throw hardware at the problem, but fundamentally reading half a\n>>million or more rows isn't going to be instantaneous.\n\n>>Incidentally, it would probably simplify things to store postal codes\n>>in the same case throughout the system. 
If you can avoid the need to\n>>write lower(x) = lower(y) and just write x = y you may get better\n>>plans.  I'm not sure that's the case in this particular example but\n>>it's something to think about.\n\nSomething else you might test is bumping the read-ahead value. Most Linux\ninstalls have this at 256 (512-byte sectors, i.e. 128KB); you might try\nbumping the value to ~8MB (e.g. with blockdev --setra) and tuning from\nthere. This may help you slightly with seq scan performance. As always:\nYMMV. It's not going to magically fix low-performing I/O subsystems and it\nwon't help many applications of PG, but there are a few outlying instances\nwhere this change can help a little bit. \n\n\nI am sure someone will step in and tell you it is a bad idea - AND they will\nprobably have perfectly valid reasons for why it is, so you will need to\nconsider the ramifications. If at all possible, test and tune to see. \n..: Mark\n\n\n\n>>-- \n>>Robert Haas\n>>EnterpriseDB: http://www.enterprisedb.com\n>>The Enterprise Postgres Company\n\n-- \n>>Sent via pgsql-performance mailing list ([email protected])\n>>To make changes to your subscription:\n>>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 22 Sep 2010 18:51:23 -0600", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Between" } ]