[ { "msg_contents": "I'm about to buy a combined web- and database server. When (if) the site \ngets sufficiently popular, we will split the database out to a separate \nserver.\n\nOur budget is limited, so how should we prioritize?\n\n* We think about buying some HP Proliant server with at least 4GB ram \nand at least a duo core processor. Possibly quad core. The OS will be \ndebian/Linux.\n\n* Much of the database will fit in RAM so it is not *that* necessary to \nprefer the more expensive SAS 10000 RPM drives to the cheaper 7500 RPM \nSATA drives, is it? There will both be many read- and write queries and \na *lot* (!) of random reads.\n\n* I think we will go for hardware-based RAID 1 with a good \nbattery-backed-up controller. I have read that software RAID perform \nsurprisingly good, but for a production site where hotplug replacement \nof dead disks is required, is software RAID still worth it?\n\nAnything else we should be aware of?\n\nThanks!\n", "msg_date": "Thu, 28 Aug 2008 21:22:38 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Best hardware/cost tradoff?" }, { "msg_contents": "On Thu, Aug 28, 2008 at 1:22 PM, cluster <[email protected]> wrote:\n> I'm about to buy a combined web- and database server. When (if) the site\n> gets sufficiently popular, we will split the database out to a separate\n> server.\n>\n> Our budget is limited, so how should we prioritize?\n\nStandard prioritization for a db server is: Disks and controller, RAM, CPU.\n\n> * We think about buying some HP Proliant server with at least 4GB ram and at\n> least a duo core processor. Possibly quad core. The OS will be debian/Linux.\n\nHP Makes nice equipment. Also, since this machine will have apache as\nwell as pgsql running on it, you might want to look at more memory if\nit's reasonably priced. If pg and apache are using 1.5Gig total to\nrun, you've got 2.5Gig for the OS to cache in. With 8 Gig of ram,\nyou'd have 6.5Gig to cache in. Also, the cost of a quad core nowadays\nis pretty reasonable.\n\n> * Much of the database will fit in RAM so it is not *that* necessary to\n> prefer the more expensive SAS 10000 RPM drives to the cheaper 7500 RPM SATA\n> drives, is it?\n\nThat depends. Writes will still have to hit the drives. Reads will\nbe mostly from memory. Be sure to set your effective_cache_size\nappropriately.\n\n> There will both be many read- and write queries and a *lot*\n> (!) of random reads.\n>\n> * I think we will go for hardware-based RAID 1 with a good battery-backed-up\n> controller.\n\nThe HP RAID controller that's been mentioned on the list seems like a\ngood performer.\n\n> I have read that software RAID perform surprisingly good, but\n> for a production site where hotplug replacement of dead disks is required,\n> is software RAID still worth it?\n\nThe answre is maybe. The reason people keep testing software RAID is\nthat a lot of cheap (not necessarily in cost, just in design)\ncontrollers give mediocre performance compared to SW RAID.\n\nWith SW RAID on top of a caching controller in jbod mode, the\ncontroller simply becomes a cache that can survive power loss, and\ndoesn't have to do any RAID calculations any more. With today's very\nfast CPUs, and often running RAID-10 for dbs, which requires little\nreal overhead, it's not uncommon for SW RAID to outrun HW.\n\nWith better controllers, the advantage is small to none.\n\n> Anything else we should be aware of?\n\nCan you go with 4 drives? 
Even if they're just SATA drives, you'd be\namazed at what going from a 2 drive mirror to a 4 drive RAID-10 can do\nfor your performance. Note you'll have no more storage going from 2\ndrive mirror to 4 drive RAID-10, but your aggregate bandwidth on reads\nwill be doubled.\n", "msg_date": "Thu, 28 Aug 2008 13:46:01 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: [email protected] \n> [mailto:[email protected]] En nombre de cluster\n> \n> I'm about to buy a combined web- and database server. When \n> (if) the site gets sufficiently popular, we will split the \n> database out to a separate server.\n> \n> Our budget is limited, so how should we prioritize?\n> \n> * We think about buying some HP Proliant server with at least \n> 4GB ram and at least a duo core processor. Possibly quad \n> core. The OS will be debian/Linux.\n> \n> * Much of the database will fit in RAM so it is not *that* \n> necessary to prefer the more expensive SAS 10000 RPM drives \n> to the cheaper 7500 RPM SATA drives, is it? There will both \n> be many read- and write queries and a *lot* (!) of random reads.\n> \n> * I think we will go for hardware-based RAID 1 with a good \n> battery-backed-up controller. I have read that software RAID \n> perform surprisingly good, but for a production site where \n> hotplug replacement of dead disks is required, is software \n> RAID still worth it?\n> \n> Anything else we should be aware of?\n> \n\nI havent had any issues with software raid (mdadm) and hot-swaps. It keeps\nworking in degraded mode and as soon as you replace the defective disk it\ncan reconstruct the array on the fly. Performance will suffer while at it\nbut the service keeps up.\nThe battery backup makes a very strong point for a hw controller. Still, I\nhave heard good things on combining a HW controller with JBODS leaving the\nRAID affair to mdadm. In your scenario though with \"*lots* of random reads\",\nif I had to choose between a HW controller & 2 disks or software RAID with 4\nor 6 disks, I would go for the disks. There are motherboards with 6 SATA\nports. For the money you will save on the controller you can afford 6 disks\nin a RAID 10 setup. \n\nCheers,\nFernando.\n\n", "msg_date": "Thu, 28 Aug 2008 17:04:46 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": ">> -----Mensaje original-----\n>> De: [email protected] \n>> * I think we will go for hardware-based RAID 1 with a good \n>> battery-backed-up controller. I have read that software RAID \n>> perform surprisingly good, but for a production site where \n>> hotplug replacement of dead disks is required, is software \n>> RAID still worth it?\n>> ... \n> I havent had any issues with software raid (mdadm) and hot-swaps. It keeps\n> working in degraded mode and as soon as you replace the defective disk it\n> can reconstruct the array on the fly. Performance will suffer while at it\n> but the service keeps up.\n> The battery backup makes a very strong point for a hw controller. Still, I\n> have heard good things on combining a HW controller with JBODS leaving the\n> RAID affair to mdadm. In your scenario though with \"*lots* of random reads\",\n> if I had to choose between a HW controller & 2 disks or software RAID with 4\n> or 6 disks, I would go for the disks. There are motherboards with 6 SATA\n> ports. 
For the money you will save on the controller you can afford 6 disks\n> in a RAID 10 setup.\n\nThis is good advice. Hot-swapping seems cool, but how often will you actually use it? Maybe once every year? With Software RAID, replacing a disk means shutdown, swap the hardware, and reboot, which is usually less than ten minutes, and you're back in business. If that's the only thing that happens, you'll have 99.97% uptime on your server.\n\nIf you're on a limited budget, a software RAID 1+0 will be very cost effective and give good performance for lots of random reads. Hardware RAID with a battery-backed cache helps with writes and hot swapping. If your random-read performance needs outweigh these two factors, consider software RAID.\n\nCraig\n\n", "msg_date": "Thu, 28 Aug 2008 13:25:09 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": "Thanks for all your replies! They are enlightening. I have some \nadditional questions:\n\n1) Would you prefer\n a) 5.4k 2\" SATA RAID10 on four disks or\n b) 10k 2\" SAS RAID1 on two disks?\n(Remember the lots (!) of random reads)\n\n2) Should I just make one large partition of my RAID? Does it matter at all?\n\n3) Will I gain much by putting the OS on a saparate disk, not included \nin the RAID? (The webserver and database would still share the RAID - \nbut I guess the OS will cache my (small) web content in RAM anyway).\n\nThanks again!\n", "msg_date": "Thu, 28 Aug 2008 23:29:09 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": "On Thu, Aug 28, 2008 at 2:04 PM, Fernando Hevia <[email protected]> wrote:\n>\n> I havent had any issues with software raid (mdadm) and hot-swaps. It keeps\n> working in degraded mode and as soon as you replace the defective disk it\n> can reconstruct the array on the fly. Performance will suffer while at it\n> but the service keeps up.\n\nI too put my vote behind mdadm for ease of use. However, there are\nreports that certain levels of RAID in linux kernel RAID that are\nsupposed to NOT handle write barriers properly. So that's what\nworries me.\n\n> The battery backup makes a very strong point for a hw controller. Still, I\n> have heard good things on combining a HW controller with JBODS leaving the\n> RAID affair to mdadm. In your scenario though with \"*lots* of random reads\",\n\nThis is especially true on slower RAID controllers. A lot of RAID\ncontrollers in the $300 range with battery backed caching don't do\nRAID real well, but do caching ok. If you can't afford a $1200 RAID\ncard then this might be a good option.\n", "msg_date": "Thu, 28 Aug 2008 15:47:37 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": "On Thu, Aug 28, 2008 at 3:29 PM, cluster <[email protected]> wrote:\n> Thanks for all your replies! They are enlightening. I have some additional\n> questions:\n>\n> 1) Would you prefer\n> a) 5.4k 2\" SATA RAID10 on four disks or\n> b) 10k 2\" SAS RAID1 on two disks?\n> (Remember the lots (!) of random reads)\n\nI'd lean towards 4 disks in RAID-10. Better performance when > 1 read\nis going on. Similar commit rates to the two 10k drives. Probably\nbigger drives too, right? Always nice to have room to work in.\n\n> 2) Should I just make one large partition of my RAID? Does it matter at all?\n\nProbably. 
With more disks it might be advantageous to split out two\ndrives into RAID-10 for pg_xlog. with 2 or 4 disks, splitting off two\nfor pg_xlog might slow down the data partition more than you gain from\na separate pg_xlog drive set.\n\n> 3) Will I gain much by putting the OS on a saparate disk, not included in\n> the RAID? (The webserver and database would still share the RAID - but I\n> guess the OS will cache my (small) web content in RAM anyway).\n\nThe real reason you want your OS on a different set of drives is that\nit allows you to reconfigure your underlying RAID array as needed\nwithout having to reinstall the whole OS again. Yeah, logging to\n/var/log will eat some bandwidth on your RAID as well, but the ease of\nmaintenance is why I do it as much as anything. A lot of large\nservers support 2 fixed drives for the OS and a lot of removeable\ndrives hooked up to a RAID controller for this reason.\n", "msg_date": "Thu, 28 Aug 2008 16:03:24 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": "We are now leaning towards just buying 4 SAS disks.\n\nSo should I just make one large RAID-10 partition or make two RAID-1's \nhaving the log on one RAID and everything else on the second RAID?\nHow can I get the best read/write performance out of these four disks?\n(Remember, that it is a combined web-/database server).\n", "msg_date": "Sat, 30 Aug 2008 12:21:29 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best hardware/cost tradoff?" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: [email protected] \n> [mailto:[email protected]] En nombre de cluster\n> Enviado el: Sábado, 30 de Agosto de 2008 07:21\n> Para: [email protected]\n> Asunto: Re: [PERFORM] Best hardware/cost tradoff?\n> \n> We are now leaning towards just buying 4 SAS disks.\n> \n> So should I just make one large RAID-10 partition or make two \n> RAID-1's having the log on one RAID and everything else on \n> the second RAID?\n> How can I get the best read/write performance out of these four disks?\n> (Remember, that it is a combined web-/database server).\n> \n\nMake a single RAID 10. It´s simpler and it will provide you better write\nperformance which is where your bottleneck will be. I think you should\nminimize the web server role in this equation as it should mostly work on\ncached data.\n\n", "msg_date": "Mon, 1 Sep 2008 10:24:46 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best hardware/cost tradoff?" } ]
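A minimal sketch of the memory arithmetic discussed earlier in this thread, expressed as the one postgresql.conf line it affects. The figures simply follow the example given above (roughly 1.5 GB consumed by apache plus postgres); they are illustrative assumptions, not measured recommendations for this particular box:

    # ~4 GB machine, ~1.5 GB used by apache + postgres => ~2.5 GB left for the OS cache
    effective_cache_size = 2500MB

    # ~8 GB machine, same workload => ~6.5 GB left for the OS cache
    effective_cache_size = 6500MB

effective_cache_size does not allocate anything; it only tells the planner how much of the database it can expect to find cached, so it should track whatever RAM is actually left over after the web server and postgres processes take their share.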
[ { "msg_contents": "Good morning,\n\nTried to compare Table1 based on Table2\n\n. update table1.col = false\n if table1.pk_cols not in table2.pk_cols\n\n\n\nFor the following two ways, (2) always performs better than (1) right,\nand I need your inputs.\n========================================================================\n(1) update table1\n set col = false\n where table1.pk_co1 || table1.pk_col2.... || table1.pk_colN\n\n NOT IN\n\n (select pk_co1 || pk_col2.... || pk_colN\n from table2\n )\n\n(2) ResultSet(rs) =\n select pk_col1||pk_col2... || pk_colN\n from table1\n left join table2 using (pk_col1..., pk_colN)\n where table2.pk_col1 is null\n\n Then for each rs record, do:\n update table1\n set col = false\n where col1||... colN in rs.value\n\nThanks a lot!\n", "msg_date": "Thu, 28 Aug 2008 15:31:04 -0400", "msg_from": "Emi Lu <[email protected]>", "msg_from_op": true, "msg_subject": "update - which way quicker?" }, { "msg_contents": "\nOn 2008-08-28, at 21:31, Emi Lu wrote:\n\n> Good morning,\n>\n> Tried to compare Table1 based on Table2\n>\n> . update table1.col = false\n> if table1.pk_cols not in table2.pk_cols\n>\n>\n>\n> For the following two ways, (2) always performs better than (1) right,\n> and I need your inputs.\n> ====================================================================== \n> ==\n> (1) update table1\n> set col = false\n> where table1.pk_co1 || table1.pk_col2.... || table1.pk_colN\n>\n> NOT IN\n>\n> (select pk_co1 || pk_col2.... || pk_colN\n> from table2\n> )\n>\n> (2) ResultSet(rs) =\n> select pk_col1||pk_col2... || pk_colN\n> from table1\n> left join table2 using (pk_col1..., pk_colN)\n> where table2.pk_col1 is null\n>\n> Then for each rs record, do:\n> update table1\n> set col = false\n> where col1||... colN in rs.value\n>\n> Thanks a lot!\n>\n> -- \n> Sent via pgsql-performance mailing list (pgsql- \n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\nCheck EXISTS\n\nhttp://www.postgresql.org/docs/8.3/interactive/functions-subquery.html\n\n\nSerdecznie pozdrawiam\n\nPawel Socha\[email protected]\n\nprogramista/administrator\n\nperl -le 's**02).4^&-%2,).^9%4^!./4(%2^3,!#+7!2%^53%2&**y%& -;^[%\"`- \n{ a%%s%%$_%ee'\n\n", "msg_date": "Thu, 28 Aug 2008 23:13:55 +0200", "msg_from": "paul socha <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update - which way quicker?" } ]
[ { "msg_contents": "Hi\n\nI am working on a table which stores up to 125K rows per second and I \nfind that the inserts are a little bit slow. The insert is in reality a \nCOPY of a chunk of rows, up to 125K. A COPY og 25K rows, without an \nindex, is fast enough, about 150ms. With the index, the insert takes \nabout 500ms. The read though, is lightning fast, because of the index. \nIt takes only 10ms to retrieve 1000 rows from a 15M row table. As the \ntable grows to several billion rows, that might change though.\n\nI would like the insert, with an index, to be a lot faster than 500ms, \npreferrably closer to 150ms. Any advice on what to do?\nAdditionally, I dont enough about pg configuring to be sure I have \nincluded all the important directives and given them proportional \nvalues, so any help on that as well would be appreciated.\n\nHere are the details:\n\npostgres 8.2.7 on latest kubuntu, running on dual Opteron quad cores, \nwith 8GB memory and 8 sata disks on a raid controller (no raid config)\n\ntable:\n\ncreate table v1\n(\n id_s\t integer,\n id_f\t\tinteger,\n id_st \t\tinteger,\n id_t\t integer,\n value1 real,\n value2 real,\n value3 real,\n value4 real,\n value5 real,\n\t...\n value20 real\n);\n\ncreate index idx_v1 on v1 (id_s, id_st, id_t);\n\n- insert is a COPY into the 5-8 first columns. the rest are unused so\n far.\n\npostgres config:\n\nautovacuum = off\ncheckpoint_segments = 96\ncommit_delay = 5\neffective_cache_size = 128000\nfsync = on\nmax_fsm_pages = 208000\nmax_fsm_relations = 10000\nmax_connections = 20\nshared_buffers = 128000\nwal_sync_method = fdatasync\nwal_buffers = 256\nwork_mem = 512000\nmaintenance_work_mem = 2000000\n", "msg_date": "Sun, 31 Aug 2008 15:32:21 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "slow update of index during insert/copy" }, { "msg_contents": "You may want to investigate pg_bulkload.\n\nhttp://pgbulkload.projects.postgresql.org/\n\nOne major enhancement over COPY is that it does an index merge, rather than\nmodify the index one row at a time.\nhttp://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf\n\n\n\nOn Sun, Aug 31, 2008 at 6:32 AM, Thomas Finneid <\[email protected]> wrote:\n\n> Hi\n>\n> I am working on a table which stores up to 125K rows per second and I find\n> that the inserts are a little bit slow. The insert is in reality a COPY of a\n> chunk of rows, up to 125K. A COPY og 25K rows, without an index, is fast\n> enough, about 150ms. With the index, the insert takes about 500ms. The read\n> though, is lightning fast, because of the index. It takes only 10ms to\n> retrieve 1000 rows from a 15M row table. As the table grows to several\n> billion rows, that might change though.\n>\n> I would like the insert, with an index, to be a lot faster than 500ms,\n> preferrably closer to 150ms. 
Any advice on what to do?\n> Additionally, I dont enough about pg configuring to be sure I have included\n> all the important directives and given them proportional values, so any help\n> on that as well would be appreciated.\n>\n> Here are the details:\n>\n> postgres 8.2.7 on latest kubuntu, running on dual Opteron quad cores, with\n> 8GB memory and 8 sata disks on a raid controller (no raid config)\n>\n> table:\n>\n> create table v1\n> (\n> id_s integer,\n> id_f integer,\n> id_st integer,\n> id_t integer,\n> value1 real,\n> value2 real,\n> value3 real,\n> value4 real,\n> value5 real,\n> ...\n> value20 real\n> );\n>\n> create index idx_v1 on v1 (id_s, id_st, id_t);\n>\n> - insert is a COPY into the 5-8 first columns. the rest are unused so\n> far.\n>\n> postgres config:\n>\n> autovacuum = off\n> checkpoint_segments = 96\n> commit_delay = 5\n> effective_cache_size = 128000\n> fsync = on\n> max_fsm_pages = 208000\n> max_fsm_relations = 10000\n> max_connections = 20\n> shared_buffers = 128000\n> wal_sync_method = fdatasync\n> wal_buffers = 256\n> work_mem = 512000\n> maintenance_work_mem = 2000000\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYou may want to investigate pg_bulkload.http://pgbulkload.projects.postgresql.org/One major enhancement over COPY is that it does an index merge, rather than modify the index one row at a time.  \nhttp://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdfOn Sun, Aug 31, 2008 at 6:32 AM, Thomas Finneid <[email protected]> wrote:\nHi\n\nI am working on a table which stores up to 125K rows per second and I find that the inserts are a little bit slow. The insert is in reality a COPY of a chunk of rows, up to 125K. A COPY og 25K rows, without an index, is fast enough, about 150ms. With the index, the insert takes about 500ms. The read though, is lightning fast, because of the index. It takes only 10ms to retrieve 1000 rows from a 15M row table. As the table grows to several billion rows, that might change though.\n\nI would like the insert, with an index, to be a lot faster than 500ms, preferrably closer to 150ms. Any advice on what to do?\nAdditionally, I dont enough about pg configuring to be sure I have included all the important directives and given them proportional values, so any help on that as well would be appreciated.\n\nHere are the details:\n\npostgres 8.2.7 on latest kubuntu, running on dual Opteron quad cores, with 8GB memory and 8 sata disks on a raid controller (no raid config)\n\ntable:\n\ncreate table v1\n(\n        id_s            integer,\n        id_f            integer,\n        id_st           integer,\n        id_t            integer,\n        value1          real,\n        value2          real,\n        value3          real,\n        value4          real,\n        value5          real,\n        ...\n        value20         real\n);\n\ncreate index idx_v1 on v1 (id_s, id_st, id_t);\n\n- insert is a COPY into the 5-8 first columns. 
the rest are unused so\n  far.\n\npostgres config:\n\nautovacuum = off\ncheckpoint_segments = 96\ncommit_delay = 5\neffective_cache_size = 128000\nfsync = on\nmax_fsm_pages = 208000\nmax_fsm_relations = 10000\nmax_connections = 20\nshared_buffers = 128000\nwal_sync_method = fdatasync\nwal_buffers = 256\nwork_mem = 512000\nmaintenance_work_mem = 2000000\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 31 Aug 2008 11:38:15 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\nScott Carey wrote:\n> You may want to investigate pg_bulkload.\n> \n> http://pgbulkload.projects.postgresql.org/\n> \n> One major enhancement over COPY is that it does an index merge, rather \n> than modify the index one row at a time. \n\nThis is a command line tool, right? I need a jdbc driver tool, is that \npossible?\n\nregards\n\nthomas\n\n", "msg_date": "Sun, 31 Aug 2008 21:20:16 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "Thomas Finneid wrote:\n> Hi\n> \n> I am working on a table which stores up to 125K rows per second and I\n> find that the inserts are a little bit slow. The insert is in reality a\n> COPY of a chunk of rows, up to 125K. A COPY og 25K rows, without an\n> index, is fast enough, about 150ms. With the index, the insert takes\n> about 500ms. The read though, is lightning fast, because of the index.\n> It takes only 10ms to retrieve 1000 rows from a 15M row table. As the\n> table grows to several billion rows, that might change though.\n> \n> I would like the insert, with an index, to be a lot faster than 500ms,\n> preferrably closer to 150ms. Any advice on what to do?\n> Additionally, I dont enough about pg configuring to be sure I have\n> included all the important directives and given them proportional\n> values, so any help on that as well would be appreciated.\n> \n> Here are the details:\n> \n> postgres 8.2.7 on latest kubuntu, running on dual Opteron quad cores,\n> with 8GB memory and 8 sata disks on a raid controller (no raid config)\n\nJust on a side note, your system is pretty strangely heavy on CPU\ncompared to its RAM and disk configuration. Unless your workload in Pg\nis computationally intensive or you have something else hosted on the\nsame machine, those CPUs will probably sit mostly idle.\n\nThe first thing you need to do is determine where, during your bulk\nloads, the system is bottlenecked. I'd guess it's stuck waiting for disk\nwrites, personally, but I'd want to investigate anyway.\n\nIf you're not satisfied with the results from pg_bulkload you can look\ninto doing things like moving your indexes to separate tablespaces (so\nthey don't fight for I/O on the same disk sets as your tables),\nseparating your bulk load tables from other online/transactional tables,\netc.\n\nAlso, to relay common advice from this list:\n\nIf you land up considering hardware as a performance answer, getting a\ndecent SAS RAID controller with a battery backed cache (so you can\nenable its write cache) and a set of fast SAS disks might be worth it.\nFor that matter, a good SATA RAID controller and some 10kRPM SATA disks\ncould help too. 
It all appears to depend a lot on the particular\nworkload and the characteristics of the controller, though.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 01 Sep 2008 09:49:33 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "Are you even getting COPY to work with JDBC? As far as I am aware, COPY\ndoesn't work with JDBC at the moment:\nhttp://jdbc.postgresql.org/todo.html Listed in the todo page, under \"PG\nExtensions\" is \"Add support for COPY.\" I tried to use it with JDBC a\nwhile ago and gave up after a couple limited experiments and reading that --\nbut didn't dig very deep into it.\n\nAs suggested, you should determine if you are disk bound or CPU bound. My\nexperience with COPY is that it is suprisingly easy to make it CPU bound,\nbut the conditions for that can vary quire a bit from schema to schema and\nhardware to hardware.\n\npg_bulkload may not be the tool for you for many reasons -- it requires a\nrigid data format and control file, very similar to Oracle's sqlloader. It\nmay not fit your needs at all -- its just worth a look to see if it does\nsince if there's a match, it will be much faster.\n\nOn Sun, Aug 31, 2008 at 6:49 PM, Craig Ringer\n<[email protected]>wrote:\n\n> Thomas Finneid wrote:\n> > Hi\n> >\n> > I am working on a table which stores up to 125K rows per second and I\n> > find that the inserts are a little bit slow. The insert is in reality a\n> > COPY of a chunk of rows, up to 125K. A COPY og 25K rows, without an\n> > index, is fast enough, about 150ms. With the index, the insert takes\n> > about 500ms. The read though, is lightning fast, because of the index.\n> > It takes only 10ms to retrieve 1000 rows from a 15M row table. As the\n> > table grows to several billion rows, that might change though.\n> >\n> > I would like the insert, with an index, to be a lot faster than 500ms,\n> > preferrably closer to 150ms. Any advice on what to do?\n> > Additionally, I dont enough about pg configuring to be sure I have\n> > included all the important directives and given them proportional\n> > values, so any help on that as well would be appreciated.\n> >\n> > Here are the details:\n> >\n> > postgres 8.2.7 on latest kubuntu, running on dual Opteron quad cores,\n> > with 8GB memory and 8 sata disks on a raid controller (no raid config)\n>\n> Just on a side note, your system is pretty strangely heavy on CPU\n> compared to its RAM and disk configuration. Unless your workload in Pg\n> is computationally intensive or you have something else hosted on the\n> same machine, those CPUs will probably sit mostly idle.\n>\n> The first thing you need to do is determine where, during your bulk\n> loads, the system is bottlenecked. 
I'd guess it's stuck waiting for disk\n> writes, personally, but I'd want to investigate anyway.\n>\n> If you're not satisfied with the results from pg_bulkload you can look\n> into doing things like moving your indexes to separate tablespaces (so\n> they don't fight for I/O on the same disk sets as your tables),\n> separating your bulk load tables from other online/transactional tables,\n> etc.\n>\n> Also, to relay common advice from this list:\n>\n> If you land up considering hardware as a performance answer, getting a\n> decent SAS RAID controller with a battery backed cache (so you can\n> enable its write cache) and a set of fast SAS disks might be worth it.\n> For that matter, a good SATA RAID controller and some 10kRPM SATA disks\n> could help too. It all appears to depend a lot on the particular\n> workload and the characteristics of the controller, though.\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAre you even getting COPY to work with JDBC?  As far as I am aware, COPY doesn't work with JDBC at the moment:http://jdbc.postgresql.org/todo.html   Listed in the todo page, under \"PG Extensions\"   is \"Add support for COPY.\"  I tried to use it with JDBC a while ago and gave up after a couple limited experiments and reading that -- but didn't dig very deep into it.\nAs suggested, you should determine if you are disk bound or CPU bound.  My experience with COPY is that it is suprisingly easy to make it CPU bound, but the conditions for that can vary quire a bit from schema to schema and hardware to hardware.\n pg_bulkload may not be the tool for you for many reasons -- it requires a rigid data format and control file, very similar to Oracle's sqlloader.  It may not fit your needs at all -- its just worth a look to see if it does since if there's a match, it will be much faster.  \nOn Sun, Aug 31, 2008 at 6:49 PM, Craig Ringer <[email protected]> wrote:\nThomas Finneid wrote:\n> Hi\n>\n> I am working on a table which stores up to 125K rows per second and I\n> find that the inserts are a little bit slow. The insert is in reality a\n> COPY of a chunk of rows, up to 125K. A COPY og 25K rows, without an\n> index, is fast enough, about 150ms. With the index, the insert takes\n> about 500ms. The read though, is lightning fast, because of the index.\n> It takes only 10ms to retrieve 1000 rows from a 15M row table. As the\n> table grows to several billion rows, that might change though.\n>\n> I would like the insert, with an index, to be a lot faster than 500ms,\n> preferrably closer to 150ms. Any advice on what to do?\n> Additionally, I dont enough about pg configuring to be sure I have\n> included all the important directives and given them proportional\n> values, so any help on that as well would be appreciated.\n>\n> Here are the details:\n>\n> postgres 8.2.7 on latest kubuntu, running on dual Opteron quad cores,\n> with 8GB memory and 8 sata disks on a raid controller (no raid config)\n\nJust on a side note, your system is pretty strangely heavy on CPU\ncompared to its RAM and disk configuration. Unless your workload in Pg\nis computationally intensive or you have something else hosted on the\nsame machine, those CPUs will probably sit mostly idle.\n\nThe first thing you need to do is determine where, during your bulk\nloads, the system is bottlenecked. 
I'd guess it's stuck waiting for disk\nwrites, personally, but I'd want to investigate anyway.\n\nIf you're not satisfied with the results from pg_bulkload you can look\ninto doing things like moving your indexes to separate tablespaces (so\nthey don't fight for I/O on the same disk sets as your tables),\nseparating your bulk load tables from other online/transactional tables,\netc.\n\nAlso, to relay common advice from this list:\n\nIf you land up considering hardware as a performance answer, getting a\ndecent SAS RAID controller with a battery backed cache (so you can\nenable its write cache) and a set of fast SAS disks might be worth it.\nFor that matter, a good SATA RAID controller and some 10kRPM SATA disks\ncould help too. It all appears to depend a lot on the particular\nworkload and the characteristics of the controller, though.\n\n--\nCraig Ringer\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 1 Sep 2008 01:08:48 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\nScott Carey wrote:\n> Are you even getting COPY to work with JDBC? As far as I am aware, COPY \n> doesn't work with JDBC at the moment:\n\nI used a patched jdbc driver, provided by someone on the list, dont have \nthe reference at hand. It works perfectly and its about 5 times faster, \nfor my job, than insert.\n\n> As suggested, you should determine if you are disk bound or CPU bound. \n> My experience with COPY is that it is suprisingly easy to make it CPU \n> bound, but the conditions for that can vary quire a bit from schema to \n> schema and hardware to hardware.\n\nCOPY is not the problem, as far as I see. The problem is the update \nspeed of the index. I tested the same procedure on a table with and \nwithout an index. Having an index makes it 200-250% slower, than without.\n\nBut as you state I should check whether the problem is cpu or disk \nbound. In addition, as someone else suggested, I might need to move the \nindexes to a different disk, which is not a bad idea considering the \nindex becomes quite large with up 125K rows a second.\n\nBut I haver another consern, which is the db server configuration. I am \nnot entirely convinced the db is configured prperly. I had one problem \nwhere the disk started thrashing after the table had reached a certainb \nsize, so when I configured shmmax, and the corresponding in pg, properly \nI got rid of the trashing. I will have to read through the documentation \n properly.\n\nregards\n\nthomas\n", "msg_date": "Mon, 01 Sep 2008 12:16:23 +0100", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\nCraig Ringer wrote:\n> Just on a side note, your system is pretty strangely heavy on CPU\n> compared to its RAM and disk configuration. Unless your workload in Pg\n> is computationally intensive or you have something else hosted on the\n> same machine, those CPUs will probably sit mostly idle.\n\nIts a devel machine for experimenting with pg and the disk performance \nand for experimenting with multithreaded java programs. Its not going to \nbe particularily demanding on memory, but 8GB is good enough, I think.\n\n> The first thing you need to do is determine where, during your bulk\n> loads, the system is bottlenecked. 
I'd guess it's stuck waiting for disk\n> writes, personally, but I'd want to investigate anyway.\n\nWill investigate.\n\n> If you're not satisfied with the results from pg_bulkload you can look\n> into doing things like moving your indexes to separate tablespaces (so\n> they don't fight for I/O on the same disk sets as your tables),\n> separating your bulk load tables from other online/transactional tables,\n> etc.\n\n(Btw, its jdbc copy, not commandline.)\nI dont think its the bulkload thats the problem, in it self, because \nloading it without an index is quite fast (and 5 times faster than \nordinary insert). But of course, the bulkload process affects other \nparts of the system which can cause a bottleneck.\n\n> Also, to relay common advice from this list:\n> \n> If you land up considering hardware as a performance answer, getting a\n> decent SAS RAID controller with a battery backed cache (so you can\n> enable its write cache) and a set of fast SAS disks might be worth it.\n> For that matter, a good SATA RAID controller and some 10kRPM SATA disks\n> could help too. It all appears to depend a lot on the particular\n> workload and the characteristics of the controller, though.\n\nIt does have a sata raid controller, but not have the battery pack, \nbecause its a develmachine and not a production machine, I thought it \nwas not needed. But if you are saying the battery pack enables a cache \nwhich enables faster disk writes I will consider it.\nIts the first time I have worked with a raid controller, so I suspect I \nhave to read up on the features to understand how to utilise it best.\n\nregards\n\nthomas\n\n\n", "msg_date": "Mon, 01 Sep 2008 12:29:50 +0100", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Mon, Sep 1, 2008 at 5:29 AM, Thomas Finneid\n<[email protected]> wrote:\n\n> It does have a sata raid controller, but not have the battery pack, because\n> its a develmachine and not a production machine, I thought it was not\n> needed. But if you are saying the battery pack enables a cache which enables\n> faster disk writes I will consider it.\n> Its the first time I have worked with a raid controller, so I suspect I have\n> to read up on the features to understand how to utilise it best.\n\nThe model of the controller will have a large impact on performance as\nwell. The latest fastest RAID controllers have dual core 1.2GHz CPUs\non them, where some slower ones still in produciton are using 333MHz\nsingle core CPUs. The quality of the firmware, the linux driver (or\nwindows, or bsd) all have a large impact on the performance of a raid\ncontroller.\n\nDefinitely look into the battery backing unit. 
But if it's a $250\ncard, then it might not be enough.\n", "msg_date": "Mon, 1 Sep 2008 11:51:30 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Raid Controllers and Dev machines:\n\nFor a dev machine the battery backup is NOT needed.\n\nBattery back up makes a _production_ system faster: In production, data\nintegrity is everything, and write-back caching is dangerous without a\nbattery back up.\n\nSo:\nWithout BBU: Write-through cache = data safe in power failure; Write back\ncache = not safe in power failure.\nWith BBU : Both modes are safe on power loss.\n\nWrite-back is a lot faster for the WAL log in particular.\n\nFor a development box, just enable write-back caching regardless of the\nbattery back up situation. As long as its not your only copy of critical\ndata you just want to improve performance for the dev box. Just make sure\nwhatever data on that array can be replaced without condern if you're in the\nmiddle of writing to that data when power fails.\n-----------\n\nOn JDBC and COPY:\nThanks for the info on the patch to support it -- however the versions\nposted there are rather old, and the 8.3 version is not even the same as the\n8 month old current release -- its 3 releases prior and 8 months older than\nthat. There are quite a few bugfixes between 8.3 - v600 and v603:\nhttp://jdbc.postgresql.org/changes.html and that concerns me. Is there a\npatched version of the latest driver? Or will that have to be undertaken by\nthe user -- I worry about a conflict due to one of the changes since v600\nlisted.\n\nOn the performance impact of using COPY instead of INSERT : out of\ncuriosity, were you comparing COPY against raw row-by-row inserts (slow) or\nJDBC batch inserts (faster) or multi-row inserts: INSERT into X (a,b,c)\nvalues (1,2,3) , (4,5,6) , (7,8,9 ) , (10,11,12) ....\n?\nCopy should be faster than all of these, but I would not expect 5x faster\nfor the latter two.\n\n\nOn Mon, Sep 1, 2008 at 10:51 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Sep 1, 2008 at 5:29 AM, Thomas Finneid\n> <[email protected]> wrote:\n>\n> > It does have a sata raid controller, but not have the battery pack,\n> because\n> > its a develmachine and not a production machine, I thought it was not\n> > needed. But if you are saying the battery pack enables a cache which\n> enables\n> > faster disk writes I will consider it.\n> > Its the first time I have worked with a raid controller, so I suspect I\n> have\n> > to read up on the features to understand how to utilise it best.\n>\n> The model of the controller will have a large impact on performance as\n> well. The latest fastest RAID controllers have dual core 1.2GHz CPUs\n> on them, where some slower ones still in produciton are using 333MHz\n> single core CPUs. The quality of the firmware, the linux driver (or\n> windows, or bsd) all have a large impact on the performance of a raid\n> controller.\n>\n> Definitely look into the battery backing unit. 
But if it's a $250\n> card, then it might not be enough.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Raid Controllers and Dev machines:For a dev machine the battery backup is NOT needed.Battery back up makes a _production_ system faster:  In production, data integrity is everything, and write-back caching is dangerous without a battery back up.\nSo:Without BBU:   Write-through cache = data safe in power failure; Write back cache = not safe in power failure.With BBU :   Both modes are safe on power loss.Write-back is a lot faster for the WAL log in particular.\nFor a development box, just enable write-back caching regardless of the battery back up situation.  As long as its not your only copy of critical data you just want to improve performance for the dev box.  Just make sure whatever data on that array can be replaced without condern if you're in the middle of writing to that data when power fails.\n-----------On JDBC and COPY:Thanks for the info on the patch to support it -- however the versions posted there are rather old, and the 8.3 version is not even the same as the 8 month old current release -- its 3 releases prior and 8 months older than that.  There are quite a few bugfixes between 8.3 - v600 and v603: http://jdbc.postgresql.org/changes.html  and that concerns me.  Is there a patched version of the latest driver?  Or will that have to be undertaken by the user -- I worry about a conflict due to one of the changes since v600 listed.\nOn the performance impact of using COPY instead of INSERT :  out of curiosity, were you comparing COPY against raw row-by-row inserts (slow) or JDBC batch inserts (faster) or multi-row inserts: INSERT into X (a,b,c) values (1,2,3) , (4,5,6) , (7,8,9 ) , (10,11,12)  ....  \n?Copy should be faster than all of these, but I would not expect 5x faster for the latter two.On Mon, Sep 1, 2008 at 10:51 AM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Sep 1, 2008 at 5:29 AM, Thomas Finneid\n<[email protected]> wrote:\n\n> It does have a sata raid controller, but not have the battery pack, because\n> its a develmachine and not a production machine, I thought it was not\n> needed. But if you are saying the battery pack enables a cache which enables\n> faster disk writes I will consider it.\n> Its the first time I have worked with a raid controller, so I suspect I have\n> to read up on the features to understand how to utilise it best.\n\nThe model of the controller will have a large impact on performance as\nwell.  The latest fastest RAID controllers have dual core 1.2GHz CPUs\non them, where some slower ones still in produciton are using 333MHz\nsingle core CPUs.  The quality of the firmware, the linux driver (or\nwindows, or bsd) all have a large impact on the performance of a raid\ncontroller.\n\nDefinitely look into the battery backing unit.  
But if it's a $250\ncard, then it might not be enough.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 1 Sep 2008 11:46:59 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\"Scott Carey\" <[email protected]> writes:\n\n> On Raid Controllers and Dev machines:\n>\n> For a dev machine the battery backup is NOT needed.\n>\n> Battery back up makes a _production_ system faster: In production, data\n> integrity is everything, and write-back caching is dangerous without a\n> battery back up.\n>\n> So:\n> Without BBU: Write-through cache = data safe in power failure; Write back\n> cache = not safe in power failure.\n> With BBU : Both modes are safe on power loss.\n\nThis could be read the wrong way. With a BBU it's not that you can run the\ndrives in write-back mode safely. It's that you can cache in the BBU safely.\nThe drives still need to have their write caches off (ie, in write-through\nmode).\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Mon, 01 Sep 2008 20:41:48 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\nScott Carey wrote:\n> For a development box, just enable write-back caching regardless of the \n> battery back up situation. As long as its not your only copy of \n\nWill have a look at it, the data is not important and can be reproduced \nany time on any machine. The controller I have is a Areca ARC-1220 \nSerial ATA 8 port RAID Controller - PCI-E, SATA II, so I dont know \nexactly what it supports of caching.\n\n\n> On JDBC and COPY:\n> Thanks for the info on the patch to support it -- however the versions \n> posted there are rather old, and the 8.3 version is not even the same as \n> the 8 month old current release -- its 3 releases prior and 8 months \n> older than that. There are quite a few bugfixes between 8.3 - v600 and \n> v603: http://jdbc.postgresql.org/changes.html and that concerns me. Is \n> there a patched version of the latest driver? Or will that have to be \n\nIt was someone on the list who told me about the patch, I dont know the \nsituation of the patch at the current moment. I am using the patch on an \n PG 8.2.7, and it works fine.\n\n> On the performance impact of using COPY instead of INSERT : out of \n> curiosity, were you comparing COPY against raw row-by-row inserts (slow) \n> or JDBC batch inserts (faster) or multi-row inserts: INSERT into X \n> (a,b,c) values (1,2,3) , (4,5,6) , (7,8,9 ) , (10,11,12) .... \n> ?\n\nI tested row by row and jdbc batch, but I dont have the measured numbers \nany more. But I suppose I could recreate the test if, need be.\n\nregards\nthomas\n\n\n", "msg_date": "Mon, 01 Sep 2008 22:32:56 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": ">\n>\n>\n> On the performance impact of using COPY instead of INSERT : out of\n>> curiosity, were you comparing COPY against raw row-by-row inserts (slow) or\n>> JDBC batch inserts (faster) or multi-row inserts: INSERT into X (a,b,c)\n>> values (1,2,3) , (4,5,6) , (7,8,9 ) , (10,11,12) .... 
?\n>>\n>\n> I tested row by row and jdbc batch, but I dont have the measured numbers\n> any more. But I suppose I could recreate the test if, need be.\n>\n> regards\n> thomas\n>\n>\n> Don't re-create it, I was just curious.\n\n\n\n\nOn the performance impact of using COPY instead of INSERT :  out of curiosity, were you comparing COPY against raw row-by-row inserts (slow) or JDBC batch inserts (faster) or multi-row inserts: INSERT into X (a,b,c) values (1,2,3) , (4,5,6) , (7,8,9 ) , (10,11,12)  .... ?\n\n\nI tested row by row and jdbc batch, but I dont have the measured numbers any more. But I suppose I could recreate the test if, need be.\n\nregards\nthomas\n\n\nDon't re-create it, I was just curious.", "msg_date": "Mon, 1 Sep 2008 13:40:50 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Mon, Sep 1, 2008 at 2:32 PM, Thomas Finneid\n<[email protected]> wrote:\n>\n> Scott Carey wrote:\n>>\n>> For a development box, just enable write-back caching regardless of the\n>> battery back up situation. As long as its not your only copy of\n>\n> Will have a look at it, the data is not important and can be reproduced any\n> time on any machine. The controller I have is a Areca ARC-1220 Serial ATA 8\n> port RAID Controller - PCI-E, SATA II, so I dont know exactly what it\n> supports of caching.\n\nIt's a pretty good card. It should support 1G of cache at least, and\ndefinitely supports battery backup. Have had a pair of 1680 Arecas in\nproduction for a month now and so far I'm very happy with the\nreliability and performance.\n\nThe other Scott is technically right about the lack of need for\nbattery back on a dev machine as long as you go in and change the\nchaching to write back, and I'm sure the card will give you a red\ndialog box saying this is a bad idea. Now, if it would take you a day\nof downtime to get a dev database back in place and running after a\npower loss, then the bbu may be worth the $200 or so.\n\nWhile I like to have a machine kitted out just like prod for testing,\nfor development I do tend to prefer slower machines, so that\ndevelopers might notice if they've just laid a big fat bloaty code\negg that's slower than it should be. Try to optimize for acceptable\nperformance on such server and you're far more likely to get good\nperformance behaviour in production.\n", "msg_date": "Mon, 1 Sep 2008 14:42:07 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Mon, Sep 1, 2008 at 2:42 PM, Scott Marlowe <[email protected]> wrote:\n\n> dialog box saying this is a bad idea. 
Now, if it would take you a day\n> of downtime to get a dev database back in place and running after a\n> power loss, then the bbu may be worth the $200 or so.\n\nI just wanted to comment that depending on how many people depend on\nthe development machine to get their job done the more easy it is to\njustify a battery.\n\nIf 20 people rely on a machine to do their job, just multiply their\nhourly cost to the company times your restore time for a figure that\nwill be several times higher than the cost of the battery.\n", "msg_date": "Mon, 1 Sep 2008 14:49:38 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Mon, Sep 1, 2008 at 12:41 PM, Gregory Stark <[email protected]>wrote:\n\n> \"Scott Carey\" <[email protected]> writes:\n>\n> > On Raid Controllers and Dev machines:\n> >\n> > For a dev machine the battery backup is NOT needed.\n> >\n> > Battery back up makes a _production_ system faster: In production, data\n> > integrity is everything, and write-back caching is dangerous without a\n> > battery back up.\n> >\n> > So:\n> > Without BBU: Write-through cache = data safe in power failure; Write\n> back\n> > cache = not safe in power failure.\n> > With BBU : Both modes are safe on power loss.\n>\n> This could be read the wrong way. With a BBU it's not that you can run the\n> drives in write-back mode safely. It's that you can cache in the BBU\n> safely.\n> The drives still need to have their write caches off (ie, in write-through\n> mode).\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL\n> training!\n>\n\nActually, the drive level write-cache does not have to be disabled, the\ncontroller just has to issue a drive write-cache-flush and use write\nbarriers appropriately. They are only a problem if the controller assumes\nthat data that it sent to the drive has gotten to the platters without\nchecking up on it or issuing a cache flush command to validate that things\nare on the platter. The controller, if its any good, should handle this\ndownstream configuration or document that it does not. What is appropriate\nwill vary, see documentation.\nDrive write caches are 100% safe when used appropriately. This is true with\nor without RAID, but in the case of a non-RAID or software raid setup the\nfile system and OS have to do the right thing. It true that many\ncombinations of file system + OS (Linux LVM, for just one example) don't\nnecessarily do the right thing, and some RAID controllers may also behave\nbadly. The safe thing is to turn off drive write back caches if in doubt,\nand the performance degradation caused by disabling it will be less for a\ngood hardware RAID card with a large cache than in other cases.\n\nLikewise, the safe thing is not to bother with write-back cache on the raid\ncontroller as well -- it protects against power failure but NOT various\nhardware failures or raid card death. I've seen the latter, where upon\npower loss and restore, the raid card was broken, and thus it could not\nflush the data it had in RAM (assuming it was still there) to disk.\nLuckily, after getting another card and loading up the db, there was no\ncorruption and we went on our way. Never, ever assume that your raid array\n+ BBU are fail-safe. 
All that stuff makes failure a lot less likely, but\nnot 0.\n\nOn Mon, Sep 1, 2008 at 12:41 PM, Gregory Stark <[email protected]> wrote:\n\"Scott Carey\" <[email protected]> writes:\n\n> On Raid Controllers and Dev machines:\n>\n> For a dev machine the battery backup is NOT needed.\n>\n> Battery back up makes a _production_ system faster:  In production, data\n> integrity is everything, and write-back caching is dangerous without a\n> battery back up.\n>\n> So:\n> Without BBU:   Write-through cache = data safe in power failure; Write back\n> cache = not safe in power failure.\n> With BBU :   Both modes are safe on power loss.\n\nThis could be read the wrong way. With a BBU it's not that you can run the\ndrives in write-back mode safely. It's that you can cache in the BBU safely.\nThe drives still need to have their write caches off (ie, in write-through\nmode).\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\nActually, the drive level write-cache does not have to be disabled, the controller just has to issue a drive write-cache-flush and use write barriers appropriately.  They are only a problem if the controller assumes that data that it sent to the drive has gotten to the platters without checking up on it or issuing a cache flush command to validate that things are on the platter.  The controller, if its any good, should handle this downstream configuration or document that it does not.  What is appropriate will vary, see documentation.  \nDrive write caches are 100% safe when used appropriately.  This is true with or without RAID, but in the case of a non-RAID or software raid setup the file system and OS have to do the right thing.  It true that many combinations of file system + OS (Linux LVM, for just one example) don't necessarily do the right thing, and some RAID controllers may also behave badly.  The safe thing is to turn off drive write back caches if in doubt, and the performance degradation caused by disabling it will be less for a good hardware RAID card with a large cache than in other cases.\nLikewise, the safe thing is not to bother with write-back cache on the raid controller as well -- it protects against power failure but NOT various hardware failures or raid card death.  I've seen the latter, where upon power loss and restore, the raid card was broken, and thus it could not flush the data it had in RAM (assuming it was still there) to disk.  Luckily, after getting another card and loading up the db, there was no corruption and we went on our way.  Never, ever assume that your raid array + BBU are fail-safe.  All that stuff makes failure a lot less likely, but not 0.", "msg_date": "Mon, 1 Sep 2008 14:03:11 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Mon, 1 Sep 2008, Thomas Finneid wrote:\n\n> It does have a sata raid controller, but not have the battery pack, because \n> its a develmachine and not a production machine, I thought it was not needed. \n> But if you are saying the battery pack enables a cache which enables faster \n> disk writes I will consider it.\n\nSome controllers will only let you enable a write-back cache if the \nbattery if installed, but those are fairly rare. 
On a development system, \nyou usually can turn on write caching even if the battery that makes that \nsafe for production isn't there.\n\n> The controller I have is a Areca ARC-1220 Serial ATA 8 port RAID \n> Controller - PCI-E, SATA II, so I dont know exactly what it supports of \n> caching.\n\nOn that card I'm not sure you can even turn off the controller write \ncaching if you wanted to. There's one thing that looks like that though \nbut isn't: go into the BIOS, look at System Configuration, and there will \nbe an option for \"Disk Write Cache Mode\". That actually controls whether \nthe caches on the individual disks are enabled or not, and the default of \n\"Auto\" sets that based on whethere there is a battery installed or not. \nSee http://www.gridpp.rl.ac.uk/blog/2008/02/12/areca-cards/ for a good \ndescription of that. The setting is quite confusing when set to Auto; I'd \nrecommend just setting it to \"Disabled\" and be done with it.\n\nYou can confirm what each drive is actually set to by drilling down into \nthe Physical Drives section, you'll find \"Cache Mode: Write Back\" if the \nindividual disk write caches are on, and \"Write Through\" if they're off.\n\nI'd suggest you take a look at \nhttp://notemagnet.blogspot.com/2008/08/linux-disk-failures-areca-is-not-so.html \nto find out more about the utilities that come with the card you can \naccess under Linux. You may have trouble using them under Ubuntu, I know \nI did. Better to know about that incompatibility before you've got a disk \nfailure.\n\nI note that nobody has talked about your postgresql.conf yet. I assume \nyou've turned autovacuum off because you're not ever deleting things from \nthese tables. You'll still need to run VACUUM ANALYZE periodically to \nkeep good statistics for your tables, but I don't think that's relevant to \nyour question today.\n\nI'd recommend changing all the memory-based parameters to use computer \nunits. Here's what your configuration turned into when I did that:\n\neffective_cache_size = 1000MB\nshared_buffers = 1000MB\nwork_mem = 512MB\nmaintenance_work_mem = 2000MB\nwal_buffers = 256kB\n\nThose are all close enough that I doubt fiddling with them will change \nmuch for your immediate problem. For a system with 8GB of RAM like yours, \nI would suggest replacing the above with the below set instead; see \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for more \ninformation.\n\neffective_cache_size = 7000MB\nshared_buffers = 2000MB\nwork_mem = 512MB\nmaintenance_work_mem = 512MB\nwal_buffers = 1024kB\ncheckpoint_completion_target = 0.9\n\nNote that such a large work_mem setting can run out of memory (which is \nvery bad on Linux) if you have many clients doing sorts at once.\n\n> wal_sync_method = fdatasync\n\nYou should try setting this to open_sync , that can be considerably faster \nfor some write-heavy situations. Make sure to test that throughly though, \nthere are occasional reports of issues with that setting under Linux; \nseems to vary based on kernel version. 
I haven't had a chance to test the \nUbuntu Hardy heavily in this area yet myself.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 1 Sep 2008 20:46:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "Greg Smith wrote:\n> On Mon, 1 Sep 2008, Thomas Finneid wrote:\n\nThanks for all the info on the disk controller, I will have to look \nthrough all that now :)\n\n> I note that nobody has talked about your postgresql.conf yet. I assume \n> you've turned autovacuum off because you're not ever deleting things \n> from these tables. \n\nThat is correct.\n\n> You'll still need to run VACUUM ANALYZE periodically \n> to keep good statistics for your tables, but I don't think that's \n\nwill look at it.\n\n> You should try setting this to open_sync , that can be considerably \n> faster for some write-heavy situations. Make sure to test that \n> throughly though, there are occasional reports of issues with that \n> setting under Linux; seems to vary based on kernel version. I haven't \n> had a chance to test the Ubuntu Hardy heavily in this area yet myself.\n\nThe production machine is Solaris 10 running on a Sun v980. Do you know \nof it has any issues like these?\nAdditionally, would I need to do any config changes when going from \nlinux to solaris?\n\nThe v980 has got lots of memory and a FC disk system, but I don't know \nmuch more about controllers and disk etc. But I suspects its got at \nleast the same features as the disks and controller thats in the devel \nmachine.\n\nregards\n\nthomas\n", "msg_date": "Tue, 02 Sep 2008 08:39:24 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Tue, 2 Sep 2008, Thomas Finneid wrote:\n\n>> You should try setting this to open_sync , that can be considerably faster \n>> for some write-heavy situations. Make sure to test that throughly though, \n>> there are occasional reports of issues with that setting under Linux\n>\n> The production machine is Solaris 10 running on a Sun v980. Do you know of it \n> has any issues like these?\n\nOn Solaris you can safely use open_datasync which is a bit better than \nopen_sync. For best results, you need to separate the xlog onto a \nseparate partition and mount it using forcedirectio, because Postgres \ndoesn't know how to use direct I/O directly on Solaris yet.\n\n> Additionally, would I need to do any config changes when going from linux to \n> solaris?\n\nAssuming the same amount of memory, the postgresql.conf should be \nbasically the same, except for the wal_sync_method change mentioned above. \nIf there's more RAM in the production server you can ramp up \nshared_buffers, effective_cache_size, and possibly work_mem \nproportionately. The settings I suggested for maintenance_work_mem and \nwal_buffers are already near the useful upper limits for those parameters.\n\nThere are a few operating system level things you should consider tweaking \non Solaris 10 for better PostgreSQL performance. You need to be a bit \nmore careful about the parameters used for the filesystem than on Linux, \nand the settings there vary considerably depending on whether you're using \nUFS or ZFS. 
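\n\n(One concrete example: if you give the xlog its own UFS slice as suggested above, forcedirectio is just a mount option -- device names here are made up:\n\n   mount -o forcedirectio /dev/dsk/c1t0d0s0 /pgdata/pg_xlog\n\nor the equivalent line in /etc/vfstab:\n\n   /dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /pgdata/pg_xlog  ufs  2  yes  forcedirectio\n\nZFS is a different story, with its own recordsize and caching knobs.)\n\n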
The best intro to that I know of is at \nhttp://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_best ; I added \nsome clarification to a few points in there and some other Solaris notes \nat http://notemagnet.blogspot.com/2008_04_01_archive.html Those should \nget you started.\n\nI hope you're already looking into some sort of repeatable benchmarking \nthat's representative of your application you can run. You'll go crazy \nplaying with all these settings without something like that.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 2 Sep 2008 03:57:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Mon, 1 Sep 2008, Scott Carey wrote:\n> Thanks for the info on the patch to support it -- however the versions \n> posted there are rather old...\n\nOver here, we're using an extremely old patched version of the JDBC \ndriver. That's the patch I sent to some mailing list a couple of years \nago. It works very well, but I would be very eager to see the COPY support \nmake it to the mainstream driver with a consistent interface.\n\n> On the performance impact of using COPY instead of INSERT.\n\nOver here, we use the binary form of COPY, and it is *really* fast. It's \nquite easy to saturate the discs.\n\nMatthew\n\n-- \nContrary to popular belief, Unix is user friendly. It just happens to be\nvery selective about who its friends are. -- Kyle Hearn\n", "msg_date": "Tue, 2 Sep 2008 13:29:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\nWhat about filesystem properties?\n\non linux I am using:\n\n ext3(with journal) and auto,rw,async,noatime,nodiratime\n\non disks for data and journal\n\nI am unsure if I need a journal in the fs or if the db covers that \nproblem. With regards to that, do I then need to set some linux setting \nto force inode syncing (dont rememver the name for the filesystem \nstructure in unix memory). The same question can be asked about the \nasync option.\n\nany thoughts?\n\nthomas\n\n\nGreg Smith wrote:\n> On Tue, 2 Sep 2008, Thomas Finneid wrote:\n> \n>>> You should try setting this to open_sync , that can be considerably \n>>> faster for some write-heavy situations. Make sure to test that \n>>> throughly though, there are occasional reports of issues with that \n>>> setting under Linux\n>>\n>> The production machine is Solaris 10 running on a Sun v980. Do you \n>> know of it has any issues like these?\n> \n> On Solaris you can safely use open_datasync which is a bit better than \n> open_sync. For best results, you need to separate the xlog onto a \n> separate partition and mount it using forcedirectio, because Postgres \n> doesn't know how to use direct I/O directly on Solaris yet.\n> \n>> Additionally, would I need to do any config changes when going from \n>> linux to solaris?\n> \n> Assuming the same amount of memory, the postgresql.conf should be \n> basically the same, except for the wal_sync_method change mentioned \n> above. If there's more RAM in the production server you can ramp up \n> shared_buffers, effective_cache_size, and possibly work_mem \n> proportionately. 
The settings I suggested for maintenance_work_mem and \n> wal_buffers are already near the useful upper limits for those parameters.\n> \n> There are a few operating system level things you should consider \n> tweaking on Solaris 10 for better PostgreSQL performance. You need to \n> be a bit more careful about the parameters used for the filesystem than \n> on Linux, and the settings there vary considerably depending on whether \n> you're using UFS or ZFS. The best intro to that I know of is at \n> http://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_best ; I \n> added some clarification to a few points in there and some other Solaris \n> notes at http://notemagnet.blogspot.com/2008_04_01_archive.html Those \n> should get you started.\n> \n> I hope you're already looking into some sort of repeatable benchmarking \n> that's representative of your application you can run. You'll go crazy \n> playing with all these settings without something like that.\n> \n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 04 Sep 2008 10:49:28 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "On Thu, 4 Sep 2008, Thomas Finneid wrote:\n\n> I am unsure if I need a journal in the fs or if the db covers that problem.\n\nThere are some theoretical cases where the guarantees of ext3 seems a \nlittle weak unless you've turned the full journal on even in a database \ncontext (we just had a long thread on this last month; see \nhttp://archives.postgresql.org/pgsql-performance/2008-08/msg00136.php for \nthe part that dives into this subject). In practice, the \"ordered\" mode \n(the default for ext3) seems sufficient to prevent database corruption. \nThere is a substantial performance hit to running in full journal mode \nlike you're doing; \nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \nshows ordered mode as nearly 3X faster.\n\nYou should always do your own stress testing on your hardware anyway, \nincluding a few rounds of powering off the server abruptly and making sure \nit recovers from that.\n\n> With regards to that, do I then need to set some linux setting to force inode \n> syncing (dont rememver the name for the filesystem structure in unix memory). \n> The same question can be asked about the async option.\n\nIn the default mode, the database speaks to the filesystem in terms of \nwrites followed by fsync, which forces both the data and associated \nmetadata out. It works similarly if you switch to sync writes. \nPostgreSQL is very precise about what data really needs to be written to \ndisk and what can sit in the cache until later, you shouldn't need to \nworry about the sync parts at the filesystem level (as long as said \nfilesystem implementation is sane).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 4 Sep 2008 15:35:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update of index during insert/copy" }, { "msg_contents": "\n\nGreg Smith wrote:\n> In practice, the \"ordered\" \n> mode (the default for ext3) seems sufficient to prevent database \n> corruption. There is a substantial performance hit to running in full \n> journal mode like you're doing; \n\nwhere do you see which mode I am running in? 
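\n\n(I suppose I could check it myself with something like\n\n   dmesg | grep -i ext3\n   cat /proc/mounts\n\nsince the kernel logs a line like: mounted filesystem with ordered data mode -- though I am not 100% sure that is the exact wording.)\n\n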
I havent specified any \nmodes in any config or commands, so I am assuming its using ext3 \ndefaults, which I am assuming is \"ordered\".\n\nregards\n\n\n", "msg_date": "Fri, 05 Sep 2008 10:09:21 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow update of index during insert/copy" } ]
[ { "msg_contents": "Hi, there.\n\nI have encountered an issue that there are too many \nclog file under the .../pg_clog/ directory. Some of them \nwere even produced one month ago.\n\nMy questions:\n- Does Vacuum delete the old clog files?\n- Can we controll the maximum number of the clog files?\n- When, or in what case is a new clog file produced?\n- Is there a mechanism that the clog files are recycled?\n\n#The version of my postgresql is 8.1\n\nRegards\nDuan\n\n\n", "msg_date": "Mon, 01 Sep 2008 19:22:48 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "too many clog files" }, { "msg_contents": "Duan Ligong wrote:\n> Hi, there.\n> \n> I have encountered an issue that there are too many \n> clog file under the .../pg_clog/ directory. Some of them \n> were even produced one month ago.\n\nIf you're going to repost a question, it is only polite that you link to\nthe answers already provided. Particularly so when some of your\nquestions were already answered.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 1 Sep 2008 10:33:59 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Alvaro Herrera wrote:\n> > Duan Ligong wrote:\n> > I have encountered an issue that there are too many \n> > clog file under the .../pg_clog/ directory. Some of them \n> > were even produced one month ago.\n> \n> If you're going to repost a question, it is only polite that you link to\n> the answers already provided. Particularly so when some of your\n> questions were already answered.\n\nOK, the following was the answer from you. \nhttp://archives.postgresql.org/pgsql-performance/2008-08/msg00346.php\nI appreciate your reply, but please \ndon't be such instructive except on postgresql.\n\nIn fact , I didn't get a satisfactory answer from you because\n\"Sorry, you ask more questions that I have time to answer right now.\"\n\nSo I tried to make my questions less and simpler as follows:\n- Does Vacuum delete the old clog files?\n- Can we controll the maximum number of the clog files?\n- When, or in what case is a new clog file produced?\n- Is there a mechanism that the clog files are recycled? If there is ,\nwhat is the mechanism?\n\n#The version of my postgresql is 8.1. \nI have encountered an issue that there are too many \nclog file under the .../pg_clog/ directory. Some of them \nwere even produced one month ago.\n\nThanks\nDuan\n\n\n> \n> -- \n> Alvaro Herrera http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n", "msg_date": "Tue, 02 Sep 2008 09:38:21 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: too many clog files" }, { "msg_contents": "On Tue, 2 Sep 2008, Duan Ligong wrote:\n\n> - Does Vacuum delete the old clog files?\n\nYes, if those transactions are all done. One possibility here is that \nyou've got some really long-running transaction floating around that is \nkeeping normal clog cleanup from happening. 
Take a look at the output \nfrom \"select * from pg_stat_activity\" and see if there are any really old \ntransactions floating around.\n\n> - Can we controll the maximum number of the clog files?\n\nThe reference Alvaro suggested at \nhttp://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND \ngoes over that. If you reduce autovacuum_freeze_max_age, that reduces the \nexpected maximum size of the clog files. \"The default, 200 million \ntransactions, translates to about 50MB of pg_clog storage.\" If you've got \nsignificantly more than 50MB of files in there, normally that means that \neither you're not running vacuum to clean things up usefully, or there's \nan old open transaction still floating around.\n\n> - When, or in what case is a new clog file produced?\n\nEvery 32K transactions. See http://wiki.postgresql.org/wiki/Hint_Bits for \nsome clarification about what the clog files actually do. I recently \ncollected some notes from this list and from the source code README files \nto give a high-level view there of what actually goes on under the hood in \nthis area.\n\n> - Is there a mechanism that the clog files are recycled? If there is ,\n> what is the mechanism?\n\nThe old ones should get wiped out by vacuum's freezing mechanism.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 2 Sep 2008 01:32:01 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Tue, 2 Sep 2008, Duan Ligong wrote:\n>> - Can we controll the maximum number of the clog files?\n\n> The reference Alvaro suggested at \n> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND \n> goes over that. If you reduce autovacuum_freeze_max_age, that reduces the \n> expected maximum size of the clog files.\n\nDuan's first post stated that he is running 8.1, which I believe had a\nquite different rule for truncating pg_clog --- it definitely has not\ngot the parameter autovacuum_freeze_max_age.\n\nToo tired to go reread the 8.1 code right now, but don't quote 8.3\ndocs at him, 'cause they're not too relevant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Sep 2008 01:47:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files " }, { "msg_contents": "Greg Smith a �crit :\n> [...]\n>> - When, or in what case is a new clog file produced?\n> \n> Every 32K transactions.\n\nAre you sure about this?\n\ny clog files get up to 262144 bytes. 
Which means 1000000 transactions'\nstatus: 262144 bytes are 2Mb (mega bits), so if a status is 2 bits, it\nholds 1M transactions' status).\n\nAFAICT, 32K transactions' status are available on a single (8KB) page.\n\nOr am I wrong?\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n", "msg_date": "Tue, 02 Sep 2008 19:06:33 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "On Tue, 2 Sep 2008, Guillaume Lelarge wrote:\n\n> AFAICT, 32K transactions' status are available on a single (8KB) page.\n\nYou're right, I had that right on the refered to page but mangled it when \nwriting the e-mail.\n\n> 262144 bytes are 2Mb (mega bits), so if a status is 2 bits, [a clog \n> file] holds 1M transactions' status).\n\nExactly.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 2 Sep 2008 13:28:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Thanks for your reply.\n\nGreg wrote:\n> On Tue, 2 Sep 2008, Duan Ligong wrote:\n> > - Does Vacuum delete the old clog files?\n> \n> Yes, if those transactions are all done. One possibility here is that \n> you've got some really long-running transaction floating around that is \n> keeping normal clog cleanup from happening. Take a look at the output \n> from \"select * from pg_stat_activity\" and see if there are any really old \n> transactions floating around.\n\nWell, we could not wait so long and just moved the old clog files.\nThe postgresql system is running well.\nBut now the size of pg_clog has exceeded 50MB and there \nare 457 clog files.\n- - - -\n[root@set19:AN0101 hydragui]# du -sh pgdata/pg_clog\n117M pgdata/pg_clog\n- - - -\n\nMy question is:\n- How to determine whether there is a long-running transactions\nor not based on the output of \"select * from pg_stat_activity\"?\nIt seems there is not the start time of transactions.\n- How should we deal with it if there is a long-running transactin?\nCan we do something to avoid long-running transactions?\n\nThe following is the output of \"select * from pg_stat_activity\".\n#It seems there are no query_start time information.\n- - - - \nxxxdb=> select * from pg_stat_activity;\n datid | datname | procpid | usesysid | usename | current_query | query_start\n-------+-------------+---------+----------+----------+---------------+-------------\n 92406 | xxxdb | 17856 | 100 | myname | |\n 92406 | xxxdb | 31052 | 100 | myname | |\n(2 rows)\n- - - -\nAfter about 6minutes, I execute it again and the output is\n- - - - \nxxxdb=> select * from pg_stat_activity;\n datid | datname | procpid | usesysid | usename | current_query | query_start\n-------+-------------+---------+----------+----------+---------------+-------------\n 92406 | xxxdb | 5060 | 100 |myname | |\n 92406 | xxxdb | 5626 | 100 |myname | |\n(2 rows)\n\n- - - -\n#my postgresql version is 8.1\n\n\n> > - Can we controll the maximum number of the clog files?\n> \n> The reference Alvaro suggested at \n> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND \n> goes over that. If you reduce autovacuum_freeze_max_age, that reduces the \n> expected maximum size of the clog files. 
\"The default, 200 million \n> transactions, translates to about 50MB of pg_clog storage.\" If you've got \n> significantly more than 50MB of files in there, normally that means that \n> either you're not running vacuum to clean things up usefully, or there's \n> an old open transaction still floating around.\n> \n> > - When, or in what case is a new clog file produced?\n> \n> Every 32K transactions. See http://wiki.postgresql.org/wiki/Hint_Bits for \n> some clarification about what the clog files actually do. I recently \n> collected some notes from this list and from the source code README files \n> to give a high-level view there of what actually goes on under the hood in \n> this area.\n\nIt seems that one transaction occupies 2bit and 32K transactions should \noccupy 8K, which is the size of one page.\nThe size of each clog file is 256KB.\nIs there other information which is in clog files except the Hint_bits?\nAnd Do the other information and Hint_bit of 32K transactions occupy \n256KB?\n\n> > - Is there a mechanism that the clog files are recycled? If there is ,\n> > what is the mechanism?\n> \n> The old ones should get wiped out by vacuum's freezing mechanism.\n\nDoes Wiping out clog files has something to do with the configuration \nexcept Vacuum? Does we have to set some parameter to enable\nthe function of Vacuum's wipping out old clog files?\n\nThe following is my postgresql.conf file:\n- - - -\ntcpip_socket = true\nmax_connections = 500\nport = 5432\nshared_buffers = 1000 \nsyslog = 2 \nlog_min_messages = fatal \n- - - -\n#my postgresql database is very small.\n\nThanks\nDuan\n\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n", "msg_date": "Fri, 05 Sep 2008 10:58:00 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: too many clog files" }, { "msg_contents": "On Fri, 5 Sep 2008, Duan Ligong wrote:\n> Well, we could not wait so long and just moved the old clog files.\n\nCongratulations. You have probably just destroyed your database.\n\nMatthew\n\n-- \nLord grant me patience, and I want it NOW!\n", "msg_date": "Fri, 5 Sep 2008 12:49:47 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Considering a quad core server processor, 4 GBs of RAM memory, disk Sata\n2. \n\nWhat is the recommended setting for the parameters:\n\nmax_connections:70\nmax_prepared_transactions?\nshared_buffers? \nwal_buffers?\nmax_fsm_relations? \nmax_fsm_pages?\n\nAtenciosamente,\nJonas Rodrigo\n\n", "msg_date": "Fri, 5 Sep 2008 08:59:40 -0300", "msg_from": "\"Jonas Pacheco\" <[email protected]>", "msg_from_op": false, "msg_subject": "You may need to increase mas_loks_per_trasaction" }, { "msg_contents": "> Considering a quad core server processor, 4 GBs of RAM memory, disk Sata\n> 2.\n>\n> What is the recommended setting for the parameters:\n>\n> max_connections:70\n\nDepends on how many clients that access the database.\n\n> shared_buffers?\n\nI have mine at 512 MB but I will lower it and see how it affects\nperformance. I have 16 GB in my server.\n\n> max_fsm_relations?\n> max_fsm_pages?\n\nPerform a vacuum analyze verbose and look at the last few lines. 
This\nwill tell you whether you need to increase max_fsm_*.\n\nConsider lowering random_page_cost so it favoes indexex more often\nthan seq. scans.\n\nBut if you don't get a decent raid-controller your data will move slow\nand tuning will only make a minor difference.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 5 Sep 2008 14:37:53 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You may need to increase mas_loks_per_trasaction" }, { "msg_contents": "Duan Ligong wrote:\n\n> Greg wrote:\n> > On Tue, 2 Sep 2008, Duan Ligong wrote:\n> > > - Does Vacuum delete the old clog files?\n> > \n> > Yes, if those transactions are all done. One possibility here is that \n> > you've got some really long-running transaction floating around that is \n> > keeping normal clog cleanup from happening. Take a look at the output \n> > from \"select * from pg_stat_activity\" and see if there are any really old \n> > transactions floating around.\n> \n> Well, we could not wait so long and just moved the old clog files.\n> The postgresql system is running well.\n\nMove the old clog files back where they were, and run VACUUM FREEZE in\nall your databases. That should clean up all the old pg_clog files, if\nyou're really that desperate. This is not something that I'd recommend\ndoing on a periodic basis ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 5 Sep 2008 12:24:46 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "On Thu, Sep 4, 2008 at 8:58 PM, Duan Ligong <[email protected]> wrote:\n> Thanks for your reply.\n>\n> Greg wrote:\n>> On Tue, 2 Sep 2008, Duan Ligong wrote:\n>> > - Does Vacuum delete the old clog files?\n>>\n>> Yes, if those transactions are all done. One possibility here is that\n>> you've got some really long-running transaction floating around that is\n>> keeping normal clog cleanup from happening. Take a look at the output\n>> from \"select * from pg_stat_activity\" and see if there are any really old\n>> transactions floating around.\n>\n> Well, we could not wait so long and just moved the old clog files.\n> The postgresql system is running well.\n> But now the size of pg_clog has exceeded 50MB and there\n> are 457 clog files.\n\nThat is absolutely not the thing to do. Put them back, and do a\ndump-restore on the database if you need to save a few hundred megs on\nthe drive. 
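\n\n(Roughly along these lines, database name being an example:\n\n   pg_dump -Fc mydb > mydb.dump\n   dropdb mydb\n   createdb mydb\n   pg_restore -d mydb mydb.dump\n\nwith the old clog files put back first so the cluster starts cleanly.)\n\n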
Deleting files from underneath postgresql is a great way\nto break your database in new and interesting ways which are often\nfatal to your data.\n", "msg_date": "Fri, 5 Sep 2008 10:39:09 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "On Fri, 5 Sep 2008, Jonas Pacheco wrote:\n\n> max_prepared_transactions?\n\nThis is covered pretty well by the documentation: \nhttp://www.postgresql.org/docs/current/static/runtime-config-resource.html\n\nThere are suggestions for everything else you asked about (and a few more \nthings you should also set) at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 5 Sep 2008 16:43:59 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You may need to increase mas_loks_per_trasaction" }, { "msg_contents": "Rex Mabry wrote:\n> Try this query for looking at queries.\n\nThanks so much for your reply.\n\n> SELECT procpid, usename , (now() - query_start) as age, query_start, l.mode, l.granted\n>    FROM pg_stat_activity a LEFT OUTER JOIN pg_locks l ON (a.procpid = l.pid)\n>    LEFT OUTER JOIN pg_class c ON (l.relation = c.oid)\n>    WHERE current_query NOT IN ('<IDLE>') \n>    and a.usename <> 'postgres'\n>    ORDER BY pid;\n\nI tried it and the output are as follows:\n- - - -\nmydb=> SELECT procpid, usename , (now() - query_start) \nas age, query_start, l.mode, l.granted FROM pg_stat_activity a\nLEFT UTER JOIN pg_locks l ON (a.procpid = l.pid) LEFT \nOUTER JOIN pg_class c N (l.relation = c.oid) WHERE \ncurrent_query NOT IN ('<IDLE>') and .usename <> \n'postgres' ORDER BY pid;\nprocpid | usename | age | query_start | mode | granted\n---------+----------+-----+-------------+-----------------+---------\n 4134 | myname | | | AccessShareLock | t\n 4134 |myname | | | AccessShareLock | t\n 4134 | myname | | | ExclusiveLock | t\n 4134 | myname | | | AccessShareLock | t\n 4134 | mynamei | | | AccessShareLock | t\n 4134 | myname | | | AccessShareLock | t\n 4134 | myname | | | AccessShareLock | t\n(7 rows)\n- - - -\nAre these 7 rows the long-running transactions? \nIf they are, what should we do with them? just killing the processes?\n#It seems that the 7 rows are caused by logining db and executing \nthe select clause. and they are not long-running transactions, aren't\nthey? because the procpid changes when I logout and login again.\n\nThanks\nDuan\n\n> --- On Thu, 9/4/08, Duan Ligong <[email protected]> wrote:\n> \n> From: Duan Ligong <[email protected]>\n> Subject: Re: [PERFORM] too many clog files\n> To: \"Greg Smith\" <[email protected]>, [email protected]\n> Date: Thursday, September 4, 2008, 9:58 PM\n> \n> Thanks for your reply.\n> \n> Greg wrote:\n> > On Tue, 2 Sep 2008, Duan Ligong wrote:\n> > > - Does Vacuum delete the old clog files?\n> > \n> > Yes, if those transactions are all done. One possibility here is that \n> > you've got some really long-running transaction floating around that\n> is \n> > keeping normal clog cleanup from happening. 
Take a look at the output \n> > from \"select * from pg_stat_activity\" and see if there are any\n> really old \n> > transactions floating around.\n> \n> Well, we could not wait so long and just moved the old clog files.\n> The postgresql system is running well.\n> But now the size of pg_clog has exceeded 50MB and there \n> are 457 clog files.\n> - - - -\n> [root@set19:AN0101 hydragui]# du -sh pgdata/pg_clog\n> 117M pgdata/pg_clog\n> - - - -\n> \n> My question is:\n> - How to determine whether there is a long-running transactions\n> or not based on the output of \"select * from pg_stat_activity\"?\n> It seems there is not the start time of transactions.\n> - How should we deal with it if there is a long-running transactin?\n> Can we do something to avoid long-running transactions?\n> \n> The following is the output of \"select * from pg_stat_activity\".\n> #It seems there are no query_start time information.\n> - - - - \n> xxxdb=> select * from pg_stat_activity;\n> datid | datname | procpid | usesysid | usename | current_query |\n> query_start\n> -------+-------------+---------+----------+----------+---------------+-------------\n> 92406 | xxxdb | 17856 | 100 | myname | |\n> 92406 | xxxdb | 31052 | 100 | myname | |\n> (2 rows)\n> - - - -\n> After about 6minutes, I execute it again and the output is\n> - - - - \n> xxxdb=> select * from pg_stat_activity;\n> datid | datname | procpid | usesysid | usename | current_query |\n> query_start\n> -------+-------------+---------+----------+----------+---------------+-------------\n> 92406 | xxxdb | 5060 | 100 |myname | |\n> 92406 | xxxdb | 5626 | 100 |myname | |\n> (2 rows)\n> \n> - - - -\n> #my postgresql version is 8.1\n> \n> \n> > > - Can we controll the maximum number of the clog files?\n> > \n> > The reference Alvaro suggested at \n> >\n> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n> \n> > goes over that. If you reduce autovacuum_freeze_max_age, that reduces the\n> \n> > expected maximum size of the clog files. \"The default, 200 million \n> > transactions, translates to about 50MB of pg_clog storage.\" If\n> you've got \n> > significantly more than 50MB of files in there, normally that means that \n> > either you're not running vacuum to clean things up usefully, or\n> there's \n> > an old open transaction still floating around.\n> > \n> > > - When, or in what case is a new clog file produced?\n> > \n> > Every 32K transactions. See http://wiki.postgresql.org/wiki/Hint_Bits for\n> \n> > some clarification about what the clog files actually do. I recently \n> > collected some notes from this list and from the source code README files \n> > to give a high-level view there of what actually goes on under the hood in\n> \n> > this area.\n> \n> It seems that one transaction occupies 2bit and 32K transactions should \n> occupy 8K, which is the size of one page.\n> The size of each clog file is 256KB.\n> Is there other information which is in clog files except the Hint_bits?\n> And Do the other information and Hint_bit of 32K transactions occupy \n> 256KB?\n> \n> > > - Is there a mechanism that the clog files are recycled? If there is\n> ,\n> > > what is the mechanism?\n> > \n> > The old ones should get wiped out by vacuum's freezing mechanism.\n> \n> Does Wiping out clog files has something to do with the configuration \n> except Vacuum? 
Does we have to set some parameter to enable\n> the function of Vacuum's wipping out old clog files?\n> \n> The following is my postgresql.conf file:\n> - - - -\n> tcpip_socket = true\n> max_connections = 500\n> port = 5432\n> shared_buffers = 1000 \n> syslog = 2 \n> log_min_messages = fatal \n> - - - -\n> #my postgresql database is very small.\n> \n> Thanks\n> Duan\n> \n> > --\n> > * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> > \n> > -- \n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> \n\n\n\n", "msg_date": "Mon, 08 Sep 2008 10:45:02 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Alvaro wrote:\n> Move the old clog files back where they were, and run VACUUM FREEZE in\n> all your databases. That should clean up all the old pg_clog files, if\n> you're really that desperate. This is not something that I'd recommend\n> doing on a periodic basis ...\n\nThank you for your suggestions.\nI tried it with VACUUM FREEZE, but it still does not work.\nVACUUM FULL was also be tried, but it doesn't work, either.\nThe old files were not be deleted.\n\nI suspect there are some configuration items which disable vacuum's \ncleaning old clog files, because it seems that vacuum could not \ndelete old clog files at all.\nMy configurations are as follows:\n- - - -\ntcpip_socket = true\nmax_connections = 500\nport = 5432\nshared_buffers = 1000 \nsyslog = 2 \nlog_min_messages = fatal\n#others are default values.\n- - - -\n\nThanks\nDuan\n\n> Duan Ligong wrote:\n> \n> > Greg wrote:\n> > > On Tue, 2 Sep 2008, Duan Ligong wrote:\n> > > > - Does Vacuum delete the old clog files?\n> > > \n> > > Yes, if those transactions are all done. One possibility here is that \n> > > you've got some really long-running transaction floating around that is \n> > > keeping normal clog cleanup from happening. Take a look at the output \n> > > from \"select * from pg_stat_activity\" and see if there are any really old \n> > > transactions floating around.\n> > \n> > Well, we could not wait so long and just moved the old clog files.\n> > The postgresql system is running well.\n> \n> Move the old clog files back where they were, and run VACUUM FREEZE in\n> all your databases. That should clean up all the old pg_clog files, if\n> you're really that desperate. This is not something that I'd recommend\n> doing on a periodic basis ...\n> \n> -- \n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n", "msg_date": "Mon, 08 Sep 2008 11:07:45 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: too many clog files" } ]
[ { "msg_contents": "Hi,\n\n \n\nI have a single table with about 10 million rows, and two indexes. Index A\nis on a column A with 95% null values. Index B is on a column B with about\n10 values, ie. About a million rows of each value.\n\n \n\nWhen I do a simple query on the table (no joins) with the following\ncondition:\n\nA is null AND\n\nB = '21'\n\n \n\nit uses the correct index, index B. However, when I add a limit clause of\n15, postgres decides to do a sequential scan :s. Looking at the results\nfrom explain:\n\n \n\n\"Limit (cost=0.00..3.69 rows=15 width=128)\"\n\n\" -> Seq Scan on my_table this_ (cost=0.00..252424.24 rows=1025157\nwidth=128)\"\n\n\" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n\n \n\nIt appears that postgres is (very incorrectly) assuming that it will only\nhave to retrieve 15 rows on a sequential scan, and gives a total cost of\n3.69. In reality, it has to scan many rows forward until it finds the\ncorrect value, yielding very poor performance for my table.\n\n \n\nIf I disable sequential scan (set enable_seqscan=false) it then incorrectly\nuses the index A that has 95% null values: it seems to incorrectly apply the\nsame logic again that it will only have to retrieve 15 rows with the limit\nclause, and thinks that the index scan using A is faster than index scan B.\n\n \n\nOnly by deleting the index on A and disabling sequential scan will it use\nthe correct index, which is of course by far the fastest.\n\n \n\nIs there an assumption in the planner that a limit of 15 will mean that\npostgres will only have to read 15 rows? If so is this a bad assumption?\nIf a particular query is faster without a limit, then surely it will also be\nfaster with the limit.\n\n \n\nAny workarounds for this?\n\n \n\nThanks\n\nDavid\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have a single table with about 10 million rows, and two\nindexes.  Index A is on a column A with 95% null values.  Index B is\non a column B with about 10 values, ie. About a million rows of each value.\n \nWhen I do a simple query on the table (no joins) with the\nfollowing condition:\nA is null AND\nB = ‘21’\n \nit uses the correct index, index B.  However, when I\nadd a limit clause of 15, postgres decides to do a sequential scan :s. \nLooking at the results from explain:\n \n\"Limit  (cost=0.00..3.69 rows=15 width=128)\"\n\"  ->  Seq Scan on my_table this_ \n(cost=0.00..252424.24 rows=1025157 width=128)\"\n\"        Filter: ((A\nIS NULL) AND ((B)::text = '21'::text))\"\n \nIt appears that postgres is (very incorrectly) assuming that\nit will only have to retrieve 15 rows on a sequential scan, and gives a total\ncost of 3.69.  In reality, it has to scan many rows forward until it finds\nthe correct value, yielding very poor performance for my table.\n \nIf I disable sequential scan (set enable_seqscan=false) it then\nincorrectly uses the index A that has 95% null values: it seems to incorrectly\napply the same logic again that it will only have to retrieve 15 rows with the\nlimit clause, and thinks that the index scan using A is faster than index scan\nB.\n \nOnly by deleting the index on A and disabling sequential\nscan will it use the correct index, which is of course by far the fastest.\n \nIs there an assumption in the planner that a limit of 15\nwill mean that postgres will only have to read 15 rows?  If so is this a\nbad assumption?  
If a particular query is faster without a limit, then\nsurely it will also be faster with the limit.\n \nAny workarounds for this?\n \nThanks\nDavid", "msg_date": "Mon, 1 Sep 2008 13:18:33 +0100", "msg_from": "\"David West\" <[email protected]>", "msg_from_op": true, "msg_subject": "limit clause breaks query planner?" } ]
[ { "msg_contents": "Hello\n\nyou should partial index\n\ncreate index foo(b) on mytable where a is null;\n\nregards\nPavel Stehule\n\n2008/9/1 David West <[email protected]>:\n> Hi,\n>\n>\n>\n> I have a single table with about 10 million rows, and two indexes. Index A\n> is on a column A with 95% null values. Index B is on a column B with about\n> 10 values, ie. About a million rows of each value.\n>\n>\n>\n> When I do a simple query on the table (no joins) with the following\n> condition:\n>\n> A is null AND\n>\n> B = '21'\n>\n>\n>\n> it uses the correct index, index B. However, when I add a limit clause of\n> 15, postgres decides to do a sequential scan :s. Looking at the results\n> from explain:\n>\n>\n>\n> \"Limit (cost=0.00..3.69 rows=15 width=128)\"\n>\n> \" -> Seq Scan on my_table this_ (cost=0.00..252424.24 rows=1025157\n> width=128)\"\n>\n> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>\n>\n>\n> It appears that postgres is (very incorrectly) assuming that it will only\n> have to retrieve 15 rows on a sequential scan, and gives a total cost of\n> 3.69. In reality, it has to scan many rows forward until it finds the\n> correct value, yielding very poor performance for my table.\n>\n>\n>\n> If I disable sequential scan (set enable_seqscan=false) it then incorrectly\n> uses the index A that has 95% null values: it seems to incorrectly apply the\n> same logic again that it will only have to retrieve 15 rows with the limit\n> clause, and thinks that the index scan using A is faster than index scan B.\n>\n>\n>\n> Only by deleting the index on A and disabling sequential scan will it use\n> the correct index, which is of course by far the fastest.\n>\n>\n>\n> Is there an assumption in the planner that a limit of 15 will mean that\n> postgres will only have to read 15 rows? If so is this a bad assumption?\n> If a particular query is faster without a limit, then surely it will also be\n> faster with the limit.\n>\n>\n>\n> Any workarounds for this?\n>\n>\n>\n> Thanks\n>\n> David\n", "msg_date": "Mon, 1 Sep 2008 14:52:53 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause breaks query planner?" 
}, { "msg_contents": "Thanks for your suggestion but the result is the same.\n\nHere is the explain analyse output from different queries.\nSelect * from my_table where A is null and B = '21' limit 15\n\n\"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n\" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n\" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n\"Total runtime: 85896.214 ms\"\n\nAs you can see the estimated cost was 3.68: a long way from the true value.\n\nDoing 'set enable_seqscan=false' and repeating the select:\n\"Limit (cost=0.00..5.58 rows=15 width=128) (actual time=4426.438..4426.834 rows=15 loops=1)\"\n\" -> Index Scan using idx_A on my_table this_ (cost=0.00..392956.76 rows=1055970 width=128) (actual time=4426.433..4426.768 rows=15 loops=1)\"\n\" Index Cond: (A IS NULL)\"\n\" Filter: ((B)::text = '21'::text)\"\n\"Total runtime: 4426.910 ms\"\n\nProbably some caching made this query faster, but it's still too slow, and using the wrong index.\n\nDeleting index A gives:\n\"Limit (cost=0.00..56.47 rows=15 width=128) (actual time=10.298..10.668 rows=15 loops=1)\"\n\" -> Index Scan using idx_B on my_table this_ (cost=0.00..3982709.15 rows=1057960 width=128) (actual time=10.293..10.618 rows=15 loops=1)\"\n\" Index Cond: ((B)::text = '21'::text)\"\n\" Filter: (A IS NULL)\"\n\"Total runtime: 10.735 ms\"\nMuch better. However I need index A for another query so I can't just delete it.\n\nLooking at the estimated cost, you can see why it's choosing the order that it is choosing, but it just doesn't seem to reflect reality at all.\n\nNow here's the result of the query, with both indexes in place and sequential scan enabled\nSelect * from my_table where A is null and B = '21'\n\"Bitmap Heap Scan on my_table this_ (cost=20412.89..199754.37 rows=1060529 width=128) (actual time=470.772..7432.062 rows=1020062 loops=1)\"\n\" Recheck Cond: ((B)::text = '21'::text)\"\n\" Filter: (A IS NULL)\"\n\" -> Bitmap Index Scan on idx_B (cost=0.00..20147.76 rows=1089958 width=0) (actual time=466.545..466.545 rows=1020084 loops=1)\"\n\" Index Cond: ((B)::text = '21'::text)\"\n\"Total runtime: 8940.119 ms\"\n\nIn this case it goes for the correct index. It appears that the query planner makes very simplistic assumptions when it comes to LIMIT?\n\nThanks\nDavid\n\n-----Original Message-----\nFrom: Pavel Stehule [mailto:[email protected]] \nSent: 01 September 2008 13:53\nTo: David West\nCc: [email protected]\nSubject: Re: [PERFORM] limit clause breaks query planner?\n\nHello\n\nyou should partial index\n\ncreate index foo(b) on mytable where a is null;\n\nregards\nPavel Stehule\n\n2008/9/1 David West <[email protected]>:\n> Hi,\n>\n>\n>\n> I have a single table with about 10 million rows, and two indexes. Index A\n> is on a column A with 95% null values. Index B is on a column B with about\n> 10 values, ie. About a million rows of each value.\n>\n>\n>\n> When I do a simple query on the table (no joins) with the following\n> condition:\n>\n> A is null AND\n>\n> B = '21'\n>\n>\n>\n> it uses the correct index, index B. However, when I add a limit clause of\n> 15, postgres decides to do a sequential scan :s. 
Looking at the results\n> from explain:\n>\n>\n>\n> \"Limit (cost=0.00..3.69 rows=15 width=128)\"\n>\n> \" -> Seq Scan on my_table this_ (cost=0.00..252424.24 rows=1025157\n> width=128)\"\n>\n> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>\n>\n>\n> It appears that postgres is (very incorrectly) assuming that it will only\n> have to retrieve 15 rows on a sequential scan, and gives a total cost of\n> 3.69. In reality, it has to scan many rows forward until it finds the\n> correct value, yielding very poor performance for my table.\n>\n>\n>\n> If I disable sequential scan (set enable_seqscan=false) it then incorrectly\n> uses the index A that has 95% null values: it seems to incorrectly apply the\n> same logic again that it will only have to retrieve 15 rows with the limit\n> clause, and thinks that the index scan using A is faster than index scan B.\n>\n>\n>\n> Only by deleting the index on A and disabling sequential scan will it use\n> the correct index, which is of course by far the fastest.\n>\n>\n>\n> Is there an assumption in the planner that a limit of 15 will mean that\n> postgres will only have to read 15 rows? If so is this a bad assumption?\n> If a particular query is faster without a limit, then surely it will also be\n> faster with the limit.\n>\n>\n>\n> Any workarounds for this?\n>\n>\n>\n> Thanks\n>\n> David\n\n", "msg_date": "Mon, 1 Sep 2008 15:44:21 +0100", "msg_from": "\"David West\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Hello\n\n2008/9/1 David West <[email protected]>:\n> Thanks for your suggestion but the result is the same.\n>\n> Here is the explain analyse output from different queries.\n> Select * from my_table where A is null and B = '21' limit 15\n>\n> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n> \"Total runtime: 85896.214 ms\"\n>\n\nI see it - problem is in statistics - system expect 1055580, but there\nis only 15 values.\n\ntry\na) increase statistics on column a and b - probably there are strong\ndependency between column a nad b, because statistic are totally out\nb) try cursors\ndeclare cursor c as Select * from my_table where A is null and B =\n'21' limit 15;\nfetch forward 15 from c;\n\nhttp://www.postgresql.org/docs/8.2/static/sql-fetch.html\n\nmaybe\n\nselect * from (select * from mytable where b = '21' offset 0) where a\nis null limit 15\n\nregards\nPavel Stehule\n\n\n\n\n> As you can see the estimated cost was 3.68: a long way from the true value.\n>\n> Doing 'set enable_seqscan=false' and repeating the select:\n> \"Limit (cost=0.00..5.58 rows=15 width=128) (actual time=4426.438..4426.834 rows=15 loops=1)\"\n> \" -> Index Scan using idx_A on my_table this_ (cost=0.00..392956.76 rows=1055970 width=128) (actual time=4426.433..4426.768 rows=15 loops=1)\"\n> \" Index Cond: (A IS NULL)\"\n> \" Filter: ((B)::text = '21'::text)\"\n> \"Total runtime: 4426.910 ms\"\n>\n> Probably some caching made this query faster, but it's still too slow, and using the wrong index.\n>\n> Deleting index A gives:\n> \"Limit (cost=0.00..56.47 rows=15 width=128) (actual time=10.298..10.668 rows=15 loops=1)\"\n> \" -> Index Scan using idx_B on my_table this_ (cost=0.00..3982709.15 rows=1057960 width=128) (actual time=10.293..10.618 rows=15 
loops=1)\"\n> \" Index Cond: ((B)::text = '21'::text)\"\n> \" Filter: (A IS NULL)\"\n> \"Total runtime: 10.735 ms\"\n> Much better. However I need index A for another query so I can't just delete it.\n>\n> Looking at the estimated cost, you can see why it's choosing the order that it is choosing, but it just doesn't seem to reflect reality at all.\n>\n> Now here's the result of the query, with both indexes in place and sequential scan enabled\n> Select * from my_table where A is null and B = '21'\n> \"Bitmap Heap Scan on my_table this_ (cost=20412.89..199754.37 rows=1060529 width=128) (actual time=470.772..7432.062 rows=1020062 loops=1)\"\n> \" Recheck Cond: ((B)::text = '21'::text)\"\n> \" Filter: (A IS NULL)\"\n> \" -> Bitmap Index Scan on idx_B (cost=0.00..20147.76 rows=1089958 width=0) (actual time=466.545..466.545 rows=1020084 loops=1)\"\n> \" Index Cond: ((B)::text = '21'::text)\"\n> \"Total runtime: 8940.119 ms\"\n>\n> In this case it goes for the correct index. It appears that the query planner makes very simplistic assumptions when it comes to LIMIT?\n>\n> Thanks\n> David\n>\n> -----Original Message-----\n> From: Pavel Stehule [mailto:[email protected]]\n> Sent: 01 September 2008 13:53\n> To: David West\n> Cc: [email protected]\n> Subject: Re: [PERFORM] limit clause breaks query planner?\n>\n> Hello\n>\n> you should partial index\n>\n> create index foo(b) on mytable where a is null;\n>\n> regards\n> Pavel Stehule\n>\n> 2008/9/1 David West <[email protected]>:\n>> Hi,\n>>\n>>\n>>\n>> I have a single table with about 10 million rows, and two indexes. Index A\n>> is on a column A with 95% null values. Index B is on a column B with about\n>> 10 values, ie. About a million rows of each value.\n>>\n>>\n>>\n>> When I do a simple query on the table (no joins) with the following\n>> condition:\n>>\n>> A is null AND\n>>\n>> B = '21'\n>>\n>>\n>>\n>> it uses the correct index, index B. However, when I add a limit clause of\n>> 15, postgres decides to do a sequential scan :s. Looking at the results\n>> from explain:\n>>\n>>\n>>\n>> \"Limit (cost=0.00..3.69 rows=15 width=128)\"\n>>\n>> \" -> Seq Scan on my_table this_ (cost=0.00..252424.24 rows=1025157\n>> width=128)\"\n>>\n>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>>\n>>\n>>\n>> It appears that postgres is (very incorrectly) assuming that it will only\n>> have to retrieve 15 rows on a sequential scan, and gives a total cost of\n>> 3.69. In reality, it has to scan many rows forward until it finds the\n>> correct value, yielding very poor performance for my table.\n>>\n>>\n>>\n>> If I disable sequential scan (set enable_seqscan=false) it then incorrectly\n>> uses the index A that has 95% null values: it seems to incorrectly apply the\n>> same logic again that it will only have to retrieve 15 rows with the limit\n>> clause, and thinks that the index scan using A is faster than index scan B.\n>>\n>>\n>>\n>> Only by deleting the index on A and disabling sequential scan will it use\n>> the correct index, which is of course by far the fastest.\n>>\n>>\n>>\n>> Is there an assumption in the planner that a limit of 15 will mean that\n>> postgres will only have to read 15 rows? 
If so is this a bad assumption?\n>> If a particular query is faster without a limit, then surely it will also be\n>> faster with the limit.\n>>\n>>\n>>\n>> Any workarounds for this?\n>>\n>>\n>>\n>> Thanks\n>>\n>> David\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 1 Sep 2008 21:17:17 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Pavel Stehule wrote:\n> Hello\n>\n> 2008/9/1 David West <[email protected]>:\n> \n>> Thanks for your suggestion but the result is the same.\n>>\n>> Here is the explain analyse output from different queries.\n>> Select * from my_table where A is null and B = '21' limit 15\n>>\n>> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n>> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>> \"Total runtime: 85896.214 ms\"\n>>\n>> \n[snip]\n\nFurther to Pavel's comments;\n\n(actual time=85837.038..85896.091 rows=15 loops=1)\n\nThat's 85 seconds on a sequence scan to return the first tuple. The table is not bloated by any chance is it?\n\nRegards\n\nRussell\n\n\n\n", "msg_date": "Tue, 02 Sep 2008 13:54:53 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\"Pavel Stehule\" <pavel.stehule 'at' gmail.com> writes:\n\n> Hello\n>\n> 2008/9/1 David West <[email protected]>:\n>> Thanks for your suggestion but the result is the same.\n>>\n>> Here is the explain analyse output from different queries.\n>> Select * from my_table where A is null and B = '21' limit 15\n>>\n>> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n>> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>> \"Total runtime: 85896.214 ms\"\n>>\n>\n> I see it - problem is in statistics - system expect 1055580, but there\n> is only 15 values.\n\nAren't you rather seeing the effect of the limit clause?\n\ngc=# create table foo ( bar int );\nCREATE TABLE\ngc=# insert into foo ( select generate_series(0, 10000000) / 1000000 );\nINSERT 0 10000001\ngc=# analyze foo;\nANALYZE\ngc=# explain analyze select * from foo where bar = 8 limit 15;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.30 rows=15 width=4) (actual time=2379.878..2379.921 rows=15 loops=1)\n -> Seq Scan on foo (cost=0.00..164217.00 rows=1070009 width=4) (actual time=2379.873..2379.888 rows=15 loops=1)\n Filter: (bar = 8)\n Total runtime: 2379.974 ms\n\n(on 8.3.1)\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Tue, 02 Sep 2008 09:44:33 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" 
}, { "msg_contents": "2008/9/2 Guillaume Cottenceau <[email protected]>:\n> \"Pavel Stehule\" <pavel.stehule 'at' gmail.com> writes:\n>\n>> Hello\n>>\n>> 2008/9/1 David West <[email protected]>:\n>>> Thanks for your suggestion but the result is the same.\n>>>\n>>> Here is the explain analyse output from different queries.\n>>> Select * from my_table where A is null and B = '21' limit 15\n>>>\n>>> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n>>> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n>>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>>> \"Total runtime: 85896.214 ms\"\n>>>\n>>\n>> I see it - problem is in statistics - system expect 1055580, but there\n>> is only 15 values.\n>\n> Aren't you rather seeing the effect of the limit clause?\n\nyes, true, my mistake\n\nPavel\n\n>\n> gc=# create table foo ( bar int );\n> CREATE TABLE\n> gc=# insert into foo ( select generate_series(0, 10000000) / 1000000 );\n> INSERT 0 10000001\n> gc=# analyze foo;\n> ANALYZE\n> gc=# explain analyze select * from foo where bar = 8 limit 15;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..2.30 rows=15 width=4) (actual time=2379.878..2379.921 rows=15 loops=1)\n> -> Seq Scan on foo (cost=0.00..164217.00 rows=1070009 width=4) (actual time=2379.873..2379.888 rows=15 loops=1)\n> Filter: (bar = 8)\n> Total runtime: 2379.974 ms\n>\n> (on 8.3.1)\n>\n> --\n> Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\n> Av. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n>\n", "msg_date": "Tue, 2 Sep 2008 09:46:58 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Russell Smith <mr-russ 'at' pws.com.au> writes:\n\n> Pavel Stehule wrote:\n>> Hello\n>>\n>> 2008/9/1 David West <[email protected]>:\n>> \n>>> Thanks for your suggestion but the result is the same.\n>>>\n>>> Here is the explain analyse output from different queries.\n>>> Select * from my_table where A is null and B = '21' limit 15\n>>>\n>>> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n>>> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n>>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>>> \"Total runtime: 85896.214 ms\"\n>>>\n>>> \n> [snip]\n>\n> Further to Pavel's comments;\n>\n> (actual time=85837.038..85896.091 rows=15 loops=1)\n>\n> That's 85 seconds on a sequence scan to return the first tuple. The table is not bloated by any chance is it?\n\nWouldn't this be e.g. normal if the distribution of values would\nbe uneven, e.g. A IS NULL AND B = '21' not near the beginning of\nthe table data?\n\nBy the way, my newbie eyes on \"pg_stats\" seem to tell me that PG\ndoesn't collect/use statistics about the distribution of the\ndata, am I wrong? E.g. in that situation, when a few A IS NULL\nAND B = '21' rows move from the beginning to the end of the table\ndata, a seqscan becomes a totally different story.. 
(the\ncorrelation changes, but may not change a lot if only a few rows\nmove).\n\nHowever, I cannot reproduce a similar situation to David's.\n\ngc=# create table foo ( bar int, baz text );\nCREATE TABLE\ngc=# insert into foo ( select generate_series(0, 10000000) / 1000000, case when random() < 0.05 then 'Today Alcatel-Lucent has announced that P******* C**** is appointed non-executive Chairman and B** V******** is appointed Chief Executive Officer.' else null end );\nINSERT 0 10000001\ngc=# create index foobar on foo(bar);\nCREATE INDEX\ngc=# create index foobaz on foo(baz);\nCREATE INDEX\ngc=# explain select * from foo where baz is null and bar = '8';\n QUERY PLAN \n---------------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=1297.96..1783.17 rows=250 width=36)\n Recheck Cond: ((bar = 8) AND (baz IS NULL))\n -> BitmapAnd (cost=1297.96..1297.96 rows=250 width=0)\n -> Bitmap Index Scan on foobar (cost=0.00..595.69 rows=50000 width=0)\n Index Cond: (bar = 8)\n -> Bitmap Index Scan on foobaz (cost=0.00..701.90 rows=50000 width=0)\n Index Cond: (baz IS NULL)\n(7 rows)\n\ngc=# analyze foo;\nANALYZE\ngc=# explain select * from foo where baz is null and bar = '8';\n QUERY PLAN \n------------------------------------------------------------------------------\n Index Scan using foobar on foo (cost=0.00..30398.66 rows=1079089 width=154)\n Index Cond: (bar = 8)\n Filter: (baz IS NULL)\n(3 rows)\n\nThis is using pg 8.3.1 and:\n\nrandom_page_cost = 2\neffective_cache_size = 256MB\nshared_buffers = 384MB\n\nDavid, is there relevant information you've forgot to tell:\n\n- any other columns in your table?\n- is table bloated?\n- has table never been analyzed?\n- what version of postgresql? what overriden configuration?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Tue, 02 Sep 2008 11:06:48 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\n\n>>>> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual time=85837.043..85896.140 rows=15 loops=1)\"\n>>>> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580 width=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n>>>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>>>> \"Total runtime: 85896.214 ms\"\n\nPostgres does collect and use statistics about what fraction of the \"A\" column\nis null. It also collects and uses statistics about what fraction of the \"B\"\ncolumn is 21 (using a histogram). And it does take the LIMIT into account.\n\nI think the other poster might well be right about this table being extremely\nbloated. You could test by running and posting the results of:\n\nVACUUM VERBOSE my_table\n\nWhat it doesn't collect is where in the table those records are -- so if there\nare a lot of them then it might use a sequential scan regardless of whether\nthey're at the beginning or end of the table. That seems unlikely to be the\nproblem though.\n\nThe other thing it doesn't collect is how many of the B=21 records have null\nAs. So if a large percentage of the table has A as null then it will assume\nthat's true for the B=21 records and if there are a lot of B=21 records then\nit will assume a sequential scan will find matches quickly. 
If in fact the two\ncolumns are highly correlated and B=21 records almost never have A null\nwhereas records with other values of B have lots of null values then Postgres\nmight make a bad decision here.\n\nAlso, it only has the statitics for B=21 via a histogram. If the distribution\nof B is highly skewed so that, for example values between 20 and 25 are very\ncommon but B=21 happens to be quite rare then Postgres might get a bad\nestimate here. You could improve this by raising the statistics target for the\nB column and re-analyzing.\n\nThat brings up another question -- when was the last time this table was\nanalyzed?\n\nWhat estimates and actual results does postgres get for simple queries like:\n\nEXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE A IS NULL;\nEXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE B=21;\nEXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE A IS NULL AND B=21;\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Tue, 02 Sep 2008 10:44:54 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Yes I inserted values in big batches according to a single value of 'B', so\nindeed a sequence scan may have to scan forward many millions of rows before\nfinding the required value.\n\nI have been doing regular analyse commands on my table. I don't think my\ntable is bloated, I haven't been performing updates. However I'm doing a\nvacuum analyse now and I'll see if that makes any difference.\n\nI am using postgres 8.3.1 with a default install on windows - no tweaks to\nthe configuration at all.\n\nThere are many other columns in my table, but none of them are used in this\nquery.\n\nGuillaume in your example you didn't add the limit clause? Postgres chooses\nthe correct index in my case without the limit clause, the problem is with\nthe limit clause. One other difference with your example is both my columns\nare varchar columns, not integer and text, I don't know if that would make a\ndifference.\n\n From looking at the plans, it seems to be postgres is assuming it will only\nhave to sequentially scan 15 rows, which is not true in my case because\ncolumn B is not distributed randomly (nor will it be in production). Would\npostgres not be best to ignore the limit when deciding the best index to use\n- in this simple query wouldn't the best plan to use always be the same\nwith or without a limit?\n\nThanks to all of you for your interest in my problem\nDavid\n\n-----Original Message-----\nFrom: Guillaume Cottenceau [mailto:[email protected]] \nSent: 02 September 2008 10:07\nTo: David West; [email protected]\nSubject: Re: [PERFORM] limit clause breaks query planner?\n\nWouldn't this be e.g. normal if the distribution of values would\nbe uneven, e.g. A IS NULL AND B = '21' not near the beginning of\nthe table data?\n\nBy the way, my newbie eyes on \"pg_stats\" seem to tell me that PG\ndoesn't collect/use statistics about the distribution of the\ndata, am I wrong? E.g. in that situation, when a few A IS NULL\nAND B = '21' rows move from the beginning to the end of the table\ndata, a seqscan becomes a totally different story.. 
(the\ncorrelation changes, but may not change a lot if only a few rows\nmove).\n\nHowever, I cannot reproduce a similar situation to David's.\n\ngc=# create table foo ( bar int, baz text );\nCREATE TABLE\ngc=# insert into foo ( select generate_series(0, 10000000) / 1000000, case\nwhen random() < 0.05 then 'Today Alcatel-Lucent has announced that P*******\nC**** is appointed non-executive Chairman and B** V******** is appointed\nChief Executive Officer.' else null end );\nINSERT 0 10000001\ngc=# create index foobar on foo(bar);\nCREATE INDEX\ngc=# create index foobaz on foo(baz);\nCREATE INDEX\ngc=# explain select * from foo where baz is null and bar = '8';\n QUERY PLAN\n\n----------------------------------------------------------------------------\n-----\n Bitmap Heap Scan on foo (cost=1297.96..1783.17 rows=250 width=36)\n Recheck Cond: ((bar = 8) AND (baz IS NULL))\n -> BitmapAnd (cost=1297.96..1297.96 rows=250 width=0)\n -> Bitmap Index Scan on foobar (cost=0.00..595.69 rows=50000\nwidth=0)\n Index Cond: (bar = 8)\n -> Bitmap Index Scan on foobaz (cost=0.00..701.90 rows=50000\nwidth=0)\n Index Cond: (baz IS NULL)\n(7 rows)\n\ngc=# analyze foo;\nANALYZE\ngc=# explain select * from foo where baz is null and bar = '8';\n QUERY PLAN\n\n----------------------------------------------------------------------------\n--\n Index Scan using foobar on foo (cost=0.00..30398.66 rows=1079089\nwidth=154)\n Index Cond: (bar = 8)\n Filter: (baz IS NULL)\n(3 rows)\n\nThis is using pg 8.3.1 and:\n\nrandom_page_cost = 2\neffective_cache_size = 256MB\nshared_buffers = 384MB\n\nDavid, is there relevant information you've forgot to tell:\n\n- any other columns in your table?\n- is table bloated?\n- has table never been analyzed?\n- what version of postgresql? what overriden configuration?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n\n", "msg_date": "Tue, 2 Sep 2008 10:48:07 +0100", "msg_from": "\"David West\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\"David West\" <david.west 'at' cusppoint.com> writes:\n\n> Yes I inserted values in big batches according to a single value of 'B', so\n> indeed a sequence scan may have to scan forward many millions of rows before\n> finding the required value.\n\nThat may well be why the seqscan is so slow to give your results;\nthat said, it doesn't explain why the indexscane is not\npreferred.\n\n> I have been doing regular analyse commands on my table. I don't think my\n\nLike, recently? Can you post the stats?\n\ngc=# select * from pg_stats where tablename = 'foo';\n\nYou should try to ANALYZE again and see if that makes a\ndifference, to be sure.\n\n> table is bloated, I haven't been performing updates. However I'm doing a\n\nMaybe you've been DELETE'ing then INSERT'ing some? That creates\nbloat too. Btw, don't forget to prefer TRUNCATE to remove\neverything from the table, and ANALYZE after large INSERT's.\n\n> vacuum analyse now and I'll see if that makes any difference.\n\nA single VACUUM may not report how bloated your table is, if it's\nbeen VACUUM'ed some before, but not frequently enough. If you\nhave time for it, and you can afford a full lock on the table,\nonly a VACUUM FULL VERBOSE will tell you the previous bloat (the\n\"table .. 
truncated to ..\" line IIRC).\n\n> I am using postgres 8.3.1 with a default install on windows - no tweaks to\n> the configuration at all.\n\nWith a default install, the free space map settings may well be\ntoo small for tracking free space on a table as large as 10M\nrows. Performing VACUUM VERBOSE on database 'template1' will show\nyou interesting information about the current and ideal FSM\nsettings, at the end of the output. Something like:\n\n INFO: free space map contains 37709 pages in 276 relations\n DETAIL: A total of 42080 page slots are in use (including overhead).\n 42080 page slots are required to track all free space.\n Current limits are: 204800 page slots, 1000 relations, using 1265 kB.\n\nOf course, this also depends on the frequency of your VACUUMing\n(if autovacuuming is not configured or badly configured) against\nthe frequency of your UPDATE's and DELETE's.\n\n> There are many other columns in my table, but none of them are used in this\n> query.\n\nCan you show us the table definition? I am too ignorant in PG to\nknow if that would make a difference, but it might ring a bell\nfor others.. AFAIK, more column data may mean larger resultsets\nand may change the plan (though \"width=128\" in the log of your\nexplanation wouldn't mean a lot of data are stored per row).\n\n> Guillaume in your example you didn't add the limit clause? Postgres chooses\n> the correct index in my case without the limit clause, the problem is with\n> the limit clause.\n\nDuh, forgot about that, sorry! But I did try it and it was the same.\n\ngc=# explain select * from foo where baz is null and bar = '8' limit 15;\n QUERY PLAN \n------------------------------------------------------------------------------------\n Limit (cost=0.00..0.42 rows=15 width=154)\n -> Index Scan using foobar on foo (cost=0.00..30398.66 rows=1079089 width=154)\n Index Cond: (bar = 8)\n Filter: (baz IS NULL)\n(4 rows)\n\n> One other difference with your example is both my columns are\n> varchar columns, not integer and text, I don't know if that\n> would make a difference.\n\nIt is always useful to know as much about the actual table\ndefinition and data, to isolate a performance problem... I know\nit may clash with privacy :/ but that kind of information\nprobably will not, isn't it?\n\nWith:\n\ngc=# create table foo ( bar varchar(64), baz varchar(256) );\n\nit doesn't make a difference yet:\n\ngc=# explain select * from foo where baz is null and bar = '8';\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using foobar on foo (cost=0.00..27450.05 rows=982092 width=149)\n Index Cond: ((bar)::text = '8'::text)\n Filter: (baz IS NULL)\n(3 rows)\n\ngc=# explain select * from foo where baz is null and bar = '8' limit 15;\n QUERY PLAN \n-----------------------------------------------------------------------------------\n Limit (cost=0.00..0.42 rows=15 width=149)\n -> Index Scan using foobar on foo (cost=0.00..27450.05 rows=982092 width=149)\n Index Cond: ((bar)::text = '8'::text)\n Filter: (baz IS NULL)\n(4 rows)\n\nBtw, it would help if you could reproduce my test scenario and\nsee if PG uses \"correctly\" the indexscan. It is better to try on\nyour installation, to take care of any configuration/whatever\nvariation which may create your problem.\n\n>>From looking at the plans, it seems to be postgres is assuming it will only\n> have to sequentially scan 15 rows, which is not true in my case because\n> column B is not distributed randomly (nor will it be in production). 
Would\n\nWhy do you say that? The explanation seems to rather tell that it\n(correctly) assumes that the seqscan would bring up about 1M rows\nfor the selected values of A and B, and then it will limit to 15\nrows.\n\n> postgres not be best to ignore the limit when deciding the best index to use\n> - in this simple query wouldn't the best plan to use always be the same\n> with or without a limit?\n\nI am not too sure, but I'd say no: when PG considers the LIMIT,\nthen it knows that (potentially) less rows are to be actually\nused from the inner resultset, so a different plan may be\ndevised.\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Tue, 02 Sep 2008 12:15:57 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Here is the results of 'vacuum analyse verbose' on my table:\n\nINFO: vacuuming \"public.jbpm_taskinstance\"\nINFO: scanned index \"jbpm_taskinstance_pkey\" to remove 928153 row versions\nDETAIL: CPU 0.70s/2.40u sec elapsed 46.49 sec.\nINFO: scanned index \"idx_tskinst_tminst\" to remove 928153 row versions\nDETAIL: CPU 0.78s/2.34u sec elapsed 88.99 sec.\nINFO: scanned index \"idx_tskinst_slinst\" to remove 928153 row versions\nDETAIL: CPU 0.63s/2.37u sec elapsed 92.54 sec.\nINFO: scanned index \"idx_taskinst_tokn\" to remove 928153 row versions\nDETAIL: CPU 0.99s/2.30u sec elapsed 110.29 sec.\nINFO: scanned index \"idx_taskinst_tsk\" to remove 928153 row versions\nDETAIL: CPU 0.92s/2.63u sec elapsed 89.16 sec.\nINFO: scanned index \"idx_pooled_actor\" to remove 928153 row versions\nDETAIL: CPU 0.32s/1.65u sec elapsed 2.56 sec.\nINFO: scanned index \"idx_task_actorid\" to remove 928153 row versions\nDETAIL: CPU 0.09s/1.88u sec elapsed 2.69 sec.\nINFO: \"jbpm_taskinstance\": removed 928153 row versions in 13685 pages\nDETAIL: CPU 0.84s/0.82u sec elapsed 26.42 sec.\nINFO: index \"jbpm_taskinstance_pkey\" now contains 7555748 row versions in\n62090 pages\nDETAIL: 927985 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.03 sec.\nINFO: index \"idx_tskinst_tminst\" now contains 7555748 row versions in 65767\npages\n\nAfterwards I ran a 'vacuum full verbose'\n\nINFO: vacuuming \"public.jbpm_taskinstance\"\nINFO: \"jbpm_taskinstance\": found 0 removable, 7555748 nonremovable row\nversions in 166156 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 88 to 209 bytes long.\nThere were 8470471 unused item pointers.\nTotal free space (including removable row versions) is 208149116 bytes.\n9445 pages are or will become empty, including 0 at the end of the table.\n119104 pages containing 206008504 free bytes are potential move\ndestinations.\nCPU 2.44s/1.60u sec elapsed 127.89 sec.\nINFO: index \"jbpm_taskinstance_pkey\" now contains 7555748 row versions in\n62090 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.87s/2.16u sec elapsed 120.81 sec.\nINFO: index \"idx_tskinst_tminst\" now contains 7555748 row versions in 65767\npages\nDETAIL: 0 index row versions were removed.\n26024 index pages have been deleted, 26024 are currently reusable.\nCPU 0.79s/1.95u sec elapsed 103.52 sec.\nINFO: index \"idx_tskinst_slinst\" now contains 7555748 row versions in 56031\npages\nDETAIL: 0 
index row versions were removed.\n28343 index pages have been deleted, 28343 are currently reusable.\nCPU 0.62s/1.93u sec elapsed 99.21 sec.\nINFO: index \"idx_taskinst_tokn\" now contains 7555748 row versions in 65758\npages\nDETAIL: 0 index row versions were removed.\n26012 index pages have been deleted, 26012 are currently reusable.\nCPU 1.10s/2.18u sec elapsed 108.29 sec.\nINFO: index \"idx_taskinst_tsk\" now contains 7555748 row versions in 64516\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.01s/1.73u sec elapsed 64.73 sec.\nINFO: index \"idx_pooled_actor\" now contains 7555748 row versions in 20896\npages\nDETAIL: 0 index row versions were removed.\n136 index pages have been deleted, 136 are currently reusable.\nCPU 0.26s/1.57u sec elapsed 3.01 sec.\nINFO: index \"idx_task_actorid\" now contains 7555748 row versions in 20885\npages\nDETAIL: 0 index row versions were removed.\n121 index pages have been deleted, 121 are currently reusable.\nCPU 0.23s/1.52u sec elapsed 2.77 sec.\nINFO: \"jbpm_taskinstance\": moved 1374243 row versions, truncated 166156 to\n140279 pages\nDETAIL: CPU 26.50s/138.35u sec elapsed 735.02 sec.\nINFO: index \"jbpm_taskinstance_pkey\" now contains 7555748 row versions in\n62090 pages\nDETAIL: 1374243 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.04s/1.38u sec elapsed 117.72 sec.\nINFO: index \"idx_tskinst_tminst\" now contains 7555748 row versions in 65767\npages\nDETAIL: 1374243 index row versions were removed.\n26024 index pages have been deleted, 26024 are currently reusable.\nCPU 1.37s/1.01u sec elapsed 123.56 sec.\nINFO: index \"idx_tskinst_slinst\" now contains 7555748 row versions in 56031\npages\nDETAIL: 1374243 index row versions were removed.\n28560 index pages have been deleted, 28560 are currently reusable.\nCPU 1.20s/1.27u sec elapsed 105.67 sec.\nINFO: index \"idx_taskinst_tokn\" now contains 7555748 row versions in 65758\npages\nDETAIL: 1374243 index row versions were removed.\n26012 index pages have been deleted, 26012 are currently reusable.\nCPU 1.29s/0.96u sec elapsed 112.62 sec.\nINFO: index \"idx_taskinst_tsk\" now contains 7555748 row versions in 64516\npages\nDETAIL: 1374243 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.48s/1.12u sec elapsed 70.56 sec.\nINFO: index \"idx_pooled_actor\" now contains 7555748 row versions in 25534\npages\nDETAIL: 1374243 index row versions were removed.\n3769 index pages have been deleted, 3769 are currently reusable.\nCPU 0.48s/0.82u sec elapsed 6.89 sec.\nINFO: index \"idx_task_actorid\" now contains 7555748 row versions in 25545\npages\nDETAIL: 1374243 index row versions were removed.\n3790 index pages have been deleted, 3790 are currently reusable.\nCPU 0.37s/1.24u sec elapsed 7.93 sec.\nINFO: vacuuming \"pg_toast.pg_toast_560501\"\nINFO: \"pg_toast_560501\": found 0 removable, 0 nonremovable row versions in\n0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_560501_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are 
currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nQuery returned successfully with no result in 1910948 ms.\n\nSo it looks to me like it was bloated by about 25%. Performance seems\nbetter for 'some' values of column B (I believe all the vacuumed rows were\nat the beginning), but I've still done the same query which can take over a\nminute in some cases - depending on caching etc. It is still choosing\nsequential scan.\n\nHere are the queries you requested:\nEXPLAIN ANALYZE SELECT count(*) FROM jbpm_taskinstance WHERE actorid_ IS\nNULL;\n\"Aggregate (cost=234297.69..234297.70 rows=1 width=0) (actual\ntime=109648.328..109648.330 rows=1 loops=1)\"\n\" -> Seq Scan on jbpm_taskinstance (cost=0.00..215836.48 rows=7384484\nwidth=0) (actual time=13.684..99002.279 rows=7315726 loops=1)\"\n\" Filter: (actorid_ IS NULL)\"\n\"Total runtime: 109648.403 ms\"\n\nEXPLAIN ANALYZE SELECT count(*) FROM jbpm_taskinstance WHERE\npooledactor_='21';\n\"Aggregate (cost=180929.77..180929.78 rows=1 width=0) (actual\ntime=6739.215..6739.217 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on jbpm_taskinstance (cost=23839.23..178127.84\nrows=1120769 width=0) (actual time=633.808..5194.672 rows=1020084 loops=1)\"\n\" Recheck Cond: ((pooledactor_)::text = '21'::text)\"\n\" -> Bitmap Index Scan on idx_pooled_actor (cost=0.00..23559.04\nrows=1120769 width=0) (actual time=612.546..612.546 rows=1020084 loops=1)\"\n\" Index Cond: ((pooledactor_)::text = '21'::text)\"\n\"Total runtime: 6739.354 ms\"\n\nEXPLAIN ANALYZE SELECT count(*) FROM jbpm_taskinstance WHERE actorid_ IS\nNULL AND pooledactor_='21';\n\"Aggregate (cost=180859.91..180859.92 rows=1 width=0) (actual\ntime=4358.316..4358.318 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on jbpm_taskinstance (cost=23832.88..178121.49\nrows=1095365 width=0) (actual time=377.206..2929.735 rows=1020062 loops=1)\"\n\" Recheck Cond: ((pooledactor_)::text = '21'::text)\"\n\" Filter: (actorid_ IS NULL)\"\n\" -> Bitmap Index Scan on idx_pooled_actor (cost=0.00..23559.04\nrows=1120769 width=0) (actual time=373.160..373.160 rows=1020084 loops=1)\"\n\" Index Cond: ((pooledactor_)::text = '21'::text)\"\n\"Total runtime: 4366.766 ms\"\n\nMany thanks,\nDavid\n\n\n\n\n\n-----Original Message-----\nFrom: Greg Stark [mailto:greg.stark enterprisedb.com] On Behalf Of Gregory\nStark\nSent: 02 September 2008 10:45\nTo: Guillaume Cottenceau\nCc: David West; [email protected]\nSubject: Re: limit clause breaks query planner?\n\n\n\n>>>> \"Limit (cost=0.00..3.68 rows=15 width=128) (actual\ntime=85837.043..85896.140 rows=15 loops=1)\"\n>>>> \" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580\nwidth=128) (actual time=85837.038..85896.091 rows=15 loops=1)\"\n>>>> \" Filter: ((A IS NULL) AND ((B)::text = '21'::text))\"\n>>>> \"Total runtime: 85896.214 ms\"\n\nPostgres does collect and use statistics about what fraction of the \"A\"\ncolumn\nis null. It also collects and uses statistics about what fraction of the \"B\"\ncolumn is 21 (using a histogram). And it does take the LIMIT into account.\n\nI think the other poster might well be right about this table being\nextremely\nbloated. You could test by running and posting the results of:\n\nVACUUM VERBOSE my_table\n\nWhat it doesn't collect is where in the table those records are -- so if\nthere\nare a lot of them then it might use a sequential scan regardless of whether\nthey're at the beginning or end of the table. 
That seems unlikely to be the\nproblem though.\n\nThe other thing it doesn't collect is how many of the B=21 records have null\nAs. So if a large percentage of the table has A as null then it will assume\nthat's true for the B=21 records and if there are a lot of B=21 records then\nit will assume a sequential scan will find matches quickly. If in fact the\ntwo\ncolumns are highly correlated and B=21 records almost never have A null\nwhereas records with other values of B have lots of null values then\nPostgres\nmight make a bad decision here.\n\nAlso, it only has the statitics for B=21 via a histogram. If the\ndistribution\nof B is highly skewed so that, for example values between 20 and 25 are very\ncommon but B=21 happens to be quite rare then Postgres might get a bad\nestimate here. You could improve this by raising the statistics target for\nthe\nB column and re-analyzing.\n\nThat brings up another question -- when was the last time this table was\nanalyzed?\n\nWhat estimates and actual results does postgres get for simple queries like:\n\nEXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE A IS NULL;\nEXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE B=21;\nEXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE A IS NULL AND B=21;\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n\n", "msg_date": "Tue, 2 Sep 2008 12:16:32 +0100", "msg_from": "\"David West\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\n>A single VACUUM may not report how bloated your table is, if it's\n>been VACUUM'ed some before, but not frequently enough. If you\n>have time for it, and you can afford a full lock on the table,\n>only a VACUUM FULL VERBOSE will tell you the previous bloat (the\n>\"table .. 
truncated to ..\" line IIRC).\n\nHere's the output of vacuum full verbose (after running a plain vacuum\nverbose)\n\nINFO: vacuuming \"public.jbpm_taskinstance\"\nINFO: \"jbpm_taskinstance\": found 0 removable, 7555748 nonremovable row\nversions in 166156 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 88 to 209 bytes long.\nThere were 8470471 unused item pointers.\nTotal free space (including removable row versions) is 208149116 bytes.\n9445 pages are or will become empty, including 0 at the end of the table.\n119104 pages containing 206008504 free bytes are potential move\ndestinations.\nCPU 2.44s/1.60u sec elapsed 127.89 sec.\nINFO: index \"jbpm_taskinstance_pkey\" now contains 7555748 row versions in\n62090 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.87s/2.16u sec elapsed 120.81 sec.\nINFO: index \"idx_tskinst_tminst\" now contains 7555748 row versions in 65767\npages\nDETAIL: 0 index row versions were removed.\n26024 index pages have been deleted, 26024 are currently reusable.\nCPU 0.79s/1.95u sec elapsed 103.52 sec.\nINFO: index \"idx_tskinst_slinst\" now contains 7555748 row versions in 56031\npages\nDETAIL: 0 index row versions were removed.\n28343 index pages have been deleted, 28343 are currently reusable.\nCPU 0.62s/1.93u sec elapsed 99.21 sec.\nINFO: index \"idx_taskinst_tokn\" now contains 7555748 row versions in 65758\npages\nDETAIL: 0 index row versions were removed.\n26012 index pages have been deleted, 26012 are currently reusable.\nCPU 1.10s/2.18u sec elapsed 108.29 sec.\nINFO: index \"idx_taskinst_tsk\" now contains 7555748 row versions in 64516\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.01s/1.73u sec elapsed 64.73 sec.\nINFO: index \"idx_pooled_actor\" now contains 7555748 row versions in 20896\npages\nDETAIL: 0 index row versions were removed.\n136 index pages have been deleted, 136 are currently reusable.\nCPU 0.26s/1.57u sec elapsed 3.01 sec.\nINFO: index \"idx_task_actorid\" now contains 7555748 row versions in 20885\npages\nDETAIL: 0 index row versions were removed.\n121 index pages have been deleted, 121 are currently reusable.\nCPU 0.23s/1.52u sec elapsed 2.77 sec.\nINFO: \"jbpm_taskinstance\": moved 1374243 row versions, truncated 166156 to\n140279 pages\nDETAIL: CPU 26.50s/138.35u sec elapsed 735.02 sec.\nINFO: index \"jbpm_taskinstance_pkey\" now contains 7555748 row versions in\n62090 pages\nDETAIL: 1374243 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.04s/1.38u sec elapsed 117.72 sec.\nINFO: index \"idx_tskinst_tminst\" now contains 7555748 row versions in 65767\npages\nDETAIL: 1374243 index row versions were removed.\n26024 index pages have been deleted, 26024 are currently reusable.\nCPU 1.37s/1.01u sec elapsed 123.56 sec.\nINFO: index \"idx_tskinst_slinst\" now contains 7555748 row versions in 56031\npages\nDETAIL: 1374243 index row versions were removed.\n28560 index pages have been deleted, 28560 are currently reusable.\nCPU 1.20s/1.27u sec elapsed 105.67 sec.\nINFO: index \"idx_taskinst_tokn\" now contains 7555748 row versions in 65758\npages\nDETAIL: 1374243 index row versions were removed.\n26012 index pages have been deleted, 26012 are currently reusable.\nCPU 1.29s/0.96u sec elapsed 112.62 sec.\nINFO: index \"idx_taskinst_tsk\" now contains 7555748 row versions in 64516\npages\nDETAIL: 1374243 index row 
versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.48s/1.12u sec elapsed 70.56 sec.\nINFO: index \"idx_pooled_actor\" now contains 7555748 row versions in 25534\npages\nDETAIL: 1374243 index row versions were removed.\n3769 index pages have been deleted, 3769 are currently reusable.\nCPU 0.48s/0.82u sec elapsed 6.89 sec.\nINFO: index \"idx_task_actorid\" now contains 7555748 row versions in 25545\npages\nDETAIL: 1374243 index row versions were removed.\n3790 index pages have been deleted, 3790 are currently reusable.\nCPU 0.37s/1.24u sec elapsed 7.93 sec.\nINFO: vacuuming \"pg_toast.pg_toast_560501\"\nINFO: \"pg_toast_560501\": found 0 removable, 0 nonremovable row versions in\n0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_560501_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\n>Can you show us the table definition? I am too ignorant in PG to\n>know if that would make a difference, but it might ring a bell\n>for others.. AFAIK, more column data may mean larger resultsets\n>and may change the plan (though \"width=128\" in the log of your\n>explanation wouldn't mean a lot of data are stored per row).\n\nYep, the table is from the jboss jbpm (business process management) schema.\nIt has one modification, I've added the pooledactor_ column (column B as\nI've been referring to it until now) to remove a many-to-many relationship\nto simplify my query. Column A as I've been referring to is the actorid_\ncolumn. 
Here's the ddl:\n\nCREATE TABLE jbpm_taskinstance\n(\n id_ bigint NOT NULL,\n class_ character(1) NOT NULL,\n version_ integer NOT NULL,\n name_ character varying(255),\n description_ character varying(4000),\n actorid_ character varying(255),\n create_ timestamp without time zone,\n start_ timestamp without time zone,\n end_ timestamp without time zone,\n duedate_ timestamp without time zone,\n priority_ integer,\n iscancelled_ boolean,\n issuspended_ boolean,\n isopen_ boolean,\n issignalling_ boolean,\n isblocking_ boolean,\n task_ bigint,\n token_ bigint,\n procinst_ bigint,\n swimlaninstance_ bigint,\n taskmgmtinstance_ bigint,\n pooledactor_ character varying(255),\n processname_ character varying(255),\n CONSTRAINT jbpm_taskinstance_pkey PRIMARY KEY (id_),\n CONSTRAINT fk_taskinst_slinst FOREIGN KEY (swimlaninstance_)\n REFERENCES jbpm_swimlaneinstance (id_) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_taskinst_task FOREIGN KEY (task_)\n REFERENCES jbpm_task (id_) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_taskinst_tminst FOREIGN KEY (taskmgmtinstance_)\n REFERENCES jbpm_moduleinstance (id_) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_taskinst_token FOREIGN KEY (token_)\n REFERENCES jbpm_token (id_) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_tskins_prcins FOREIGN KEY (procinst_)\n REFERENCES jbpm_processinstance (id_) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (OIDS=FALSE);\n\n-- Index: idx_pooled_actor\n\n-- DROP INDEX idx_pooled_actor;\n\nCREATE INDEX idx_pooled_actor\n ON jbpm_taskinstance\n USING btree\n (pooledactor_);\n\n-- Index: idx_taskinst_tokn\n\n-- DROP INDEX idx_taskinst_tokn;\n\nCREATE INDEX idx_taskinst_tokn\n ON jbpm_taskinstance\n USING btree\n (token_);\n\n-- Index: idx_taskinst_tsk\n\n-- DROP INDEX idx_taskinst_tsk;\n\nCREATE INDEX idx_taskinst_tsk\n ON jbpm_taskinstance\n USING btree\n (task_, procinst_);\n\n-- Index: idx_tskinst_slinst\n\n-- DROP INDEX idx_tskinst_slinst;\n\nCREATE INDEX idx_tskinst_slinst\n ON jbpm_taskinstance\n USING btree\n (swimlaninstance_);\n\n-- Index: idx_tskinst_tminst\n\n-- DROP INDEX idx_tskinst_tminst;\n\nCREATE INDEX idx_tskinst_tminst\n ON jbpm_taskinstance\n USING btree\n (taskmgmtinstance_);\n\n\n\n>Btw, it would help if you could reproduce my test scenario and\n>see if PG uses \"correctly\" the indexscan. It is better to try on\n>your installation, to take care of any configuration/whatever\n>variation which may create your problem.\n\nI have tried your example and I get the same results as you.\n\ndb=# explain select * from foo where baz is null and bar = '8' limit 15;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----\n---\n Limit (cost=0.00..0.53 rows=15 width=154)\n -> Index Scan using foobar on foo (cost=0.00..33159.59 rows=934389\nwidth=15\n4)\n Index Cond: (bar = 8)\n Filter: (baz IS NULL)\n(4 rows)\n\ndb=# drop index foobar;\nDROP INDEX\ndb=# explain select * from foo where baz is null and bar = '8' limit 15;\n\n QUERY PLAN\n---------------------------------------------------------------------\n Limit (cost=0.00..2.87 rows=15 width=154)\n -> Seq Scan on foo (cost=0.00..178593.35 rows=934389 width=154)\n Filter: ((baz IS NULL) AND (bar = 8))\n(3 rows)\n\nIt's choosing the index because of a cost of 0.53 vs a cost of 2.87 for\nsequential scan. I wonder why in my real tables the index scan cost is\nhigher than the sequential scan cost. 
Perhaps because of the extra width of\nmy rows?\n\n>> From looking at the plans, it seems to be postgres is assuming it will \n>> only\n>> have to sequentially scan 15 rows, which is not true in my case \n>> because column B is not distributed randomly (nor will it be in \n>> production). Would\n>\n>Why do you say that? The explanation seems to rather tell that it\n>(correctly) assumes that the seqscan would bring up about 1M rows for the\nselected values of A and B, and then it will limit to 15 rows.\n\nI say that because the plan gives a really really low number (3.21) for the\nestimated cost after the limit on sequential scan:\n\nSelect * from JBPM_TASKINSTANCE this_ where actorid_ is null and\nthis_.POOLEDACTOR_ in ('21') limit 15\n\"Limit (cost=0.00..3.21 rows=15 width=128) (actual\ntime=84133.211..84187.247 rows=15 loops=1)\"\n\" -> Seq Scan on jbpm_taskinstance this_ (cost=0.00..234725.85\nrows=1095365 width=128) (actual time=84133.205..84187.186 rows=15 loops=1)\"\n\" Filter: ((actorid_ IS NULL) AND ((pooledactor_)::text =\n'21'::text))\"\n\"Total runtime: 84187.335 ms\"\n\nIt just seems to me it is not taking into account at all that it might have\nto scan thousands or millions of rows before it gets the 15 rows it needs.\n\nIf I disable sequence scan, it equally uselessly chooses the wrong index,\nbased on a far too low estimated cost of 5.27:\nSet enable_seqscan=false\n\"Limit (cost=0.00..5.27 rows=15 width=128) (actual\ntime=120183.610..120183.707 rows=15 loops=1)\"\n\" -> Index Scan using idx_task_actorid on jbpm_taskinstance this_\n(cost=0.00..384657.95 rows=1095365 width=128) (actual\ntime=120183.604..120183.653 rows=15 loops=1)\"\n\" Index Cond: (actorid_ IS NULL)\"\n\" Filter: ((pooledactor_)::text = '21'::text)\"\n\"Total runtime: 120183.788 ms\"\n\n\n\n\n\nMany thanks\nDavid\n\n", "msg_date": "Tue, 2 Sep 2008 12:53:02 +0100", "msg_from": "\"David West\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\"David West\" <david.west 'at' cusppoint.com> writes:\n\n> INFO: \"jbpm_taskinstance\": moved 1374243 row versions, truncated 166156 to\n> 140279 pages\n\nnothing which would explain so much planning off :/\n\n> Yep, the table is from the jboss jbpm (business process management) schema.\n\nI've went to that kind of test then, but it didn't help much:\n\n create table foo ( bar character varying(255), baz character varying(255),\n id_ bigint NOT NULL,\n class_ character(1) NOT NULL,\n version_ integer NOT NULL,\n name_ character varying(255),\n description_ character varying(4000),\n create_ timestamp without time zone,\n start_ timestamp without time zone,\n end_ timestamp without time zone,\n duedate_ timestamp without time zone,\n priority_ integer,\n iscancelled_ boolean,\n issuspended_ boolean,\n isopen_ boolean,\n issignalling_ boolean,\n isblocking_ boolean,\n task_ bigint,\n token_ bigint,\n procinst_ bigint,\n swimlaninstance_ bigint,\n taskmgmtinstance_ bigint,\n processname_ character varying(255) );\n \n insert into foo ( select generate_series(0, 10000000) / 1000000, case when random() < 0.05 then 'Today Alcatel-Lucent has announced that Philippe Camus is appointed non-executive Chairman and Ben Verwaayen is appointed Chief Executive Officer.' 
else null end, 1, 'a', 1 );\n \n create index foobaz on foo(baz);\n create index foobar on foo(bar);\n analyze foo;\n\nEstimated costs still look correct on my side:\n\n gc=# explain select * from foo where baz is null and bar in ('8') limit 15;\n QUERY PLAN \n ------------------------------------------------------------------------------------\n Limit (cost=0.00..0.46 rows=15 width=1795)\n -> Index Scan using foobar on foo (cost=0.00..26311.70 rows=860238 width=1795)\n Index Cond: ((bar)::text = '8'::text)\n Filter: (baz IS NULL)\n (4 rows)\n \n gc=# set enable_indexscan = off;\n SET\n gc=# explain select * from foo where baz is null and bar in ('8') limit 15;\n QUERY PLAN \n ----------------------------------------------------------------------\n Limit (cost=0.00..3.46 rows=15 width=1795)\n -> Seq Scan on foo (cost=0.00..198396.62 rows=860238 width=1795)\n Filter: ((baz IS NULL) AND ((bar)::text = '8'::text))\n (3 rows)\n\n\n>>Btw, it would help if you could reproduce my test scenario and\n>>see if PG uses \"correctly\" the indexscan. It is better to try on\n>>your installation, to take care of any configuration/whatever\n>>variation which may create your problem.\n>\n> I have tried your example and I get the same results as you.\n>\n> db=# explain select * from foo where baz is null and bar = '8' limit 15;\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------\n> ----\n> ---\n> Limit (cost=0.00..0.53 rows=15 width=154)\n> -> Index Scan using foobar on foo (cost=0.00..33159.59 rows=934389\n> width=15\n> 4)\n> Index Cond: (bar = 8)\n> Filter: (baz IS NULL)\n> (4 rows)\n>\n> db=# drop index foobar;\n> DROP INDEX\n> db=# explain select * from foo where baz is null and bar = '8' limit 15;\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Limit (cost=0.00..2.87 rows=15 width=154)\n> -> Seq Scan on foo (cost=0.00..178593.35 rows=934389 width=154)\n> Filter: ((baz IS NULL) AND (bar = 8))\n> (3 rows)\n>\n> It's choosing the index because of a cost of 0.53 vs a cost of 2.87 for\n> sequential scan. I wonder why in my real tables the index scan cost is\n> higher than the sequential scan cost. Perhaps because of the extra width of\n> my rows?\n\nYou may try to crosscheck with the new test I've put upper, but\nI'm skeptical :/\n\nI think I've unfortunately more than reached my level of\nincompetence on that subject, sorry I wasn't able to better\nlocate your problem :/\n\n>>> From looking at the plans, it seems to be postgres is assuming it will \n>>> only\n>>> have to sequentially scan 15 rows, which is not true in my case \n>>> because column B is not distributed randomly (nor will it be in \n>>> production). Would\n>>\n>>Why do you say that? 
The explanation seems to rather tell that it\n>>(correctly) assumes that the seqscan would bring up about 1M rows for the\n> selected values of A and B, and then it will limit to 15 rows.\n>\n> I say that because the plan gives a really really low number (3.21) for the\n> estimated cost after the limit on sequential scan:\n>\n> Select * from JBPM_TASKINSTANCE this_ where actorid_ is null and\n> this_.POOLEDACTOR_ in ('21') limit 15\n> \"Limit (cost=0.00..3.21 rows=15 width=128) (actual\n> time=84133.211..84187.247 rows=15 loops=1)\"\n> \" -> Seq Scan on jbpm_taskinstance this_ (cost=0.00..234725.85\n> rows=1095365 width=128) (actual time=84133.205..84187.186 rows=15 loops=1)\"\n> \" Filter: ((actorid_ IS NULL) AND ((pooledactor_)::text =\n> '21'::text))\"\n> \"Total runtime: 84187.335 ms\"\n>\n> It just seems to me it is not taking into account at all that it might have\n> to scan thousands or millions of rows before it gets the 15 rows it needs.\n\nWell, if your have 95% of NULL actorid_ and 10% for each value of\npooledactor_, then it makes sense to assume it will have to fetch\nabout 150 rows to find the 15 awaited ones...\n\nIn the end, if PG doesn't know about data distribution, its\nbehavior makes total sense to me: 150 rows of width=128 bytes\nneed only 3 disk pages, so it shouldn't be faster than with a\nseqscan, theoretically; however, I am not sure then why on my\nsimple \"foo\" test it isn't using the same decision..\n\n\nBtw, that should not solve your problem, but normally, to help PG\nchoose indexscan often enough, it's good to reduce\nrandom_page_cost which is 4 by default (a high value for nowadays\nservers), increase effective_cache_size to what's available on\nyour machine, and potentially the shared_buffers which normally\nhelps for a good deal of matters, performance-wise.\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Tue, 02 Sep 2008 15:55:33 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Thanks very much for your help Guillaume, I appreciate you spending time on\nthis.\n\n> Well, if your have 95% of NULL actorid_ and 10% for each value of\n> pooledactor_, then it makes sense to assume it will have to fetch\n> about 150 rows to find the 15 awaited ones...\n\nThis is only true if the data is randomly distributed, which it isn't\nunfortunately.\n\nTo any postgres developers reading this, two words: planning hints :-). In\nthe face of imperfect information it's not possible to write a perfect\nplanner, please give us the ability to use the heuristic information we have\nas developers, and the planner will never know about. Even if we force\npostgres to use queries that give sub-optimal performance some of the time,\nonce we can write sufficiently performant queries, we're happy. 
In cases\nlike this where postgres gets it very wrong (and this is just a very simple\nquery after all), well, we're screwed.\n\nI'm going to try partitioning my database along the pooledactor_ column to\nsee if I can get reasonable performance for my purposes, even if I can't\nreach 10 million rows.\n\nThanks\nDavid\n\n-----Original Message-----\nFrom: Guillaume Cottenceau [mailto:[email protected]] \nSent: 02 September 2008 14:56\nTo: David West\nCc: [email protected]\nSubject: Re: [PERFORM] limit clause breaks query planner?\n\n\"David West\" <david.west 'at' cusppoint.com> writes:\n\n> INFO: \"jbpm_taskinstance\": moved 1374243 row versions, truncated 166156\nto\n> 140279 pages\n\nnothing which would explain so much planning off :/\n\n> Yep, the table is from the jboss jbpm (business process management)\nschema.\n\nI've went to that kind of test then, but it didn't help much:\n\n create table foo ( bar character varying(255), baz character varying(255),\n id_ bigint NOT NULL,\n class_ character(1) NOT NULL,\n version_ integer NOT NULL,\n name_ character varying(255),\n description_ character varying(4000),\n create_ timestamp without time zone,\n start_ timestamp without time zone,\n end_ timestamp without time zone,\n duedate_ timestamp without time zone,\n priority_ integer,\n iscancelled_ boolean,\n issuspended_ boolean,\n isopen_ boolean,\n issignalling_ boolean,\n isblocking_ boolean,\n task_ bigint,\n token_ bigint,\n procinst_ bigint,\n swimlaninstance_ bigint,\n taskmgmtinstance_ bigint,\n processname_ character varying(255) );\n \n insert into foo ( select generate_series(0, 10000000) / 1000000, case when\nrandom() < 0.05 then 'Today Alcatel-Lucent has announced that Philippe Camus\nis appointed non-executive Chairman and Ben Verwaayen is appointed Chief\nExecutive Officer.' else null end, 1, 'a', 1 );\n \n create index foobaz on foo(baz);\n create index foobar on foo(bar);\n analyze foo;\n\nEstimated costs still look correct on my side:\n\n gc=# explain select * from foo where baz is null and bar in ('8') limit\n15;\n QUERY PLAN\n\n \n----------------------------------------------------------------------------\n--------\n Limit (cost=0.00..0.46 rows=15 width=1795)\n -> Index Scan using foobar on foo (cost=0.00..26311.70 rows=860238\nwidth=1795)\n Index Cond: ((bar)::text = '8'::text)\n Filter: (baz IS NULL)\n (4 rows)\n \n gc=# set enable_indexscan = off;\n SET\n gc=# explain select * from foo where baz is null and bar in ('8') limit\n15;\n QUERY PLAN \n ----------------------------------------------------------------------\n Limit (cost=0.00..3.46 rows=15 width=1795)\n -> Seq Scan on foo (cost=0.00..198396.62 rows=860238 width=1795)\n Filter: ((baz IS NULL) AND ((bar)::text = '8'::text))\n (3 rows)\n\n\n>>Btw, it would help if you could reproduce my test scenario and\n>>see if PG uses \"correctly\" the indexscan. 
It is better to try on\n>>your installation, to take care of any configuration/whatever\n>>variation which may create your problem.\n>\n> I have tried your example and I get the same results as you.\n>\n> db=# explain select * from foo where baz is null and bar = '8' limit 15;\n>\n> QUERY PLAN\n>\n>\n----------------------------------------------------------------------------\n> ----\n> ---\n> Limit (cost=0.00..0.53 rows=15 width=154)\n> -> Index Scan using foobar on foo (cost=0.00..33159.59 rows=934389\n> width=15\n> 4)\n> Index Cond: (bar = 8)\n> Filter: (baz IS NULL)\n> (4 rows)\n>\n> db=# drop index foobar;\n> DROP INDEX\n> db=# explain select * from foo where baz is null and bar = '8' limit 15;\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Limit (cost=0.00..2.87 rows=15 width=154)\n> -> Seq Scan on foo (cost=0.00..178593.35 rows=934389 width=154)\n> Filter: ((baz IS NULL) AND (bar = 8))\n> (3 rows)\n>\n> It's choosing the index because of a cost of 0.53 vs a cost of 2.87 for\n> sequential scan. I wonder why in my real tables the index scan cost is\n> higher than the sequential scan cost. Perhaps because of the extra width\nof\n> my rows?\n\nYou may try to crosscheck with the new test I've put upper, but\nI'm skeptical :/\n\nI think I've unfortunately more than reached my level of\nincompetence on that subject, sorry I wasn't able to better\nlocate your problem :/\n\n>>> From looking at the plans, it seems to be postgres is assuming it will \n>>> only\n>>> have to sequentially scan 15 rows, which is not true in my case \n>>> because column B is not distributed randomly (nor will it be in \n>>> production). Would\n>>\n>>Why do you say that? The explanation seems to rather tell that it\n>>(correctly) assumes that the seqscan would bring up about 1M rows for the\n> selected values of A and B, and then it will limit to 15 rows.\n>\n> I say that because the plan gives a really really low number (3.21) for\nthe\n> estimated cost after the limit on sequential scan:\n>\n> Select * from JBPM_TASKINSTANCE this_ where actorid_ is null and\n> this_.POOLEDACTOR_ in ('21') limit 15\n> \"Limit (cost=0.00..3.21 rows=15 width=128) (actual\n> time=84133.211..84187.247 rows=15 loops=1)\"\n> \" -> Seq Scan on jbpm_taskinstance this_ (cost=0.00..234725.85\n> rows=1095365 width=128) (actual time=84133.205..84187.186 rows=15\nloops=1)\"\n> \" Filter: ((actorid_ IS NULL) AND ((pooledactor_)::text =\n> '21'::text))\"\n> \"Total runtime: 84187.335 ms\"\n>\n> It just seems to me it is not taking into account at all that it might\nhave\n> to scan thousands or millions of rows before it gets the 15 rows it needs.\n\nWell, if your have 95% of NULL actorid_ and 10% for each value of\npooledactor_, then it makes sense to assume it will have to fetch\nabout 150 rows to find the 15 awaited ones...\n\nIn the end, if PG doesn't know about data distribution, its\nbehavior makes total sense to me: 150 rows of width=128 bytes\nneed only 3 disk pages, so it shouldn't be faster than with a\nseqscan, theoretically; however, I am not sure then why on my\nsimple \"foo\" test it isn't using the same decision..\n\n\nBtw, that should not solve your problem, but normally, to help PG\nchoose indexscan often enough, it's good to reduce\nrandom_page_cost which is 4 by default (a high value for nowadays\nservers), increase effective_cache_size to what's available on\nyour machine, and potentially the shared_buffers which normally\nhelps for a good deal of matters, performance-wise.\n\n-- 
\nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n\n", "msg_date": "Tue, 2 Sep 2008 15:09:00 +0100", "msg_from": "\"David West\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" } ]
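A short sketch of the session-level experiment suggested in the thread above, using only figures that already appear in the discussion (500MB and random_page_cost between 2 and 3); the values are illustrative starting points, not measured recommendations:

-- try the planner cost knobs in one session before changing postgresql.conf
SET effective_cache_size = '500MB';  -- rough size of the OS cache available to PostgreSQL
SET random_page_cost = 2;            -- down from the 8.3 default of 4
EXPLAIN ANALYZE
SELECT *
FROM jbpm_taskinstance
WHERE actorid_ IS NULL
  AND pooledactor_ = '21'
LIMIT 15;

If the index or bitmap plan wins under these settings, the same values can then be made permanent in postgresql.conf.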
[ { "msg_contents": "for a short test purpose, I would like to see what queries are running and how long each of them takes.....by reconfiguring postgres.conf on the server level.\n\nlog_statement = 'all' is giving me the query statements.. but I don't know where I can turn \"timing\" on just like what I can run from the command line \"\\timing'....to measure how long each of the queries takes to finish...\n\nThanks,\nJessica\n\n\n\n \nfor a short test purpose, I would like to see what queries are running and how long each of them takes.....by reconfiguring postgres.conf on the server level.log_statement = 'all'  is giving me the query statements.. but I don't know where I can turn \"timing\" on just like what I can run from the command line \"\\timing'....to measure how long each of the queries takes to finish...Thanks,Jessica", "msg_date": "Tue, 2 Sep 2008 10:35:54 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "logging options..." }, { "msg_contents": "Jessica Richard a �crit :\n> for a short test purpose, I would like to see what queries are running\n> and how long each of them takes.....by reconfiguring postgres.conf on\n> the server level.\n> \n> log_statement = 'all' is giving me the query statements.. but I don't\n> know where I can turn \"timing\" on just like what I can run from the\n> command line \"\\timing'....to measure how long each of the queries takes\n> to finish...\n> \n\nEither you configure log_statement to all, ddl or mod and log_duration\nto on, either you configure log_min_duration_statement to 0.\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n", "msg_date": "Tue, 02 Sep 2008 20:55:01 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: logging options..." } ]
[ { "msg_contents": "Hi David,\n\nEarly in this thread, Pavel suggested:\n\n> you should partial index\n> \n> create index foo(b) on mytable where a is null;\n\nRather, you might try the opposite partial index (where a is NOT null) as a replacement for the original unqualified index on column A. This new index will be ignored by the query you're trying to tune, but it'll be available to the other queries that filter to a non-null value of column A. (Omitting NULL from that index should be ok because you normally wouldn't want to use an index when 95% of the table's rows match the filtered key.)\n\nThen you can temporarily disable Seq Scans in your session for just this one query, as follows:\n\nSQL> create table my_table ( a int, b int ) ;\nCREATE TABLE\n\nSQL> create index idx_a_not_null on my_table ( a ) where a is not null ;\nCREATE INDEX\n\nSQL> create index idx_b on my_table ( b ) ;\nCREATE INDEX\n\nSQL> insert into my_table (a, b)\nselect\n case when random() <= 0.95 then null else i end as a,\n mod(i, 10) as b\nfrom generate_series(1, 10000000) s(i)\n;\nINSERT 0 10000000\n\nSQL> analyze my_table ;\nANALYZE\n\n\nReview the statistics available to the optimizer:\n\nSQL> select attname, null_frac, n_distinct, most_common_vals, most_common_freqs, histogram_bounds, correlation\nfrom pg_stats\nwhere tablename = 'my_table'\norder by attname\n;\n attname | null_frac | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n---------+-----------+------------+-----------------------+--------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+-------------\n a | 0.945 | -1 | | | {2771,1301755,2096051,3059786,3680728,4653531,5882434,6737141,8240245,9428702,9875768} | 1\n b | 0 | 10 | {9,4,3,1,2,6,8,5,7,0} | {0.110333,0.104,0.102333,0.100333,0.100333,0.0996667,0.0986667,0.0983333,0.096,0.09} | | 0.127294\n(2 rows)\n\nSQL> select relname, reltuples, relpages from pg_class where relname in ('my_table', 'idx_a_not_null', 'idx_b') order by relname ;\n relname | reltuples | relpages\n----------------+-----------+----------\n idx_a_not_null | 499955 | 1100\n idx_b | 1e+07 | 21946\n my_table | 1e+07 | 39492\n(3 rows)\n\n\nRun the test query, first without disabling Seq Scan to show this example reproduces the plan you're trying to avoid.\n\nSQL> explain analyze select * from my_table where a is null and b = 5 limit 15 ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.66 rows=15 width=8) (actual time=0.070..0.263 rows=15 loops=1)\n -> Seq Scan on my_table (cost=0.00..164492.00 rows=929250 width=8) (actual time=0.061..0.159 rows=15 loops=1)\n Filter: ((a IS NULL) AND (b = 5))\n Total runtime: 0.371 ms\n(4 rows)\n\n\nNow run the same query without the Seq Scan option.\n\nSQL> set enable_seqscan = false ;\nSET\n\nSQL> explain analyze select * from my_table where a is null and b = 5 limit 15 ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..46.33 rows=15 width=8) (actual time=0.081..0.232 rows=15 loops=1)\n -> Index Scan using idx_b on my_table (cost=0.00..2869913.63 rows=929250 width=8) (actual time=0.072..0.130 rows=15 loops=1)\n Index Cond: (b = 5)\n Filter: (a IS NULL)\n Total runtime: 0.341 ms\n(5 rows)\n\nSQL> reset enable_seqscan 
;\nRESET\n\n\nYes, it's unsavory to temporarily adjust a session-level parameter to tune a single query, but I don't know of a less intrusive way to avoid the SeqScan. Here's why I think it might be your simplest option:\n\nAs far as I can tell, the plan nodes for accessing the table/index are unaware of the LIMIT. The cost of the Limit node is estimated as the cost of its input row-source multiplied by the ratio of requested/returned rows. For example, from the preceding plan output:\n 2869913.63 for \"Index Scan\" upper cost * (15 row limit / 929250 returned rows) = 46.326 upper cost for the \"Limit\" node\nThe underlying plan nodes each assume that all the rows matching their filter predicates will be returned up the pipeline; the cost estimate is only reduced at the Limit node. A Seq Scan and an Index Scan (over a complete index) will both expected the same number of input rows (pg_class.reltuples). They also produce the same estimated result set, since both apply the same filters before outputing rows to the next node. So an Index Scan is always going to have a higher cost estimate than an equivalent Seq Scan returning the same result rows (unless random_page_cost is < 1). That's why I think the planner is always preferring the plan that uses a Seq Scan.\n\nHope this helps!\n\n\n", "msg_date": "Tue, 02 Sep 2008 12:38:19 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\"Matt Smiley\" <[email protected]> writes:\n> So an Index Scan is always going to have a higher cost estimate than\n> an equivalent Seq Scan returning the same result rows (unless\n> random_page_cost is < 1). That's why I think the planner is always\n> preferring the plan that uses a Seq Scan.\n\nIf that were the case, we'd never choose an indexscan at all...\n\nIt's true that a plain indexscan is not preferred for queries that will\nreturn a large fraction of the table. However, it should be willing to\nuse a bitmap scan for this query, given default cost settings (the\ndefault cost settings will cause it to prefer bitmap scan for retrieving\nup to about a third of the table, in my experience). I too am confused\nabout why it doesn't prefer that choice in the OP's example. It would\nbe interesting to alter the random_page_cost setting and see if he gets\ndifferent results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Sep 2008 17:00:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner? " }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n> \"Matt Smiley\" <[email protected]> writes:\n> > So an Index Scan is always going to have a higher cost estimate than\n> > an equivalent Seq Scan returning the same result rows (unless\n> > random_page_cost is < 1). That's why I think the planner is always\n> > preferring the plan that uses a Seq Scan.\n> \n> If that were the case, we'd never choose an indexscan at all...\n\nYou're right, that was a silly guess.\n\n> It's true that a plain indexscan is not preferred for queries that will\n> return a large fraction of the table. However, it should be willing to\n> use a bitmap scan for this query, given default cost settings (the\n> default cost settings will cause it to prefer bitmap scan for retrieving\n> up to about a third of the table, in my experience). 
I too am confused\n> about why it doesn't prefer that choice in the OP's example.\n\nIt looks like the bitmap scan has a higher cost estimate because the entire bitmap index must be built before beginning the heap scan and returning rows up the pipeline. The row-count limit can't be pushed lower than the bitmap-heap-scan like it can for the basic index-scan.\n\ntest_8_3_3=# set enable_seqscan = false ;\nSET\n\ntest_8_3_3=# set enable_indexscan = false ;\nSET\n\ntest_8_3_3=# explain analyze select * from my_table where a is null and b = 3 limit 15 ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=17070.22..17071.02 rows=15 width=8) (actual time=606.902..607.086 rows=15 loops=1)\n -> Bitmap Heap Scan on my_table (cost=17070.22..69478.96 rows=988217 width=8) (actual time=606.892..606.983 rows=15 loops=1)\n Recheck Cond: (b = 3)\n Filter: (a IS NULL)\n -> Bitmap Index Scan on idx_b (cost=0.00..16823.17 rows=1033339 width=0) (actual time=592.657..592.657 rows=1000000 loops=1)\n Index Cond: (b = 3)\n Total runtime: 607.340 ms\n(7 rows)\n\n\n> It would be interesting to alter the random_page_cost setting and see if he gets\n> different results.\n\nUsing an unmodified postgresql.conf, the cost estimate for an index-scan were so much higher than for a seqscan that random_page_cost had to be set below 0.2 before the index-scan was preferred. However, it looks like this was mainly because effective_cache_size was too small. The planner thought the cache was only 128 MB, and the size of the complete table+index was 39492 + 21946 pages * 8 KB/block = 330 MB. It makes sense for the cost estimate to be so much higher if blocks are expected to be repeatedly re-fetched from disk. 
I wonder if David's effective_cache_size is too small.\n\ntest_8_3_3=# reset all ;\nRESET\n\ntest_8_3_3=# explain analyze select * from my_table where a is null and b = 3 limit 15 ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.50 rows=15 width=8) (actual time=0.036..0.239 rows=15 loops=1)\n -> Seq Scan on my_table (cost=0.00..164492.74 rows=988217 width=8) (actual time=0.028..0.138 rows=15 loops=1)\n Filter: ((a IS NULL) AND (b = 3))\n Total runtime: 0.338 ms\n(4 rows)\n\ntest_8_3_3=# set enable_seqscan = false ;\nSET\n\ntest_8_3_3=# show random_page_cost ;\n random_page_cost\n------------------\n 4\n(1 row)\n\ntest_8_3_3=# explain analyze select * from my_table where a is null and b = 3 limit 15 ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..45.99 rows=15 width=8) (actual time=0.051..0.200 rows=15 loops=1)\n -> Index Scan using idx_b on my_table (cost=0.00..3029924.36 rows=988217 width=8) (actual time=0.043..0.100 rows=15 loops=1)\n Index Cond: (b = 3)\n Filter: (a IS NULL)\n Total runtime: 0.308 ms\n(5 rows)\n\ntest_8_3_3=# set random_page_cost = 0.19 ;\nSET\ntest_8_3_3=# explain analyze select * from my_table where a is null and b = 3 limit 15 ;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.45 rows=15 width=8) (actual time=0.050..0.201 rows=15 loops=1)\n -> Index Scan using idx_b on my_table (cost=0.00..161190.65 rows=988217 width=8) (actual time=0.042..0.097 rows=15 loops=1)\n Index Cond: (b = 3)\n Filter: (a IS NULL)\n Total runtime: 0.307 ms\n(5 rows)\n\n\nNow fix effective_cache_size and try again.\n\ntest_8_3_3=# reset all ;\nRESET\n\ntest_8_3_3=# set effective_cache_size = '500MB' ;\nSET\ntest_8_3_3=# set enable_seqscan = false ;\nSET\ntest_8_3_3=# explain analyze select * from my_table where a is null and b = 3 limit 15 ;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.78 rows=15 width=8) (actual time=0.051..0.204 rows=15 loops=1)\n -> Index Scan using idx_b on my_table (cost=0.00..183361.21 rows=988217 width=8) (actual time=0.043..0.103 rows=15 loops=1)\n Index Cond: (b = 3)\n Filter: (a IS NULL)\n Total runtime: 0.311 ms\n(5 rows)\n\nThat's better, but still not quite low enough cost estimate to beat the seqscan. Try adjusting random_page_cost again.\n\ntest_8_3_3=# set random_page_cost = 3 ;\nSET\ntest_8_3_3=# explain analyze select * from my_table where a is null and b = 3 limit 15 ;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.16 rows=15 width=8) (actual time=0.052..0.202 rows=15 loops=1)\n -> Index Scan using idx_b on my_table (cost=0.00..142053.51 rows=988217 width=8) (actual time=0.043..0.100 rows=15 loops=1)\n Index Cond: (b = 3)\n Filter: (a IS NULL)\n Total runtime: 0.311 ms\n(5 rows)\n\nThat's enough: index-scan's 142053.51 beats seqscan's 164492.74. We no longer need to set enable_seqscan=false.\n\n\n", "msg_date": "Wed, 03 Sep 2008 23:10:34 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause breaks query planner?" 
}, { "msg_contents": "\"Matt Smiley\" <[email protected]> writes:\n> \"Tom Lane\" <[email protected]> writes:\n>> default cost settings will cause it to prefer bitmap scan for retrieving\n>> up to about a third of the table, in my experience). I too am confused\n>> about why it doesn't prefer that choice in the OP's example.\n\n> It looks like the bitmap scan has a higher cost estimate because the\n> entire bitmap index must be built before beginning the heap scan and\n> returning rows up the pipeline.\n\nOh, of course. The LIMIT is small enough to make it look like we can\nget the required rows after scanning only a small part of the table,\nso the bitmap scan will lose out in the cost comparison because of its\nhigh startup cost.\n\nUltimately the only way that we could get the right answer would be if\nthe planner realized that the required rows are concentrated at the end\nof the table instead of being randomly scattered. This isn't something\nthat is considered at all right now in seqscan cost estimates. I'm not\nsure offhand whether the existing correlation stats would be of use for\nit, or whether we'd have to get ANALYZE to gather additional data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Sep 2008 11:32:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner? " }, { "msg_contents": "On Thu, 4 Sep 2008, Tom Lane wrote:\n> Ultimately the only way that we could get the right answer would be if\n> the planner realized that the required rows are concentrated at the end\n> of the table instead of being randomly scattered. This isn't something\n> that is considered at all right now in seqscan cost estimates. I'm not\n> sure offhand whether the existing correlation stats would be of use for\n> it, or whether we'd have to get ANALYZE to gather additional data.\n\nUsing the correlation would help, I think, although it may not be the best \nsolution possible. At least, if the correlation is zero, you could behave \nas currently, and if the correlation is 1, then you know (from the \nhistogram) where in the table the values are.\n\nMatthew\n\n-- \nX's book explains this very well, but, poor bloke, he did the Cambridge Maths \nTripos... -- Computer Science Lecturer\n", "msg_date": "Thu, 4 Sep 2008 16:50:00 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner? " }, { "msg_contents": "Matthew Wakeling <matthew 'at' flymine.org> writes:\n\n> On Thu, 4 Sep 2008, Tom Lane wrote:\n>> Ultimately the only way that we could get the right answer would be if\n>> the planner realized that the required rows are concentrated at the end\n>> of the table instead of being randomly scattered. This isn't something\n>> that is considered at all right now in seqscan cost estimates. I'm not\n>> sure offhand whether the existing correlation stats would be of use for\n>> it, or whether we'd have to get ANALYZE to gather additional data.\n>\n> Using the correlation would help, I think, although it may not be the\n> best solution possible. 
At least, if the correlation is zero, you\n> could behave as currently, and if the correlation is 1, then you know\n> (from the histogram) where in the table the values are.\n\nIt seems to me that if the correlation is 0.99[1], and you're\nlooking for less than 1% of rows, the expected rows may be at the\nbeginning or at the end of the heap?\n\nRef: \n[1] or even 1, as ANALYZE doesn't sample all the rows?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Thu, 04 Sep 2008 18:08:11 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "On Thu, 4 Sep 2008, Guillaume Cottenceau wrote:\n> It seems to me that if the correlation is 0.99, and you're\n> looking for less than 1% of rows, the expected rows may be at the\n> beginning or at the end of the heap?\n\nNot necessarily. Imagine for example that you have a table with 1M rows, \nand one of the fields has unique values from 1 to 1M, and the rows are \nordered in the table by that field. So the correlation would be 1. If you \nwere to SELECT from the table WHERE the field = 500000 LIMIT 1, then the \ndatabase should be able to work out that the rows will be right in the \nmiddle of the table, not at the beginning or end. It should set the \nstartup cost of a sequential scan to the amount of time required to \nsequential scan half of the table.\n\nOf course, this does bring up a point - if the matching rows are \nconcentrated at the end of the table, the database could perform a \nsequential scan backwards, or even a scan from the middle of the table \nonwards.\n\nThis improvement of course only actually helps if the query has a LIMIT \nclause, and presumably would muck up simultaneous sequential scans.\n\nMatthew\n\n-- \nPicard: I was just paid a visit from Q.\nRiker: Q! Any idea what he's up to?\nPicard: No. He said he wanted to be \"nice\" to me.\nRiker: I'll alert the crew.\n", "msg_date": "Thu, 4 Sep 2008 17:20:12 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Guillaume Cottenceau <[email protected]> writes:\n> It seems to me that if the correlation is 0.99[1], and you're\n> looking for less than 1% of rows, the expected rows may be at the\n> beginning or at the end of the heap?\n\nRight, but if you know the value being searched for, you could then\nestimate where it is in the table by consulting the histogram.\n\nActually, an even easier hack (which would have the nice property of not\nneeding to know the exact value being searched for), would simply use\nthe existing cost estimates if the WHERE variables have low correlation\n(meaning the random-locations assumption is probably good), but apply\nsome sort of penalty factor if the correlation is high. This would\namount to assuming that the universe is perverse and high correlation\nwill always mean that the rows we want are at the wrong end of the table\nnot the right one. But any DBA will tell you that the universe is\nindeed perverse ;-)\n\nOTOH, since indexscans get a cost estimate reduction in the presence of\nhigh correlation, we're already biasing the choice in the direction of\nindexscans for high correlation. 
We may not need to do it twice.\nI don't recall whether the OP ever showed us his statistics for the\ntable in question --- did it even appear to have high correlation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Sep 2008 13:14:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner? " }, { "msg_contents": "On Thu, Sep 4, 2008 at 10:14 AM, Tom Lane <[email protected]> wrote:\n\n>\n> Actually, an even easier hack (which would have the nice property of not\n> needing to know the exact value being searched for), would simply use\n> the existing cost estimates if the WHERE variables have low correlation\n> (meaning the random-locations assumption is probably good), but apply\n> some sort of penalty factor if the correlation is high. This would\n> amount to assuming that the universe is perverse and high correlation\n> will always mean that the rows we want are at the wrong end of the table\n> not the right one. But any DBA will tell you that the universe is\n> indeed perverse ;-)\n\n\nAs a user, I prefer this solution. For one, statistics get out of date. A\nfew updates/ inserts (maybe 3% of the table) can greatly\naffect an individual value in the histogram, breaking the location\nassumptions and create a worst case result.\nBut when 3% of rows change, the correlation cannot change as drastically as\nthe histogram in the worst case. When deciding how to go about using the\nstatistics for a query plan, its best to assume that they are not perfect,\nand that there has been some change since they were last gathered. This is\nbut one thing that makes the universe perverse :)\nThe safe assumption is that if the distribution is not random, the expected\naverage number of rows to scan goes up -- this is statistically correct.\nWith perfect correlation and unknown locations the average expected value\nnumber of scanned rows would be ~ half the table. Equal likelihood of the\nfirst 15 rows as the last 15. Thus, for perfect correlation the average\npenalty would be half the table scanned, and worst case would be the whole\ntable. In this case, if the statistics are out of date somewhat, the cost\nestimate is not likely to be more than a factor of 2 off. If one were to\nuse the histogram, the cost estimate could be many orders of magnitude off.\n\nI'm fairly sure the penalty function to use for correlation values can be\nderived statistically -- or at least approximated well enough for a look up\ntable.\nIf the histogram is used, the odds of it being wrong or out of date have to\nbe taken into account since the penalty for being incorrect is potentially\nvery large -- its not a gradual increase in cost for a small error, it is a\nbig and uncertain increase. I see the query planner's main goal is to avoid\nthe worst outcomes more than finding the absolute best one. Better to\nproduce 90% optimized queries 100% of the time than make 100% perfect\nplans 90% of the time and then 10% of the time produce very bad plans.\nYour suggestion above would do the best job avoiding bad plans but could\nmiss squeezing out that last few % in rare cases that probably don't matter\nthat much.\n\n\n>\n> OTOH, since indexscans get a cost estimate reduction in the presence of\n> high correlation, we're already biasing the choice in the direction of\n> indexscans for high correlation. 
We may not need to do it twice.\n\n\nBecause the full scan is compared against more than just index scans\n(potentially), it makes sense to adjust each accordingly and independently.\nAdditionally, the index scan cost reduction function and the full table scan\ncost increase function as correlation changes may have very different,\nnonlinear 'shapes' between 0 and 1.\n\n\n>\n> I don't recall whether the OP ever showed us his statistics for the\n> table in question --- did it even appear to have high correlation?\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Thu, Sep 4, 2008 at 10:14 AM, Tom Lane <[email protected]> wrote:\n\nActually, an even easier hack (which would have the nice property of not\nneeding to know the exact value being searched for), would simply use\nthe existing cost estimates if the WHERE variables have low correlation\n(meaning the random-locations assumption is probably good), but apply\nsome sort of penalty factor if the correlation is high.  This would\namount to assuming that the universe is perverse and high correlation\nwill always mean that the rows we want are at the wrong end of the table\nnot the right one.  But any DBA will tell you that the universe is\nindeed perverse ;-)As a user, I prefer this solution.  For one, statistics get out of date.  A few updates/ inserts (maybe 3% of the table) can greatlyaffect an individual value in the histogram, breaking the location assumptions and create a worst case result.\nBut when 3% of rows change, the correlation cannot change as drastically as the histogram in the worst case.  When deciding how to go about using the statistics for a query plan, its best to assume that they are not perfect, and that there has been some change since they were last gathered.  This is but one thing that makes the universe perverse :)\nThe safe assumption is that if the distribution is not random, the expected average number of rows to scan goes up -- this is statistically correct.  With perfect correlation and unknown locations the average expected value number of scanned rows would be ~ half the table.  Equal likelihood of the first 15 rows as the last 15.  Thus, for perfect correlation the average penalty would be half the table scanned, and worst case would be the whole table.  In this case, if the statistics are out of date somewhat, the cost estimate is not likely to be more than a factor of 2 off.  If one were to use the histogram, the cost estimate could be many orders of magnitude off.  \nI'm fairly sure the penalty function to use for correlation values can be derived statistically -- or at least approximated well enough for a look up table.If the histogram is used, the odds of it being wrong or out of date have to be taken into account since the penalty for being incorrect is potentially very large -- its not a gradual increase in cost for a small error, it is a big and uncertain increase.  I see the query planner's main goal is to avoid the worst outcomes more than finding the absolute best one.  Better to produce 90% optimized queries 100% of the time than make 100%  perfect plans  90% of the time and then 10% of the time produce very bad plans.  
Your suggestion above would do the best job avoiding bad plans but could miss squeezing out that last few % in rare cases that probably don't matter that much.\n\n\nOTOH, since indexscans get a cost estimate reduction in the presence of\nhigh correlation, we're already biasing the choice in the direction of\nindexscans for high correlation.  We may not need to do it twice.Because the full scan is compared against more than just index scans (potentially), it makes sense to adjust each accordingly and independently.  Additionally, the index scan cost reduction function and the full table scan cost increase function as correlation changes may have very different, nonlinear 'shapes' between 0 and 1.\n \nI don't recall whether the OP ever showed us his statistics for the\ntable in question --- did it even appear to have high correlation?\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 4 Sep 2008 11:04:29 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n> I'm not sure offhand whether the existing correlation stats would be of use for\n> it, or whether we'd have to get ANALYZE to gather additional data.\n\nPlease forgive the tangent, but would it be practical to add support for gathering statistics on an arbitrary expression associated with a table, rather than just on materialized columns? For example:\n analyze my_tab for expression 'my_func(my_tab.col)' ;\nIt seems like any time you'd consider using a functional index, this feature would let the planner calculate decent selectivity estimates for the expression's otherwise opaque data distribution. The expression might be treated as a virtual column on the table; not sure if that helps or hurts. Should I post this question on pgsql-hackers?\n\n\n", "msg_date": "Thu, 04 Sep 2008 11:45:37 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause breaks query planner?" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Guillaume Cottenceau <[email protected]> writes:\n>> It seems to me that if the correlation is 0.99[1], and you're\n>> looking for less than 1% of rows, the expected rows may be at the\n>> beginning or at the end of the heap?\n>\n> Right, but if you know the value being searched for, you could then\n> estimate where it is in the table by consulting the histogram.\n>\n> Actually, an even easier hack (which would have the nice property of not\n> needing to know the exact value being searched for), would simply use\n> the existing cost estimates if the WHERE variables have low correlation\n> (meaning the random-locations assumption is probably good), but apply\n> some sort of penalty factor if the correlation is high. \n\nFwiw this will have all the same problems our existing uses of the correlation\nhave. That doesn't mean we shouldn't do it but I would expect it to be\nimproved along with the other uses when we find a better metric.\n\nI did happen to speak to a statistician the other day and was given some terms\nto google. 
I'll investigate and see if I get anything useful.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Thu, 04 Sep 2008 20:54:53 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause breaks query planner?" } ]
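A quick way to answer the correlation question raised at the end of this thread is to look at pg_stats directly; the planner's per-column correlation and null fraction are both exposed there. A minimal check, assuming the my_table(a, b) test table from the reproduction earlier in the thread:

-- correlation near +1/-1 means the column's values follow the physical row order;
-- near 0 means matching rows are scattered randomly through the heap
SELECT tablename, attname, null_frac, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'my_table'
  AND attname IN ('a', 'b');

-- refresh the sample and re-check estimated vs. actual row counts
ANALYZE my_table;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE a IS NULL AND b = 5 LIMIT 15;

If column b shows a correlation close to 1, the rows matching b = 5 sit together in one part of the heap, and the seqscan's "the first 15 matches come quickly" assumption is exactly the one that breaks.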
[ { "msg_contents": "Hello,\n\nwe had problems with queries via dblink needing more than three times as long as without dblink on the same server. This has been tested with v8.3.3 as well. Is this still an issue?: http://cunha17.cristianoduarte.pro.br/postgresql/snapshots.en_us.php\n\n\"But the two suffer from a severe problem: they bring the whole result from the remote query into the local database server. \"\n\nIs anybody maintaining dblink (who?)?\n\nAre there other solutions for connecting two dbs in the work (like synonyms)?\n\nThere is another \"problem\". It's difficult to compare the performance of the queries because the server is caching the queries and dblink is using the same cached querie results as well. Can you 'flush' the results or prevent the results from being cached for being reused? Is Explain Analyze really 'stable' for comparing purposes?\n\nThank you very much,\n\nPeter\n-- \nIst Ihr Browser Vista-kompatibel? Jetzt die neuesten \nBrowser-Versionen downloaden: http://www.gmx.net/de/go/browser\n", "msg_date": "Wed, 03 Sep 2008 10:58:55 +0200", "msg_from": "\"Jan-Peter Seifert\" <[email protected]>", "msg_from_op": true, "msg_subject": "dblink /synonyms?" } ]
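Since dblink(text, text) simply returns whatever the remote query produces, one practical way to keep it from dragging a whole table across the link is to push the filter into the remote query string itself. A rough sketch, assuming the contrib dblink functions are installed; the connection string, table and column names are invented for the example:

-- the WHERE clause is evaluated on the remote server, so only matching
-- rows travel over the connection; the column definition list is required
-- because dblink returns SETOF record
SELECT *
FROM dblink('host=remotehost dbname=erp user=app',
            'SELECT order_id, total FROM orders WHERE created >= current_date')
     AS remote_orders(order_id integer, total numeric);

On the caching question: there is no supported way to flush PostgreSQL's shared buffers short of a restart, so timing comparisons are usually done by running each variant several times and comparing the warm-cache numbers, with EXPLAIN ANALYZE used to compare plan shapes rather than absolute runtimes.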
[ { "msg_contents": "Hi !\n\nTo improve our software insert datas performance, we have decided to use\npartition architecture in our database. It works well for the following volume :\n45 main tables and 288 inherited daughter tables for each, that is a total of\n12960 partitions.\n\nI have some trouble now with another customer's database with the following\nvolume : 87 main tables and 288 tables for each, that is 25056 partitions.\n\nIs there some kind of limit in postgresql about the number of partitions ? Do\nyou know some tuning in the conf files to improve postgresql management of so\nmany tables ? I have already used different tablespaces, one for each main table\nand its 288 partitions.\n\nI have many datas to insert into these tables (that is ten or hundred of\nthousands every five minutes for each main group). Do you think it can be done\nwith a hardware raid 0 SATA disk (it's the case today ...) ?\n\nThank you all for help.\n\nBest regards\n\nSylvain Caillet\n", "msg_date": "Wed, 03 Sep 2008 12:00:22 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Partitions number limitation ?" }, { "msg_contents": "On Wed, Sep 3, 2008 at 4:00 AM, <[email protected]> wrote:\n> Hi !\n>\n> To improve our software insert datas performance, we have decided to use\n> partition architecture in our database. It works well for the following volume :\n> 45 main tables and 288 inherited daughter tables for each, that is a total of\n> 12960 partitions.\n>\n> I have some trouble now with another customer's database with the following\n> volume : 87 main tables and 288 tables for each, that is 25056 partitions.\n>\n> Is there some kind of limit in postgresql about the number of partitions ? Do\n> you know some tuning in the conf files to improve postgresql management of so\n> many tables ? I have already used different tablespaces, one for each main table\n> and its 288 partitions.\n\nWhat do you mean PostgreSQL management of the partitions. Triggers,\nrules, application based partitioning? Rules aren't fast enough with\nthis many partitions and if you can easily add partitioning in your\napplication and write directly to the right child table it might be\nmuch faster.\n\nThe size of your disk array depends very much on how quickly you'll be\nsaturating your CPUS, either in the app layer or CPU layer to maintain\nyour partitioning. If an app needs to insert 100,000 rows into ONE\npartition, and it knows which one, it's likely to be way faster to\nhave the app do it instead of pgsql. The app has to think once, the\ndatabase 100,000 times.\n", "msg_date": "Wed, 3 Sep 2008 10:43:46 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitions number limitation ?" }, { "msg_contents": "\n\[email protected] wrote:\n> Is there some kind of limit in postgresql about the number of partitions ? Do\n> you know some tuning in the conf files to improve postgresql management of so\n> many tables ? I have already used different tablespaces, one for each main table\n> and its 288 partitions.\n\nPostgres is not really designed for performance of partitions, so you \nhave to manage that yourself. I am working on a project with a similar \ndesign and found that the super table has its limitations. At some point \nthe db just aborts a query if there are to many partitions. 
I seem to \nremeber I have worked with up to 100K partitions, but managed them \nindividually instead of through the super table.\n\nJust a tip: if the table gets data inserted once and then mainly read \nafter that, its faster to create the index for the partition after the \ninsert.\nAnother tip: use COPY to insert data instead of INSERT, its about 3-5 \ntimes faster, it is supported by the C driver and a patched JDBC driver\n\nregards\n\ntom\n", "msg_date": "Thu, 04 Sep 2008 18:02:15 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitions number limitation ?" } ]
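A bare-bones sketch of the approach suggested above — the application routes each batch straight to the right daughter table, loads it with COPY, and builds the index afterwards. All names and the partitioning column are invented for the example:

-- parent plus one daughter table, 8.3-style inheritance partitioning
CREATE TABLE measurement (
    probe_id integer          NOT NULL,
    ts       timestamptz      NOT NULL,
    value    double precision
);

CREATE TABLE measurement_20080904 (
    CHECK (ts >= '2008-09-04' AND ts < '2008-09-05')
) INHERITS (measurement);

-- the loader already knows which partition the batch belongs to, so it
-- writes to the daughter table directly instead of going through a rule
-- or trigger on the parent
COPY measurement_20080904 (probe_id, ts, value)
    FROM '/var/tmp/batch_20080904.csv' WITH CSV;  -- server-side path; use \copy from psql for a client-side file

-- for write-once partitions, creating the index after the bulk load is cheaper
CREATE INDEX measurement_20080904_ts_idx ON measurement_20080904 (ts);

-- queries against the parent only skip non-matching partitions if
-- constraint exclusion is enabled (it is off by default in 8.3)
SET constraint_exclusion = on;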
[ { "msg_contents": "I have the honor to be configuring Postgres to back into a NetApp FAS3020\nvia fiber.\n\nDoes anyone know if the SAN protects me from breakage due to partial page\nwrites?\n\nIf anyone has an SAN specific postgres knowledge, I'd love to hear your\nwords of wisdom.\n\nFor reference:\n[postgres@localhost bonnie]$ ~neverett/bonnie++-1.03a/bonnie++\nWriting with putc()...done\nWriting intelligently...done\nRewriting...done\nReading with getc()...done\nReading intelligently...done\nstart 'em...done...done...done...\nCreate files in sequential order...done.\nStat files in sequential order...done.\nDelete files in sequential order...done.\nCreate files in random order...done.\nStat files in random order...done.\nDelete files in random order...done.\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nlocalhost.lo 32104M 81299 94 149848 30 42747 8 45465 61 55528 4\n495.5 0\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++\n+++\n\nI have the honor to be configuring Postgres to back into a NetApp FAS3020 via fiber.Does anyone know if the SAN protects me from breakage due to partial page writes?If anyone has an SAN specific postgres knowledge, I'd love to hear your words of wisdom.\nFor reference:[postgres@localhost bonnie]$ ~neverett/bonnie++-1.03a/bonnie++ Writing with putc()...doneWriting intelligently...doneRewriting...doneReading with getc()...doneReading intelligently...done\nstart 'em...done...done...done...Create files in sequential order...done.Stat files in sequential order...done.Delete files in sequential order...done.Create files in random order...done.Stat files in random order...done.\nDelete files in random order...done.Version  1.03       ------Sequential Output------ --Sequential Input- --Random-                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP\nlocalhost.lo 32104M 81299  94 149848  30 42747   8 45465  61 55528   4 495.5   0                    ------Sequential Create------ --------Random Create--------                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++", "msg_date": "Wed, 3 Sep 2008 12:03:45 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "SAN and full_page_writes" }, { "msg_contents": "I seem to have answered my own question. I'm sending the answer to the list\nin case someone else has the same question one day.\n\nAccording to the NetApp documentation, it does protect me from partial page\nwrites. 
Thus, full_page_writes = off.\n\n\nOn Wed, Sep 3, 2008 at 12:03 PM, Nikolas Everett <[email protected]> wrote:\n\n> I have the honor to be configuring Postgres to back into a NetApp FAS3020\n> via fiber.\n>\n> Does anyone know if the SAN protects me from breakage due to partial page\n> writes?\n>\n> If anyone has an SAN specific postgres knowledge, I'd love to hear your\n> words of wisdom.\n>\n> For reference:\n> [postgres@localhost bonnie]$ ~neverett/bonnie++-1.03a/bonnie++\n> Writing with putc()...done\n> Writing intelligently...done\n> Rewriting...done\n> Reading with getc()...done\n> Reading intelligently...done\n> start 'em...done...done...done...\n> Create files in sequential order...done.\n> Stat files in sequential order...done.\n> Delete files in sequential order...done.\n> Create files in random order...done.\n> Stat files in random order...done.\n> Delete files in random order...done.\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n> %CP\n> localhost.lo 32104M 81299 94 149848 30 42747 8 45465 61 55528 4\n> 495.5 0\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n> %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++\n> +++\n>\n>\n\nI seem to have answered my own question.  I'm sending the answer to the list in case someone else has the same question one day.\n\nAccording to the NetApp documentation, it does protect me from partial page writes.  Thus, full_page_writes = off.On Wed, Sep 3, 2008 at 12:03 PM, Nikolas Everett <[email protected]> wrote:\nI have the honor to be configuring Postgres to back into a NetApp FAS3020 via fiber.\nDoes anyone know if the SAN protects me from breakage due to partial page writes?If anyone has an SAN specific postgres knowledge, I'd love to hear your words of wisdom.\nFor reference:[postgres@localhost bonnie]$ ~neverett/bonnie++-1.03a/bonnie++ Writing with putc()...doneWriting intelligently...doneRewriting...doneReading with getc()...doneReading intelligently...done\n\nstart 'em...done...done...done...Create files in sequential order...done.Stat files in sequential order...done.Delete files in sequential order...done.Create files in random order...done.Stat files in random order...done.\n\nDelete files in random order...done.Version  1.03       ------Sequential Output------ --Sequential Input- --Random-                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP\n\nlocalhost.lo 32104M 81299  94 149848  30 42747   8 45465  61 55528   4 495.5   0                    ------Sequential Create------ --------Random Create--------                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n\n              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++", "msg_date": "Fri, 5 Sep 2008 10:24:00 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "Nikolas Everett wrote:\n> I seem to have answered my own question. 
I'm sending the answer to the list\n> in case someone else has the same question one day.\n> \n> According to the NetApp documentation, it does protect me from partial page\n> writes. Thus, full_page_writes = off.\n\nJust for clarification, the NetApp must guarantee that the entire 8k\ngets to disk, not just one of the 512-byte blocks that disks use\ninternally.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sat, 6 Sep 2008 15:46:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "Thanks for pointing that out Bruce.\n\nNetApp has a 6 page PDF about NetApp and databases. On page 4:\n\nAs discussed above, reads and writes are unconditionally atomic to 64 KB.\nWhile reads or writes\nmay fail for a number of reasons (out of space, permissions, etc.), the\nfailure is always atomic to\n64 KB. All possible error conditions are fully evaluated prior to committing\nany updates or\nreturning any data to the database.\n\n\n From the sound of it, I can turn of full_page_writes.\n\nThis document can be found at http://www.netapp.com/us/ by searching for\nhosting databases.\n\nThanks,\n\n--Nik\n\nOn Sat, Sep 6, 2008 at 3:46 PM, Bruce Momjian <[email protected]> wrote:\n\n> Nikolas Everett wrote:\n> > I seem to have answered my own question. I'm sending the answer to the\n> list\n> > in case someone else has the same question one day.\n> >\n> > According to the NetApp documentation, it does protect me from partial\n> page\n> > writes. Thus, full_page_writes = off.\n>\n> Just for clarification, the NetApp must guarantee that the entire 8k\n> gets to disk, not just one of the 512-byte blocks that disks use\n> internally.\n>\n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + If your life is a hard drive, Christ can be your backup. +\n>\n\nThanks for pointing that out Bruce.\nNetApp has a 6 page PDF about NetApp and databases.  On page 4:As discussed above, reads and writes are unconditionally atomic to 64 KB. While reads or writesmay fail for a number of reasons (out of space, permissions, etc.), the failure is always atomic to\n64 KB. All possible error conditions are fully evaluated prior to committing any updates orreturning any data to the database.From the sound of it, I can turn of full_page_writes.This document can be found at http://www.netapp.com/us/ by searching for hosting databases.\nThanks,--NikOn Sat, Sep 6, 2008 at 3:46 PM, Bruce Momjian <[email protected]> wrote:\nNikolas Everett wrote:\n> I seem to have answered my own question.  I'm sending the answer to the list\n> in case someone else has the same question one day.\n>\n> According to the NetApp documentation, it does protect me from partial page\n> writes.  Thus, full_page_writes = off.\n\nJust for clarification, the NetApp must guarantee that the entire 8k\ngets to disk, not just one of the 512-byte blocks that disks use\ninternally.\n\n--\n  Bruce Momjian  <[email protected]>        http://momjian.us\n  EnterpriseDB                             http://enterprisedb.com\n\n  + If your life is a hard drive, Christ can be your backup. 
+", "msg_date": "Mon, 8 Sep 2008 10:02:27 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "\n\"Nikolas Everett\" <[email protected]> writes:\n\n> Thanks for pointing that out Bruce.\n>\n> NetApp has a 6 page PDF about NetApp and databases. On page 4:\n\nSkimming through this I think all 6 pages are critical. The sentence you quote\nout of context pertains specifically to the NAS internal organization.\n\nThe previous pages discuss limitations of OSes, filesystems and especially NFS\nclients which you may have to be concerned with as well.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Mon, 08 Sep 2008 15:16:51 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "Sorry about that. I was having tunnel vision and pulled out the part that\napplied to me. I also figured that the OS and file system information was\nsuperfluous but on second look it may not be. This bit:\n\nTo satisfy the Durability requirement, all write operations must write\nthrough any OS\ncache to stable storage before they are reported as complete or otherwise\nmade visible.\nWrite-back caching behavior is prohibited, and data from failed writes must\nnot appear in\nan OS cache.\nTo satisfy the Serialization requirements, any OS cache must be fully\ncoherent with the\nunderlying storage. For instance, each write must invalidate any OS-cached\ncopies of\nthe data to be overwritten, on any and all hosts, prior to commitment.\nMultiple hosts may\naccess the same storage concurrently under shared-disk clustering, such as\nthat\nimplemented by Oracle RAC and/or ASM.\n\nSounds kind of scary. I think postgres forces the underlying OS and file\nsystem to do that stuff (sans the mutli-host magic) using fsync. Is that\nright?\n\nIt does look like there are some gotchas with NFS.\n\nOn Mon, Sep 8, 2008 at 10:16 AM, Gregory Stark <[email protected]>wrote:\n\n>\n> \"Nikolas Everett\" <[email protected]> writes:\n>\n> > Thanks for pointing that out Bruce.\n> >\n> > NetApp has a 6 page PDF about NetApp and databases. On page 4:\n>\n> Skimming through this I think all 6 pages are critical. The sentence you\n> quote\n> out of context pertains specifically to the NAS internal organization.\n>\n> The previous pages discuss limitations of OSes, filesystems and especially\n> NFS\n> clients which you may have to be concerned with as well.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's Slony Replication support!\n>\n\nSorry about that.  I was having tunnel vision and pulled out the part that applied to me.  I also figured that the OS and file system information was superfluous but on second look it may not be.  This bit:\nTo satisfy the Durability requirement, all write operations must write through any OScache to stable storage before they are reported as complete or otherwise made visible.Write-back caching behavior is prohibited, and data from failed writes must not appear in\nan OS cache.To satisfy the Serialization requirements, any OS cache must be fully coherent with theunderlying storage. For instance, each write must invalidate any OS-cached copies ofthe data to be overwritten, on any and all hosts, prior to commitment. 
Multiple hosts may\naccess the same storage concurrently under shared-disk clustering, such as thatimplemented by Oracle RAC and/or ASM.Sounds kind of scary.  I think postgres forces the underlying OS and file system to do that stuff (sans the mutli-host magic) using fsync.  Is that right?\nIt does look like there are some gotchas with NFS.On Mon, Sep 8, 2008 at 10:16 AM, Gregory Stark <[email protected]> wrote:\n\n\"Nikolas Everett\" <[email protected]> writes:\n\n> Thanks for pointing that out Bruce.\n>\n> NetApp has a 6 page PDF about NetApp and databases.  On page 4:\n\nSkimming through this I think all 6 pages are critical. The sentence you quote\nout of context pertains specifically to the NAS internal organization.\n\nThe previous pages discuss limitations of OSes, filesystems and especially NFS\nclients which you may have to be concerned with as well.\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's Slony Replication support!", "msg_date": "Mon, 8 Sep 2008 10:48:21 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "\"Nikolas Everett\" <[email protected]> writes:\n\n> Sounds kind of scary. I think postgres forces the underlying OS and file\n> system to do that stuff (sans the mutli-host magic) using fsync. Is that\n> right?\n\nYes, so you have to make sure that your filesystem really does support fsync\nproperly. I think most NFS implementations do that.\n\nI was more concerned with:\n\n Network Appliance supports a number of NFS client implementations for use\n with databases. These clients provide write atomicity to at least 4 KB,\n and support synchronous writes when requested by the database. Typically,\n atomicity is guaranteed only to one virtual memory page, which may be as\n small as 4 KB. However, if the NFS client supports a direct I/O mode that\n completely bypasses the cache, then atomicity is guaranteed to the size\n specified by the “wsize” mount option, typically 32 KB.\n\n The failure of some NFS clients to assure write atomicity to a full\n database block means that the soft atomicity requirement is not always\n met. Some failures of the host system may result in a fractured database\n block on disk. In practice such failures are rare. 
When they happen no\n data is lost, but media recovery of the affected database block may be\n required\n\nThat \"media recovery\" it's referring to sounds like precisely our WAL full\npage writes...\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Mon, 08 Sep 2008 15:59:03 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "On Mon, Sep 8, 2008 at 10:59 AM, Gregory Stark <[email protected]>wrote:\n\n>\n> That \"media recovery\" it's referring to sounds like precisely our WAL full\n> page writes...\n>\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's PostGIS support!\n>\n\nThat sounds right.\n\nSo the take home from this is that NetApp does its best to protect you from\npartial page writes but comes up short on untweaked NFS (see doc to tweak.)\nOtherwise you are protected so long as your OS and file system implement\nfsync properly.\n\n--Nik\n\nOn Mon, Sep 8, 2008 at 10:59 AM, Gregory Stark <[email protected]> wrote:\n\nThat \"media recovery\" it's referring to sounds like precisely our WAL full\npage writes...\n\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's PostGIS support!\nThat sounds right.So the take home from this is that NetApp does its best to protect you from partial page writes but comes up short on untweaked NFS (see doc to tweak.)  Otherwise you are protected so long as your OS and file system implement fsync properly.\n--Nik", "msg_date": "Mon, 8 Sep 2008 11:17:41 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN and full_page_writes" }, { "msg_contents": "Nikolas Everett wrote:\n> Thanks for pointing that out Bruce.\n> \n> NetApp has a 6 page PDF about NetApp and databases. On page 4:\n> \n> As discussed above, reads and writes are unconditionally atomic to 64 KB.\n> While reads or writes\n> may fail for a number of reasons (out of space, permissions, etc.), the\n> failure is always atomic to\n> 64 KB. All possible error conditions are fully evaluated prior to committing\n> any updates or\n> returning any data to the database.\n\nWell, that is certainly good news, and it is nice the specified the atomic\nsize.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 8 Sep 2008 18:35:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN and full_page_writes" } ]
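For completeness, the switch discussed here is only a configuration change, so it is easy to try and to revert. A sketch, with the assumption the whole thread hinges on spelled out in the comments:

-- check the current values
SHOW full_page_writes;
SHOW fsync;

-- postgresql.conf: turning this off is only safe if the storage stack
-- really does guarantee that every 8 kB page write is atomic, which is
-- what the NetApp documentation quoted above claims for its block layer
--   full_page_writes = off
--   fsync = on   -- keep fsync on; it is what forces the write-through the filer promises

-- both settings can be picked up without a restart
SELECT pg_reload_conf();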
[ { "msg_contents": "Hello,\n \n I am running a select on a large table with two where\n conditions.\n Explain analyze shows that the estimated number of rows returned\n (190760) is much more than the actual rows returned (58221),\n which is probably the underlying cause for the poor performance\n I am seeing.\n \n Can someone please tell me how to improve the query planner\n estimate? I did try vacuum analyze. Here are some details:\n \n Explain plan:\n unison@csb-test=> explain analyze select * from paliasorigin a\n where\n a.origin_id=20 and a.tax_id=9606;\n \n \n QUERY PLAN\n --------------------------------------------------------------------------\n Bitmap Heap Scan on paliasorigin a (cost=4901.38..431029.54\n rows=190760 width=118) (actual time=12.447..112.902 rows=58221\n loops=1)\n Recheck Cond: ((origin_id = 20) AND (tax_id = 9606))\n -> Bitmap Index Scan on paliasorigin_search3_idx\n (cost=0.00..4853.69 rows=190760 width=0) (actual\n time=11.407..11.407\n rows=58221 loops=1)\n Index Cond: ((origin_id = 20) AND (tax_id = 9606))\n \n Schema:\n unison@csb-test=> \\d+ paliasorigin\n Column | Type |\n Modifiers | \n -----------+--------------------------+------------\n palias_id | integer | not null\n origin_id | integer | not null \n alias | text | not null\n descr | text |\n tax_id | integer |\n added | timestamp with time zone | not null default\n timenow() \n Indexes:\n \"palias_pkey\" PRIMARY KEY, btree (palias_id)\n \"paliasorigin_alias_unique_in_origin_idx\" UNIQUE, btree\n (origin_id,\n alias)\n \"paliasorigin_alias_casefold_idx\" btree (upper(alias))\n CLUSTER\n \"paliasorigin_alias_idx\" btree (alias)\n \"paliasorigin_o_idx\" btree (origin_id)\n \"paliasorigin_search1_idx\" btree (palias_id, origin_id)\n \"paliasorigin_search3_idx\" btree (origin_id, tax_id,\n palias_id)\n \"paliasorigin_tax_id_idx\" btree (tax_id)\n Foreign-key constraints:\n \"origin_id_exists\" FOREIGN KEY (origin_id) REFERENCES\n origin(origin_id) ON UPDATE CASCADE ON DELETE CASCADE\n Has OIDs: no\n \n \n Number of rows:\n unison@csb-test=> select count(*) from paliasorigin;\n count\n ----------\n 37909009\n (1 row)\n \n Pg version:\n unison@csb-test=> select version();\n version\n --------------------------------------------------------------------------------------------\n PostgreSQL 8.3.3 on x86_64-unknown-linux-gnu, compiled by GCC\n gcc (GCC)\n 4.1.0 (SUSE Linux)\n (1 row)\n \n \n Info from analyze verbose:\n unison@csb-test=> analyze verbose paliasorigin;\n INFO: analyzing \"unison.paliasorigin\"\n INFO: \"paliasorigin\": scanned 300000 of 692947 pages,\n containing\n 16409041 live rows and 0 dead rows; 300000 rows in sample,\n 37901986\n estimated total rows\n ANALYZE\n Time: 21999.506 ms\n \n \n Thank you,\n \n -Kiran Mukhyala\n \n \n\n", "msg_date": "Thu, 04 Sep 2008 11:21:52 -0700", "msg_from": "Kiran Mukhyala <[email protected]>", "msg_from_op": true, "msg_subject": "inaccurate stats on large tables" }, { "msg_contents": "On Thu, Sep 4, 2008 at 2:21 PM, Kiran Mukhyala <[email protected]> wrote:\n\n> Can someone please tell me how to improve the query planner\n> estimate? I did try vacuum analyze. Here are some details:\n\nHave you tried increasing the statistics target for that table (or in general)?\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Thu, 4 Sep 2008 23:25:43 -0400", "msg_from": "\"David Wilson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inaccurate stats on large tables" } ]
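For reference, the per-column form of that suggestion looks like the following (100 is just an illustrative target; the default in 8.3 is 10, and default_statistics_target in postgresql.conf raises it globally):

-- take a larger sample for the two filtered columns, then refresh the stats
ALTER TABLE paliasorigin ALTER COLUMN origin_id SET STATISTICS 100;
ALTER TABLE paliasorigin ALTER COLUMN tax_id   SET STATISTICS 100;
ANALYZE paliasorigin;

-- compare the planner's row estimate against the actual count again
EXPLAIN ANALYZE SELECT * FROM paliasorigin WHERE origin_id = 20 AND tax_id = 9606;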
[ { "msg_contents": "Hi,\nI have a virtual server with 256 MB of RAM. I am using it as a \nwebserver, mailserver and for postgres. So there is something like 150MB \nleft for postgres.\n\nHere are my configs (I haven't benchmarked...)\nmax_connections = 12 (I think, I will not have more parallel \nconnections, because I only have 10 PHP worker threads)\nshared_buffers = 24MB\nwork_mem = 1MB\nmaintenance_work_mem = 16MB\n\n(effective_cache_size = 80MB)\n\nNormally, the file-cache is part of the free ram. But on my virtual \nserver, it looks like if there is one big file cache for the whole \nhardware node and I do not have my own reserved cached, so it is not \neasy to find a good value for effective_cache_size.\n\nI've also benchmarked the file-cache using dd (100MB file)\n\n1. Read from HDD:\n104857600 bytes (105 MB) copied, 8.38522 seconds, 12.5 MB/s\n2. Read from Cache:\n104857600 bytes (105 MB) copied, 3.48694 seconds, 30.1 MB/s\n\nThat is really really slow (10 times slower than on my other machine).\n\nWhat would you do now? Increasing shared_buffers to 100MB and setting \neffective_cache_size to 0MB? Or increasing effective_cache_size, too?\n\nThanks for help.\n\nRegards,\n-Ulrich\n", "msg_date": "Thu, 04 Sep 2008 21:24:18 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "More shared_buffers instead of effective_cache_size?" }, { "msg_contents": "On Thu, Sep 4, 2008 at 1:24 PM, Ulrich <[email protected]> wrote:\n> Hi,\n> I have a virtual server with 256 MB of RAM. I am using it as a webserver,\n> mailserver and for postgres. So there is something like 150MB left for\n> postgres.\n>\n> Here are my configs (I haven't benchmarked...)\n> max_connections = 12 (I think, I will not have more parallel connections,\n> because I only have 10 PHP worker threads)\n> shared_buffers = 24MB\n> work_mem = 1MB\n> maintenance_work_mem = 16MB\n>\n> (effective_cache_size = 80MB)\n>\n> Normally, the file-cache is part of the free ram. But on my virtual server,\n> it looks like if there is one big file cache for the whole hardware node and\n> I do not have my own reserved cached, so it is not easy to find a good value\n> for effective_cache_size.\n>\n> I've also benchmarked the file-cache using dd (100MB file)\n>\n> 1. Read from HDD:\n> 104857600 bytes (105 MB) copied, 8.38522 seconds, 12.5 MB/s\n> 2. Read from Cache:\n> 104857600 bytes (105 MB) copied, 3.48694 seconds, 30.1 MB/s\n>\n> That is really really slow (10 times slower than on my other machine).\n>\n> What would you do now? Increasing shared_buffers to 100MB and setting\n> effective_cache_size to 0MB? Or increasing effective_cache_size, too?\n\nStop using a virtual server? I wouldn't set shared_buffers that high\njust because things like vacuum and sorts need memory too.\n\n>\n> Thanks for help.\n>\n> Regards,\n> -Ulrich\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 4 Sep 2008 13:30:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More shared_buffers instead of effective_cache_size?" }, { "msg_contents": "Scott Marlowe wrote:\n> Stop using a virtual server?\nThat is not possible...\n> I wouldn't set shared_buffers that high\n> just because things like vacuum and sorts need memory too\nOkay, I understand that vacuum uses memory, but I thought sorts are done \nin work_mem? 
I am only sorting the result of one query which will never \nreturn more than 500 rows.\n\n-Ulrich\n", "msg_date": "Thu, 04 Sep 2008 21:39:08 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More shared_buffers instead of effective_cache_size?" }, { "msg_contents": "On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> Stop using a virtual server?\n>\n> That is not possible...\n\nSorry shoulda had a smiley face at the end of that. :) <-- there\n\n>> I wouldn't set shared_buffers that high\n>> just because things like vacuum and sorts need memory too\n>\n> Okay, I understand that vacuum uses memory, but I thought sorts are done in\n> work_mem? I am only sorting the result of one query which will never return\n> more than 500 rows.\n\nYou can probably play with larger shared memory, but I'm betting that\nthe fact that you're running under a VM is gonna weigh eveything down\na great deal, to the point that you're tuning is going to have minimal\neffect.\n", "msg_date": "Thu, 4 Sep 2008 13:49:05 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More shared_buffers instead of effective_cache_size?" }, { "msg_contents": "Scott Marlowe wrote:\n> On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[email protected]> wrote:\n> \n>>> I wouldn't set shared_buffers that high\n>>> just because things like vacuum and sorts need memory too\n>>> \n>> Okay, I understand that vacuum uses memory, but I thought sorts are done in\n>> work_mem? I am only sorting the result of one query which will never return\n>> more than 500 rows.\n>> \n>\n> You can probably play with larger shared memory, but I'm betting that\n> the fact that you're running under a VM is gonna weigh eveything down\n> a great deal, to the point that you're tuning is going to have minimal\n> effect.\n> \nHmm... Why do you think so? Is there a reason for it or do other people \nhave problems with virtual servers and databases?\nI have reserved cpu power and reserved ram (okay, not much, but it is \nreserved ;-) ), the only thing I dont have is reserved file-cache.\n\n-Ulrich\n", "msg_date": "Thu, 04 Sep 2008 22:01:33 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More shared_buffers instead of effective_cache_size?" }, { "msg_contents": "On Thu, Sep 4, 2008 at 2:01 PM, Ulrich <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[email protected]> wrote:\n>>\n>>>>\n>>>> I wouldn't set shared_buffers that high\n>>>> just because things like vacuum and sorts need memory too\n>>>>\n>>>\n>>> Okay, I understand that vacuum uses memory, but I thought sorts are done\n>>> in\n>>> work_mem? I am only sorting the result of one query which will never\n>>> return\n>>> more than 500 rows.\n>>>\n>>\n>> You can probably play with larger shared memory, but I'm betting that\n>> the fact that you're running under a VM is gonna weigh eveything down\n>> a great deal, to the point that you're tuning is going to have minimal\n>> effect.\n>>\n>\n> Hmm... Why do you think so? Is there a reason for it or do other people have\n> problems with virtual servers and databases?\n> I have reserved cpu power and reserved ram (okay, not much, but it is\n> reserved ;-) ), the only thing I dont have is reserved file-cache.\n\nWell, Databases tend to be IO bound, and VMs don't tend to focus on IO\nperformance as much as CPU/Memory performance. 
Also, things like\nshared memory likely don't get as much attention in a VM either. Just\nguessing, I haven't tested a lot of VMs.\n", "msg_date": "Thu, 4 Sep 2008 14:48:28 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More shared_buffers instead of effective_cache_size?" } ]
[ { "msg_contents": "Hi everyone,\n\nsome erp software requires a change of my pgsql cluster from\n\tlocale C\t\tencoding UTF-8\nto\n\tlocale de_DE.UTF-8\tencoding UTF-8\n\nMost of my databases have only ASCII text data (8 bit UTF8 code range) \nin the text columns.\nDoes the above change influence index performance on such columns?\n\nDoes postmaster keep track on any multibyte characters being inserted \nin such columns, so that the planner can adapt?\n\nWhat other performance impacts can be expected?\n\nAxel\n---\n\n", "msg_date": "Sun, 7 Sep 2008 12:50:30 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": true, "msg_subject": "performance impact of non-C locale" }, { "msg_contents": "Axel Rau wrote:\n> some erp software requires a change of my pgsql cluster from\n> locale C encoding UTF-8\n> to\n> locale de_DE.UTF-8 encoding UTF-8\n> \n> Most of my databases have only ASCII text data (8 bit UTF8 code range) \n> in the text columns.\n> Does the above change influence index performance on such columns?\n\nYes.\n\n> Does postmaster keep track on any multibyte characters being inserted in \n> such columns, so that the planner can adapt?\n\nNo.\n\n> What other performance impacts can be expected?\n\nThe performance impact is mainly with string comparisons and sorts. I \nsuggest you run your own tests to find out what is acceptable in your \nscenario.\n", "msg_date": "Thu, 11 Sep 2008 12:29:43 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance impact of non-C locale" }, { "msg_contents": "\nAm 11.09.2008 um 11:29 schrieb Peter Eisentraut:\n\n>>\n>> What other performance impacts can be expected?\n>\n> The performance impact is mainly with string comparisons and sorts. \n> I suggest you run your own tests to find out what is acceptable in \n> your scenario.\nIm not yet convinced to switch to non-C locale. Is the following \nintended behavior:\nWith lc_ctype C: select lower('���'); => ���\nWith lc_ctype en_US.utf8 select lower('���'); => ���\n? (Both have server encoding UTF8)\n\nAxel\n---\n\n", "msg_date": "Thu, 11 Sep 2008 11:57:04 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance impact of non-C locale" }, { "msg_contents": "Axel Rau wrote:\n> Im not yet convinced to switch to non-C locale. Is the following \n> intended behavior:\n> With lc_ctype C: select lower('���'); => ���\n> With lc_ctype en_US.utf8 select lower('���'); => ���\n> ? (Both have server encoding UTF8)\n\nI would expect exactly that.\n", "msg_date": "Thu, 11 Sep 2008 13:33:41 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance impact of non-C locale" } ]
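One mitigation worth knowing about when leaving locale C: under a non-C lc_collate, a plain btree index can no longer be used for LIKE 'abc%' prefix searches, but an additional index declared with text_pattern_ops restores that. A sketch with invented table and column names:

-- supports LIKE / regex prefix searches under de_DE.UTF-8
CREATE INDEX article_name_pattern_idx ON article (name text_pattern_ops);

-- the ordinary index is still what plain equality and ORDER BY name use
CREATE INDEX article_name_idx ON article (name);

Also note that in 8.3 lc_collate and lc_ctype are fixed for the whole cluster at initdb time, so the change itself means dump, re-initdb with --locale=de_DE.UTF-8, and reload rather than an ALTER.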
[ { "msg_contents": "Hi Kiran,\n\nYou gave great info on your problem.\n\nFirst, is this the query you're actually trying to speed up, or is it a simplified version? It looks like the optimizer has already chosen the best execution plan for the given query. Since the query has no joins, we only have to consider access paths. You're fetching 58221/37909009 = 0.15% of the rows, so a sequential scan is clearly inappropriate. A basic index scan is likely to incur extra scattered I/O, so a bitmap index scan is favored.\n\nTo improve on this query's runtime, you could try any of the following:\n\n - Reorganize the data to reduce this query's scattered I/O (i.e. cluster on \"paliasorigin_search3_idx\" rather than \"paliasorigin_alias_casefold_idx\"). Bear in mind, this may adversely affect other queries.\n\n - Increase the cache hit frequency by ensuring the underlying filesystem cache has plenty of RAM (usually so under Linux) and checking that other concurrent queries aren't polluting the cache. Consider adding RAM if you think the working set of blocks required by most queries is larger than the combined Postgres and filesystem caches. If other processes than the db do I/O on this machine, consider them as resource consumers, too.\n\n - Restructure the table, partitioning along a column that would be useful for pruning whole partitions for your painful queries. In this case, origin_id or tax_id seems like a good bet, but again, consider other queries against this table. 38 million rows probably makes your table around 2 GB (guessing about 55 bytes/row). Depending on the size and growth rate of the table, it may be time to consider partitioning. Out of curiosity, what runtime are you typically seeing from this query? The explain-analyze ran in 113 ms, which I'm guessing is the effect of caching, not the runtime you're trying to improve.\n\n - Rebuild the indexes on this table. Under certain use conditions, btree indexes can get horribly bloated. Rebuilding the indexes returns them to their most compact and balanced form. For example: reindex index \"paliasorigin_search3_idx\"; Apart from the locking and CPU usage during the rebuild, this has no negative consequences, so I'd try this before something drastic like partitioning. First review the current size of the index for comparison: select pg_size_pretty(pg_relation_size('paliasorigin_search3_idx'));\n\nSince you asked specifically about improving the row-count estimate, like the previous responder said, you should consider increasing the statistics target. This will help if individual columns are being underestimated, but not if the overestimate is due to joint variation. In other words, the optimizer has no way to tell if there is there a logical relationship between columns A and B such that certain values in B only occur with certain values of A. 
Just judging from the names, it sounds like origin_id and tax_id might have a parent-child relationship, so I thought it was worth mentioning.\n\nDo the columns individually have good estimates?\nexplain analyze select * from paliasorigin where origin_id=20;\nexplain analyze select * from paliasorigin where tax_id=9606;\n\nIf not, increase the statistics on that column, reanalyze the table, and recheck the selectivity estimate:\nalter table paliasorigin alter column origin_id set statistics 20;\nanalyze paliasorigin;\nexplain analyze select * from paliasorigin where origin_id=20;\n\nGood luck!\nMatt\n\n\n", "msg_date": "Mon, 08 Sep 2008 09:16:16 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: inaccurate stats on large tables" }, { "msg_contents": "On Mon, 2008-09-08 at 09:16 -0700, Matt Smiley wrote:\n> Hi Kiran,\n> \n> You gave great info on your problem.\n> \n> First, is this the query you're actually trying to speed up, or is it a simplified version? It looks like the optimizer has already chosen the best execution plan for the given query. Since the query has no joins, we only have to consider access paths. You're fetching 58221/37909009 = 0.15% of the rows, so a sequential scan is clearly inappropriate. A basic index scan is likely to incur extra scattered I/O, so a bitmap index scan is favored.\n\nThanks for your analysis and sorry for the long silence. \n\nIts a simplified version. I was tackling this part of the original query\nplan since I saw that I got inaccurate stats on one of the tables. \n\n> \n> To improve on this query's runtime, you could try any of the following:\n> \n> - Reorganize the data to reduce this query's scattered I/O (i.e. cluster on \"paliasorigin_search3_idx\" rather than \"paliasorigin_alias_casefold_idx\"). Bear in mind, this may adversely affect other queries.\n\nI applied this on a different table which solved my original problem!\nThe query was hitting statement_timeouts but now runs in reasonable\ntime. I re clustered one of the tables in my actual query on a more\nappropriate index.\n\n> \n> - Increase the cache hit frequency by ensuring the underlying filesystem cache has plenty of RAM (usually so under Linux) and checking that other concurrent queries aren't polluting the cache. Consider adding RAM if you think the working set of blocks required by most queries is larger than the combined Postgres and filesystem caches. If other processes than the db do I/O on this machine, consider them as resource consumers, too.\n> \n> - Restructure the table, partitioning along a column that would be useful for pruning whole partitions for your painful queries. In this case, origin_id or tax_id seems like a good bet, but again, consider other queries against this table. 38 million rows probably makes your table around 2 GB (guessing about 55 bytes/row). Depending on the size and growth rate of the table, it may be time to consider partitioning. Out of curiosity, what runtime are you typically seeing from this query? The explain-analyze ran in 113 ms, which I'm guessing is the effect of caching, not the runtime you're trying to improve.\n\nThis seems inevitable eventually, if my tables keep growing in size.\n\n> - Rebuild the indexes on this table. Under certain use conditions, btree indexes can get horribly bloated. Rebuilding the indexes returns them to their most compact and balanced form. 
For example: reindex index \"paliasorigin_search3_idx\"; Apart from the locking and CPU usage during the rebuild, this has no negative consequences, so I'd try this before something drastic like partitioning. First review the current size of the index for comparison: select pg_size_pretty(pg_relation_size('paliasorigin_search3_idx'));\n\nThis didn't improve the stats.\n> \n> Since you asked specifically about improving the row-count estimate, like the previous responder said, you should consider increasing the statistics target. This will help if individual columns are being underestimated, but not if the overestimate is due to joint variation. In other words, the optimizer has no way to tell if there is there a logical relationship between columns A and B such that certain values in B only occur with certain values of A. Just judging from the names, it sounds like origin_id and tax_id might have a parent-child relationship, so I thought it was worth mentioning.\n> \n> Do the columns individually have good estimates?\nYes.\n> explain analyze select * from paliasorigin where origin_id=20;\n> explain analyze select * from paliasorigin where tax_id=9606;\n> \n> If not, increase the statistics on that column, reanalyze the table, and recheck the selectivity estimate:\n> alter table paliasorigin alter column origin_id set statistics 20;\n> analyze paliasorigin;\n> explain analyze select * from paliasorigin where origin_id=20;\n\nmy default_statistics_target is set to 1000 but I did set some column\nspecific statistics. But didn't help in this case.\n\nThanks a lot.\n\n-Kiran\n\n", "msg_date": "Fri, 10 Oct 2008 10:48:09 -0700", "msg_from": "Kiran Mukhyala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inaccurate stats on large tables" } ]
[ { "msg_contents": "Preliminaries:\n\npg 8.3.3 on ubuntu 7.10\n4x1.66GHz CPUs\n10G ram\ninteresting postgresql.conf settings:\n\nmax_connections = 300\nshared_buffers = 3000MB\nwork_mem = 32MB\nrandom_page_cost = 1.5\neffective_cache_size = 5000MB\ndefault_statistics_target = 30\nlc_messages = 'en_US.UTF-8'\n\nOK, We're running mnogosearch, and we have many many different \"sites\"\non it. So, there's a dict table with all the words in it, and an url\ntable with all the sites in it. The typical query to pull them in\nlooks like this:\n\nSELECT dict.url_id,dict.intag\nFROM dict, url\nWHERE dict.word='lesson'\n AND url.rec_id=dict.url_id\n AND url.site_id IN ('-259409521');\n\nrandom_page_cost can be anything from 1.5 to 4.0 and I'll get this\nplan most times:\n\n Hash Join (cost=2547.45..11396.26 rows=37 width=8) (actual\ntime=3830.936..16304.325 rows=22 loops=1)\n Hash Cond: (dict.url_id = url.rec_id)\n -> Bitmap Heap Scan on dict (cost=121.63..8939.55 rows=6103\nwidth=8) (actual time=1590.001..16288.708 rows=16172 loops=1)\n Recheck Cond: (word = 'lesson'::text)\n -> Bitmap Index Scan on dict_word_url_id (cost=0.00..120.11\nrows=6103 width=0) (actual time=1587.322..1587.322 rows=16172 loops=1)\n Index Cond: (word = 'lesson'::text)\n -> Hash (cost=2402.07..2402.07 rows=1900 width=4) (actual\ntime=5.583..5.583 rows=1996 loops=1)\n -> Bitmap Heap Scan on url (cost=84.09..2402.07 rows=1900\nwidth=4) (actual time=1.048..4.444 rows=1996 loops=1)\n Recheck Cond: (site_id = (-259409521))\n -> Bitmap Index Scan on url_siteid (cost=0.00..83.61\nrows=1900 width=0) (actual time=0.642..0.642 rows=1996 loops=1)\n Index Cond: (site_id = (-259409521))\n Total runtime: 16304.488 ms\n\nSecond time through, the run time is much faster:\n\n Hash Join (cost=2547.45..11396.26 rows=37 width=8) (actual\ntime=15.690..41.572 rows=22 loops=1)\n Hash Cond: (dict.url_id = url.rec_id)\n -> Bitmap Heap Scan on dict (cost=121.63..8939.55 rows=6103\nwidth=8) (actual time=7.090..29.589 rows=16172 loops=1)\n Recheck Cond: (word = 'lesson'::text)\n -> Bitmap Index Scan on dict_word_url_id (cost=0.00..120.11\nrows=6103 width=0) (actual time=4.677..4.677 rows=16172 loops=1)\n Index Cond: (word = 'lesson'::text)\n -> Hash (cost=2402.07..2402.07 rows=1900 width=4) (actual\ntime=5.535..5.535 rows=1996 loops=1)\n -> Bitmap Heap Scan on url (cost=84.09..2402.07 rows=1900\nwidth=4) (actual time=1.037..4.439 rows=1996 loops=1)\n Recheck Cond: (site_id = (-259409521))\n -> Bitmap Index Scan on url_siteid (cost=0.00..83.61\nrows=1900 width=0) (actual time=0.635..0.635 rows=1996 loops=1)\n Index Cond: (site_id = (-259409521))\n Total runtime: 41.648 ms\n\nNote that big change in the Bitmap Heap Scan near the top. I assume\nthat the first time it's hitting the disk or something?\n\nIf I do this:\n\nset enable_mergejoin=off;\nset enable_hashjon=off;\n\nthis forces the planner to switch away from a bitmap heap scan on\ndict. 
I now get a plan like this:\n\n Nested Loop (cost=84.09..23997.47 rows=37 width=8) (actual\ntime=241.436..530.531 rows=28 loops=1)\n -> Bitmap Heap Scan on url (cost=84.09..2402.07 rows=1900\nwidth=4) (actual time=0.980..4.557 rows=1996 loops=1)\n Recheck Cond: (site_id = (-259409521))\n -> Bitmap Index Scan on url_siteid (cost=0.00..83.61\nrows=1900 width=0) (actual time=0.577..0.577 rows=1996 loops=1)\n Index Cond: (site_id = (-259409521))\n -> Index Scan using dict_word_url_id on dict (cost=0.00..11.35\nrows=1 width=8) (actual time=0.247..0.263 rows=0 loops=1996)\n Index Cond: ((dict.word = 'assembly'::text) AND (dict.url_id\n= url.rec_id))\n Total runtime: 530.607 ms\n\nNote that there's a different key word, since the old one would be\ncached. Second run looks similar, but down to 50 milliseconds since\nthe data are now cached.\n\nSo, anyone got any hints? I'd prefer not to have to set those two\nmethods off at the db level, and might move them into the search\nengine at least, but I'd much rather change a setting on the server.\nFor now, the load on the server during the day is down from 8-15 to\n0.30, so I can live with the two methods turned off for now, but I\nknow that I'll hit something where nested_loop is too slow eventually.\n", "msg_date": "Mon, 8 Sep 2008 10:36:00 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": true, "msg_subject": "bitmap heap scan versus simple index and nested loop" } ]
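If turning those planner methods off globally feels too heavy-handed, the same settings (enable_mergejoin and enable_hashjoin) can be scoped to a single transaction or attached to the role the search engine connects as. A sketch, with a hypothetical role name:

BEGIN;
SET LOCAL enable_mergejoin = off;
SET LOCAL enable_hashjoin = off;
SELECT dict.url_id, dict.intag
FROM dict, url
WHERE dict.word = 'lesson'
  AND url.rec_id = dict.url_id
  AND url.site_id IN ('-259409521');
COMMIT;
-- or pin the settings to the application's role only:
ALTER ROLE mnogosearch_app SET enable_mergejoin = off;
ALTER ROLE mnogosearch_app SET enable_hashjoin = off;

SET LOCAL reverts at commit, so other queries in the same session keep the server defaults.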
[ { "msg_contents": "If like me you've been reading all the flash SSD drive reviews that come \nout, you might have also noticed that the performance on write-heavy \nworkloads hasn't been too far ahead of traditional drives. It's typically \nbeen hit or miss as to whether the SDD would really be all that much \nfaster on a real OLTP-ish database workload, compared to a good 10k or 15k \ndrive (WD's Velociraptor is the usual comparison drive).\n\nThat's over as of today: http://techreport.com/articles.x/15433/9\n\nYou can see what I was talking about above in their Database graph: \nunder heavy load, the Velociraptor pulls ahead of even a good performing \nflash product (Samsung's FlashSSD), and the latency curve on the next page \nshows something similar. But the Intel drive is obviously a whole \ndifferent class of SSD implementation altogether. It's not clear yet if \nthat's because of their NCQ support, or maybe the firmware just buffers \nwrites better (they should have tested with NCQ disabled to nail that \ndown).\n\nWith entry-level 64GB Flash drives now available for just under $200 ( \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16820227344 , price is \nso low because they're closing that model out for a better V2 product) \nthis space is really getting interesting.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 8 Sep 2008 19:12:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Intel's X25-M SSD" }, { "msg_contents": "On Mon, Sep 8, 2008 at 7:12 PM, Greg Smith <[email protected]> wrote:\n> If like me you've been reading all the flash SSD drive reviews that come\n> out, you might have also noticed that the performance on write-heavy\n> workloads hasn't been too far ahead of traditional drives. It's typically\n> been hit or miss as to whether the SDD would really be all that much faster\n> on a real OLTP-ish database workload, compared to a good 10k or 15k drive\n> (WD's Velociraptor is the usual comparison drive).\n>\n> That's over as of today: http://techreport.com/articles.x/15433/9\n>\n> You can see what I was talking about above in their Database graph: under\n> heavy load, the Velociraptor pulls ahead of even a good performing flash\n> product (Samsung's FlashSSD), and the latency curve on the next page shows\n> something similar. But the Intel drive is obviously a whole different class\n> of SSD implementation altogether. It's not clear yet if that's because of\n> their NCQ support, or maybe the firmware just buffers writes better (they\n> should have tested with NCQ disabled to nail that down).\n\nWhat's interesting about the X25 is that they managed to pull the\nnumbers they got out of a MLC flash product. They managed this with a\nDRAM buffer and the custom controller. Their drive is top dollar for\na MLC product but also provides top notch performance (again, for a\nMLC product).\n\nThe Intel SLC flash products, also due to be out in '08 are what are\nmost likely of interest to database folks. I suspect prices will\nquickly drop and you will start hearing about flash in database\nenvironents increasingly over the next year or two. We are only a\nround or two of price cuts before flash starts looking competitive vs\n15k sas products in light of all the advantages. This will spur\nprice cuts on high margin server product drives, which will also cut\nr&d budgets. 
I'll stick to the predictions I made several months\nago...flash will quickly replace drives in most environments outside\nof mass storage, with significant market share by 2010. I think the\nSSD manufacturers made a tactical error chasing the notebook market\nwhen they should have been chasing the server market...but in the end\nthe result will be the same.\n\nThis should mean really interesting things to the database world.\n\nmerlin\n", "msg_date": "Mon, 8 Sep 2008 19:59:44 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "I have been paying close attention to the recent SSD performance/price\nchanges with a keen eye to server performance on various workloads and\napplications.\n\nThe real barrier is in the controller design, and IP surrounding that. All\nflash products with any amount of wear-leveling map logical addresses to\nphysical flash addresses dynamically. An intelligent controller, with\nenough processing power and RAM (Intel's drive has 16MB of DDR SDRAM) and an\nintelligent design can translate ALL random writes into a sequential\nstream. With enough overprovisioning, the erasing and cleaning that goes on\nin the background will have very minimal impact. One thing many people will\nclaim about a SSD is that the erasing and block management will get slower\nas the drive becomes more full. This is incorrect -- from the point of view\nof any block device it is always 100% full, it is not privy to the file\nsystem notion of 'free space'. Addresses are simply overwritten, which\nmakes blocks that previously mapped to those addresses available for\nwriting. By definition, every write is an overwrite.\n\nThis paper, is very enlightening:\nhttp://research.microsoft.com/users/vijayanp/papers/ssd-usenix08.pdf\n\nGiven Intel's particular strenghts and engineering resources, its not a\nsurprise that they are among the first to make a design like this (FusioIO\nseems to have solved the random write performance issue as well ?). But as\nthe review you provided links to demonstrates, it is this IP that will\nprovide the performance gains necessary for flash performance to be hands\ndown better than all drives, for all workloads, all the time. It is the same\nIP that will provide the most longevity and reliability.\n\nAlso of note for others reading this thread, the review was for Intel's\n\"mainstream\" device, not the \"enterprise\" one. The enterprise one claims\n3300 random 4k writes/sec and over twice the write throughput. I'm sure it\nwill also cost twice as much for less capacity.\n\nOf particular interest in the short term may be using cheaper, read-biased\nflash drives for ZFS L2ARC caches for a database -- it may be like running\nwith a couple hundred extra gigs of RAM, but you can still use slow, big\ndrives for mass storage. The price is prohibitive for putting your whole db\non flash if it is not a small one, but this is not true if you're just\ntalking about cache devices or xlogs or temp space.\nhttp://blogs.sun.com/brendan/entry/test\n\n\n\nOn Mon, Sep 8, 2008 at 4:12 PM, Greg Smith <[email protected]> wrote:\n\n> If like me you've been reading all the flash SSD drive reviews that come\n> out, you might have also noticed that the performance on write-heavy\n> workloads hasn't been too far ahead of traditional drives. 
It's typically\n> been hit or miss as to whether the SDD would really be all that much faster\n> on a real OLTP-ish database workload, compared to a good 10k or 15k drive\n> (WD's Velociraptor is the usual comparison drive).\n>\n> That's over as of today: http://techreport.com/articles.x/15433/9\n>\n> You can see what I was talking about above in their Database graph: under\n> heavy load, the Velociraptor pulls ahead of even a good performing flash\n> product (Samsung's FlashSSD), and the latency curve on the next page shows\n> something similar. But the Intel drive is obviously a whole different class\n> of SSD implementation altogether. It's not clear yet if that's because of\n> their NCQ support, or maybe the firmware just buffers writes better (they\n> should have tested with NCQ disabled to nail that down).\n>\n> With entry-level 64GB Flash drives now available for just under $200 (\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227344 , price is\n> so low because they're closing that model out for a better V2 product) this\n> space is really getting interesting.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI have been paying close attention to the recent SSD performance/price changes with a keen eye to server performance on various workloads and applications.The real barrier is in the controller design, and IP surrounding that.  All flash products with any amount of wear-leveling map logical addresses to physical flash addresses dynamically.  An intelligent controller, with enough processing power and RAM (Intel's drive has 16MB of DDR SDRAM) and an intelligent design can translate ALL random writes into a sequential stream.  With enough overprovisioning, the erasing and cleaning that goes on in the background will have very minimal impact.  One thing many people will claim about a SSD is that the erasing and block management will get slower as the drive becomes more full.  This is incorrect -- from the point of view of any block device it is always 100% full, it is not privy to the file system notion of 'free space'.  Addresses are simply overwritten, which makes blocks that previously mapped to those addresses available for writing.  By definition, every write is an overwrite.\nThis paper, is very enlightening:http://research.microsoft.com/users/vijayanp/papers/ssd-usenix08.pdfGiven Intel's particular strenghts and engineering resources, its not a surprise that they are among the first to make a design like this (FusioIO seems to have solved the random write performance issue as well ?).  But as the review you provided links to demonstrates, it is this IP that will provide the performance gains necessary for flash performance to be hands down better than all drives, for all workloads, all the time. It is the same IP that will provide the most longevity and reliability.\nAlso of note for others reading this thread, the review was for Intel's \"mainstream\" device, not the \"enterprise\" one.  The enterprise one claims 3300 random 4k writes/sec and over twice the write throughput.  
I'm sure it will also cost twice as much for less capacity.\nOf particular interest in the short term may be using cheaper, read-biased flash drives for ZFS L2ARC caches for a database -- it may be like running with a couple hundred extra gigs of RAM, but you can still use slow, big drives for mass storage.  The price is prohibitive for putting your whole db on flash if it is not a small one, but this is not true if you're just talking about cache devices or xlogs or temp space.\nhttp://blogs.sun.com/brendan/entry/testOn Mon, Sep 8, 2008 at 4:12 PM, Greg Smith <[email protected]> wrote:\nIf like me you've been reading all the flash SSD drive reviews that come out, you might have also noticed that the performance on write-heavy workloads hasn't been too far ahead of traditional drives.  It's typically been hit or miss as to whether the SDD would really be all that much faster on a real OLTP-ish database workload, compared to a good 10k or 15k drive (WD's Velociraptor is the usual comparison drive).\n\nThat's over as of today:  http://techreport.com/articles.x/15433/9\n\nYou can see what I was talking about above in their Database graph: under heavy load, the Velociraptor pulls ahead of even a good performing flash product (Samsung's FlashSSD), and the latency curve on the next page shows something similar.  But the Intel drive is obviously a whole different class of SSD implementation altogether.  It's not clear yet if that's because of their NCQ support, or maybe the firmware just buffers writes better (they should have tested with NCQ disabled to nail that down).\n\nWith entry-level 64GB Flash drives now available for just under $200 ( http://www.newegg.com/Product/Product.aspx?Item=N82E16820227344 , price is so low because they're closing that model out for a better V2 product) this space is really getting interesting.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 8 Sep 2008 17:19:49 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "[email protected] (\"Merlin Moncure\") writes:\n> I think the SSD manufacturers made a tactical error chasing the\n> notebook market when they should have been chasing the server\n> market...\n\nThat's a very good point; I agree totally!\n-- \noutput = reverse(\"moc.enworbbc\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/nonrdbms.html\n\"We are all somehow dreadfully cracked about the head, and sadly need\nmending.\" --/Moby-Dick/, Ch 17 \n", "msg_date": "Tue, 09 Sep 2008 11:03:50 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "On Mon, 8 Sep 2008, Merlin Moncure wrote:\n\n> What's interesting about the X25 is that they managed to pull the\n> numbers they got out of a MLC flash product. They managed this with a\n> DRAM buffer and the custom controller.\n\nI finally found a good analysis of what's wrong with most of the cheap MLC \ndrives:\n\nhttp://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=7\n\n240ms random write latency...wow, no wonder I keep hearing so many reports \nof cheap SSD just performing miserably. JMicron is one of those companies \nI really avoid, never seen a design from them that wasn't cheap junk. 
\nShame their awful part is in so many of the MLC flash products.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 19 Sep 2008 23:23:32 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "\nOn Mon, 2008-09-08 at 19:12 -0400, Greg Smith wrote:\n> If like me you've been reading all the flash SSD drive reviews...\n\nGreat post, thanks for the information.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 23 Sep 2008 05:48:43 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "Greg Smith wrote:\n> On Mon, 8 Sep 2008, Merlin Moncure wrote:\n> \n> > What's interesting about the X25 is that they managed to pull the\n> > numbers they got out of a MLC flash product. They managed this with a\n> > DRAM buffer and the custom controller.\n> \n> I finally found a good analysis of what's wrong with most of the cheap MLC \n> drives:\n> \n> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=7\n> \n> 240ms random write latency...wow, no wonder I keep hearing so many reports \n> of cheap SSD just performing miserably. JMicron is one of those companies \n> I really avoid, never seen a design from them that wasn't cheap junk. \n> Shame their awful part is in so many of the MLC flash products.\n\nI am surprised it too so long to identify the problem.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 23 Sep 2008 23:24:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "A fantastic review on this issue appeared in July:\nhttp://www.alternativerecursion.info/?p=106\nAnd then the same tests on a RiData SSD show that they are the same drive\nwith the same characteristics:\nhttp://www.alternativerecursion.info/?p=276\n\nMost blamed it on MLC being \"slow\" to write compared to SLC. Technically,\nit is slower, but not by a whole lot -- we're talking a low level difference\nof tens of microseconds. A 250ms latency indicates an issue with the\ncontroller chip. SLC and MLC share similar overall performance\ncharacteristics at the millisecond level. The truth is that MLC designs\nwere low cost designs without a lot of investment in the controller chip.\nThe SLC designs were higher cost designs that focused early on on making\nsmarter and more expensive controllers. SLC will always have an advantage,\nbut it isn't going to be by several orders of magnitude like it was before\nIntel's drive appeared. Its going to be by factors of ~2 to 4 on random\nwrites in the long run. However, for all flash based SSD devices, there are\ndesign tradeoffs to make. Maximizing writes sacrifices reads, maximizing\nrandom access performance reduces streaming performance and capacity. We'll\nhave a variety of devices with varying characteristics optimal for different\ntasks.\n\nOn Tue, Sep 23, 2008 at 8:24 PM, Bruce Momjian <[email protected]> wrote:\n\n> Greg Smith wrote:\n> > On Mon, 8 Sep 2008, Merlin Moncure wrote:\n> >\n> > > What's interesting about the X25 is that they managed to pull the\n> > > numbers they got out of a MLC flash product. 
They managed this with a\n> > > DRAM buffer and the custom controller.\n> >\n> > I finally found a good analysis of what's wrong with most of the cheap\n> MLC\n> > drives:\n> >\n> > http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=7\n> >\n> > 240ms random write latency...wow, no wonder I keep hearing so many\n> reports\n> > of cheap SSD just performing miserably. JMicron is one of those\n> companies\n> > I really avoid, never seen a design from them that wasn't cheap junk.\n> > Shame their awful part is in so many of the MLC flash products.\n>\n> I am surprised it too so long to identify the problem.\n>\n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + If your life is a hard drive, Christ can be your backup. +\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nA fantastic review on this issue appeared in July:http://www.alternativerecursion.info/?p=106And then the same tests on a RiData SSD show that they are the same drive with the same characteristics:\nhttp://www.alternativerecursion.info/?p=276Most blamed it on MLC being \"slow\" to write compared to SLC.  Technically, it is slower, but not by a whole lot -- we're talking a low level difference of tens of microseconds.  A 250ms latency indicates an issue with the controller chip.  SLC and MLC share similar overall performance characteristics at the millisecond level.  The truth is that MLC designs were low cost designs without a lot of investment in the controller chip.  The SLC designs were higher cost designs that focused early on on making smarter and more expensive controllers.  SLC will always have an advantage, but it isn't going to be by several orders of magnitude like it was before Intel's drive appeared.  Its going to be by factors of ~2 to 4 on random writes in the long run.  However, for all flash based SSD devices, there are design tradeoffs to make.  Maximizing writes sacrifices reads, maximizing random access performance reduces streaming performance and capacity.  We'll have a variety of devices with varying characteristics optimal for different tasks.\nOn Tue, Sep 23, 2008 at 8:24 PM, Bruce Momjian <[email protected]> wrote:\nGreg Smith wrote:\n> On Mon, 8 Sep 2008, Merlin Moncure wrote:\n>\n> > What's interesting about the X25 is that they managed to pull the\n> > numbers they got out of a MLC flash product.  They managed this with a\n> > DRAM buffer and the custom controller.\n>\n> I finally found a good analysis of what's wrong with most of the cheap MLC\n> drives:\n>\n> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=7\n>\n> 240ms random write latency...wow, no wonder I keep hearing so many reports\n> of cheap SSD just performing miserably.  JMicron is one of those companies\n> I really avoid, never seen a design from them that wasn't cheap junk.\n> Shame their awful part is in so many of the MLC flash products.\n\nI am surprised it too so long to identify the problem.\n\n--\n  Bruce Momjian  <[email protected]>        http://momjian.us\n  EnterpriseDB                             http://enterprisedb.com\n\n  + If your life is a hard drive, Christ can be your backup. 
+\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 23 Sep 2008 21:25:26 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" }, { "msg_contents": "Scott Carey wrote:\n> A fantastic review on this issue appeared in July:\n> http://www.alternativerecursion.info/?p=106\n> And then the same tests on a RiData SSD show that they are the same \n> drive with the same characteristics:\n> http://www.alternativerecursion.info/?p=276\n> \n> Most blamed it on MLC being \"slow\" to write compared to SLC. \n> Technically, it is slower, but not by a whole lot -- we're talking a low \n> level difference of tens of microseconds. A 250ms latency indicates an \n> issue with the controller chip. SLC and MLC share similar overall \n> performance characteristics at the millisecond level. The truth is that \n> MLC designs were low cost designs without a lot of investment in the \n> controller chip. The SLC designs were higher cost designs that focused \n> early on on making smarter and more expensive controllers. SLC will \n> always have an advantage, but it isn't going to be by several orders of \n> magnitude like it was before Intel's drive appeared. Its going to be by \n> factors of ~2 to 4 on random writes in the long run. However, for all \n> flash based SSD devices, there are design tradeoffs to make. Maximizing \n> writes sacrifices reads, maximizing random access performance reduces \n> streaming performance and capacity. We'll have a variety of devices \n> with varying characteristics optimal for different tasks.\n> \n> On Tue, Sep 23, 2008 at 8:24 PM, Bruce Momjian <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Greg Smith wrote:\n> > On Mon, 8 Sep 2008, Merlin Moncure wrote:\n> >\n> > > What's interesting about the X25 is that they managed to pull the\n> > > numbers they got out of a MLC flash product. They managed this\n> with a\n> > > DRAM buffer and the custom controller.\n> >\n> > I finally found a good analysis of what's wrong with most of the\n> cheap MLC\n> > drives:\n> >\n> >\n> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=7\n> <http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=7>\n> >\n> > 240ms random write latency...wow, no wonder I keep hearing so\n> many reports\n> > of cheap SSD just performing miserably. JMicron is one of those\n> companies\n> > I really avoid, never seen a design from them that wasn't cheap junk.\n> > Shame their awful part is in so many of the MLC flash products.\n> \n> I am surprised it too so long to identify the problem.\n> \n> --\n> Bruce Momjian <[email protected] <mailto:[email protected]>> \n> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> + If your life is a hard drive, Christ can be your backup. +\n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \nAnybody know of any tests on systems that have specific filesystems for\nflash devices?\n\n", "msg_date": "Wed, 24 Sep 2008 07:11:57 -0400", "msg_from": "Steve Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel's X25-M SSD" } ]
[ { "msg_contents": "I've recently installed another drive in my db server and was wondering what\nthe best use of it is. Some thoughts I have are:\n\n\n1. Move some of the databases to the new drive. If this is a good idea, is\nthere a way to do this without a dump/restore? I'd prefer to move the folder\nif possible since that would be much faster.\n\n\n2. Move some logs to the new drive. Again, if this is recommended I'd be\nhappy to, but some directions on the right procedures would be appreciated.\n\n\n3. Other...any other ideas?\n\n\nThanks,\n\n--Rainer\n\n", "msg_date": "Tue, 9 Sep 2008 11:19:14 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "best use of another drive" }, { "msg_contents": "On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager <[email protected]> wrote:\n> I've recently installed another drive in my db server and was wondering what\n> the best use of it is. Some thoughts I have are:\n\nBeing a DBA, I'd tend to say make it a mirror of the first drive.\n", "msg_date": "Mon, 8 Sep 2008 22:11:25 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of another drive" }, { "msg_contents": "On Mon, 8 Sep 2008 22:11:25 -0600\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager <[email protected]>\n> wrote:\n> > I've recently installed another drive in my db server and was\n> > wondering what the best use of it is. Some thoughts I have are:\n> \n> Being a DBA, I'd tend to say make it a mirror of the first drive.\n> \n\n+1\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 8 Sep 2008 21:16:20 -0700", "msg_from": "Joshua Drake <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of another drive" }, { "msg_contents": "Thanks. I should have mentioned the existing disk and the new one are\nalready both mirrored (not together, though :-p). So we had 2 drives that\nwere mirrored and just added 2 more that are mirrored.\n\n--Rainer\n\n-----Original Message-----\nFrom: Joshua Drake [mailto:[email protected]] \nSent: Tuesday, September 09, 2008 1:16 PM\nTo: Scott Marlowe\nCc: Rainer Mager; [email protected]\nSubject: Re: [PERFORM] best use of another drive\n\nOn Mon, 8 Sep 2008 22:11:25 -0600\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager <[email protected]>\n> wrote:\n> > I've recently installed another drive in my db server and was\n> > wondering what the best use of it is. Some thoughts I have are:\n> \n> Being a DBA, I'd tend to say make it a mirror of the first drive.\n> \n\n+1\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 9 Sep 2008 13:35:12 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best use of another drive" }, { "msg_contents": "On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager <[email protected]> wrote:\n> 1. Move some of the databases to the new drive. 
If this is a good idea, is\n> there a way to do this without a dump/restore? I'd prefer to move the folder\n> if possible since that would be much faster.\n\nWhat like tablespaces?\nhttp://www.postgresql.org/docs/8.3/static/manage-ag-tablespaces.html\n\n>\n> 2. Move some logs to the new drive. Again, if this is recommended I'd be\n> happy to, but some directions on the right procedures would be appreciated.\n\nBy logs you mean wal logs? Depends on where your bottleneck is....\ndoing lots of write transactions? Sure this might help...\n", "msg_date": "Mon, 8 Sep 2008 23:13:14 -0600", "msg_from": "\"Alex Hunsaker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best use of another drive" } ]
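A sketch of the tablespace option pointed to above, with hypothetical paths and object names; note that moving an existing table takes a lock and physically rewrites it on the new drive:

CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';
ALTER TABLE some_big_table SET TABLESPACE disk2;
ALTER INDEX some_big_table_pkey SET TABLESPACE disk2;
CREATE DATABASE reporting TABLESPACE disk2;   -- new databases can default to it

For the WAL option, the usual procedure is to stop the server, move the pg_xlog directory onto the new mirrored pair, and leave a symlink in its place; that separates WAL writes from data-file I/O without any SQL-level changes.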
[ { "msg_contents": "Hi Duan,\n\nAs others have said, you should probably attempt to run pg_dump to export your database. If that doesn't work, consider restoring from backup. If the dump does work, you can create a clean PGDATA directory (using initdb like when you setup your original installation), and create a fresh copy of your database using the dump file. Then abandon your potentially damaged PGDATA directory.\n\nFor future reference:\n\n - The \"autovacuum\" parameter in postgresql.conf is off by default under Postgres 8.1. You should probably turn it on to ensure regular vacuuming, unless you have your own cronjob to do the vacuuming.\n\n - About finding old transactions, there are 2 places you have to look for old transactions. The usual place is in pg_stat_activity. The 2nd place is \"pg_prepared_xacts\", where prepared transactions are listed. If there's a prepared transaction in your system, it might explain why your old commit-logs aren't being purged. The following query shows both prepared and normal transactions:\n\nselect\n l.transactionid,\n age(l.transactionid) as age, /* measured in number of other transactions elapsed, not in terms of time */\n l.pid,\n case when l.pid is null then false else true end as is_prepared,\n a.backend_start,\n p.prepared as time_xact_was_prepared,\n p.gid as prepared_name\nfrom\n pg_locks l\n left outer join pg_stat_activity a on l.pid = a.procpid\n left outer join pg_prepared_xacts p on l.transactionid = p.transaction\nwhere\n l.locktype = 'transactionid'\n and l.mode = 'ExclusiveLock'\n and l.granted\norder by age(l.transactionid) desc\n;\n\n transactionid | age | pid | is_prepared | backend_start | time_xact_was_prepared | prepared_name\n---------------+-----+------+-------------+-------------------------------+-------------------------------+--------------------------\n 316645 | 44 | | f | | 2008-09-09 00:31:46.724178-07 | my_prepared_transaction1\n 316689 | 0 | 6093 | t | 2008-09-09 00:40:10.928287-07 | |\n(2 rows)\n\nNote that unless you run this query as a superuser (e.g. \"postgres\"), the columns from pg_stat_activity will only be visible for sessions that belong to you. To rollback this example prepared transaction, you'd type:\n ROLLBACK PREPARED 'my_prepared_transaction1';\n\nHope this helps!\nMatt\n\n\n", "msg_date": "Tue, 09 Sep 2008 01:06:17 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Alvaro Herrera wrote:\n> Move the old clog files back where they were, and run VACUUM FREEZE in\n> all your databases. That should clean up all the old pg_clog files, if\n> you're really that desperate.\n\nHas anyone actually seen a CLOG file get removed under 8.2 or 8.3? How about 8.1?\n\nI'm probably missing something, but looking at src/backend/commands/vacuum.c (under 8.2.9 and 8.3.3), it seems like vac_truncate_clog() scans through *all* tuples of pg_database looking for the oldest datfrozenxid. 
Won't that always be template0, which as far as I know can never be vacuumed (or otherwise connected to)?\n\npostgres=# select datname, datfrozenxid, age(datfrozenxid), datallowconn from pg_database order by age(datfrozenxid), datname ;\n datname | datfrozenxid | age | datallowconn\n------------------+--------------+----------+--------------\n template1 | 36347792 | 3859 | t\n postgres | 36347733 | 3918 | t\n mss_test | 36347436 | 4215 | t\n template0 | 526 | 36351125 | f\n(4 rows)\n\nI looked at several of my 8.2 databases' pg_clog directories, and they all have all the sequentially numbered segments (0000 through current segment). Would it be reasonable for vac_truncate_clog() to skip databases where datallowconn is false (i.e. template0)? Looking back to the 8.1.13 code, it does exactly that:\n if (!dbform->datallowconn)\n continue;\n\nAlso, Duan, if you have lots of files under pg_clog, you may be burning through transactions faster than necessary. Do your applications leave autocommit turned on? And since no one else mentioned it, as a work-around for a small filesystem you can potentially shutdown your database, move the pg_clog directory to a separate filesystem, and create a symlink to it under your PGDATA directory. That's not a solution, just a mitigation.\n\n\n", "msg_date": "Tue, 09 Sep 2008 22:24:43 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: too many clog files" }, { "msg_contents": "> \"Matt Smiley\" <[email protected]> wrote: \n> Alvaro Herrera wrote:\n>> Move the old clog files back where they were, and run VACUUM FREEZE\nin\n>> all your databases. That should clean up all the old pg_clog files,\nif\n>> you're really that desperate.\n> \n> Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?\n \nSome of my high-volume databases don't quite go back to 0000, but this\ndoes seem to be a problem. I have confirmed that VACUUM FREEZE on all\nbut template0 (which doesn't allow connections) does not clean them\nup. No long running transactions are present.\n \n-Kevin\n", "msg_date": "Wed, 10 Sep 2008 09:58:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "On Wed, Sep 10, 2008 at 8:58 AM, Kevin Grittner\n<[email protected]> wrote:\n>> \"Matt Smiley\" <[email protected]> wrote:\n>> Alvaro Herrera wrote:\n>>> Move the old clog files back where they were, and run VACUUM FREEZE\n> in\n>>> all your databases. That should clean up all the old pg_clog files,\n> if\n>>> you're really that desperate.\n>>\n>> Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?\n>\n> Some of my high-volume databases don't quite go back to 0000, but this\n> does seem to be a problem. I have confirmed that VACUUM FREEZE on all\n> but template0 (which doesn't allow connections) does not clean them\n> up. No long running transactions are present.\n\nI have a pretty high volume server that's been online for one month\nand it had somewhere around 53, going back in order to 0000, and it\nwas recently vacuumdb -az 'ed. Running another one. 
No long running\ntransactions, etc...\n", "msg_date": "Wed, 10 Sep 2008 11:18:02 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n> On Wed, Sep 10, 2008 at 8:58 AM, Kevin Grittner\n> <[email protected]> wrote:\n>> Some of my high-volume databases don't quite go back to 0000, but this\n>> does seem to be a problem. I have confirmed that VACUUM FREEZE on all\n>> but template0 (which doesn't allow connections) does not clean them\n>> up. No long running transactions are present.\n\n> I have a pretty high volume server that's been online for one month\n> and it had somewhere around 53, going back in order to 0000, and it\n> was recently vacuumdb -az 'ed. Running another one. No long running\n> transactions, etc...\n\nThe expected behavior (in 8.2 and newer) is to maintain about\nautovacuum_freeze_max_age transactions' worth of clog; which is to say\nabout 50MB at the default settings. If you've got significantly more\nthan that then we should look more closely.\n\nI don't remember what the truncation rule was in 8.1, so I can't speak\nto the OP's complaint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Sep 2008 13:42:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files " }, { "msg_contents": ">>> Tom Lane <[email protected]> wrote: \n \n> The expected behavior (in 8.2 and newer) is to maintain about\n> autovacuum_freeze_max_age transactions' worth of clog; which is to\nsay\n> about 50MB at the default settings.\n \nThe active database I checked, where it didn't go all the way back to\n0000, had 50 MB of files; so I guess it is working as intended.\n \nIt sounds like the advice to the OP that running VACUUM FREEZE on all\ndatabases to clean up the files was off base?\n \n-Kevin\n", "msg_date": "Wed, 10 Sep 2008 12:47:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "And potentially to tune down the number kept by modifying the appropriate\nfreeze parameter for 8.1 (I'm not sure of the details), so that it keeps\nperhaps 20MB or so rather than 50MB.\n\nOn Wed, Sep 10, 2008 at 10:47 AM, Kevin Grittner <\[email protected]> wrote:\n\n> >>> Tom Lane <[email protected]> wrote:\n>\n> > The expected behavior (in 8.2 and newer) is to maintain about\n> > autovacuum_freeze_max_age transactions' worth of clog; which is to\n> say\n> > about 50MB at the default settings.\n>\n> The active database I checked, where it didn't go all the way back to\n> 0000, had 50 MB of files; so I guess it is working as intended.\n>\n> It sounds like the advice to the OP that running VACUUM FREEZE on all\n> databases to clean up the files was off base?\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAnd potentially to tune down the number kept by modifying the appropriate freeze parameter for 8.1 (I'm not sure of the details), so that it keeps perhaps 20MB or so rather than 50MB.\nOn Wed, Sep 10, 2008 at 10:47 AM, Kevin Grittner <[email protected]> wrote:\n>>> Tom Lane <[email protected]> wrote:\n\n> The expected behavior (in 8.2 and newer) is to maintain about\n> autovacuum_freeze_max_age transactions' worth of clog; which is to\nsay\n> about 50MB at the default settings.\n\nThe 
active database I checked, where it didn't go all the way back to\n0000, had 50 MB of files; so I guess it is working as intended.\n\nIt sounds like the advice to the OP that running VACUUM FREEZE on all\ndatabases to clean up the files was off base?\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 10 Sep 2008 10:55:26 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" }, { "msg_contents": "Kevin Grittner escribi�:\n\n> It sounds like the advice to the OP that running VACUUM FREEZE on all\n> databases to clean up the files was off base?\n\nHis responses are not explicit enough to know.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 10 Sep 2008 14:03:45 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: too many clog files" } ]
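To see how much pg_clog a given cluster actually has to keep, the freeze horizon can be checked per database along with the setting that bounds it on 8.2 and later; this is just the flip side of the pg_database query shown earlier in the thread:

SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;
SHOW autovacuum_freeze_max_age;   -- 8.2+: roughly this many transactions' worth of clog is retained
VACUUM FREEZE;                    -- run per database to advance its datfrozenxid

On 8.1 the truncation rule was different, as Tom notes above, so old segments may linger there regardless.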
[ { "msg_contents": "I read something from http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html saying that PostgreSQL can't give the correct result of the some TPC-H queries, I wonder is there any official statements about this, because it will affect our plane of using PostgreSQL as an alternative because it's usability. BTW I don't think PostgreSQL performances worse because the default configuration usually can't use enough resources of the computer, as as memory.\n\n\n\n\nI read something from http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html saying \r\nthat PostgreSQL can't give the correct result of the some TPC-H queries, I \r\nwonder is there any official statements about this, because it will affect our \r\nplane of using PostgreSQL as an alternative because it's usability. BTW I don't \r\nthink PostgreSQL performances worse because the default configuration usually \r\ncan't use enough resources of the computer, as as \r\nmemory.", "msg_date": "Tue, 9 Sep 2008 19:59:49 +0800", "msg_from": "\"Amber\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL TPC-H test result?" }, { "msg_contents": "On Tue, Sep 09, 2008 at 07:59:49PM +0800, Amber wrote:\n\n> I read something from\n> http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html\n\nGiven that the point of that \"study\" is to prove something about\nperformance, one should be leery of any claims based on an \"out of the\nbox\" comparison. Particularly since the \"box\" their own product comes\nout of is \"compiled from CVS checkout\". Their argument seems to be\nthat people can learn how to drive CVS and to compile software under\nactive development, but can't read the manual that comes with Postgres\n(and a release of Postgres well over a year old, at that). \n\nI didn't get any further in reading the claims, because it's obviously\nnothing more than a marketing effort using the principle that deriding\neveryone else will make them look better. Whether they have a good\nproduct is another question entirely.\n\nA\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Tue, 9 Sep 2008 08:39:47 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "Yes, we don't care about the performance results, but we do care about the point that PostgreSQL can't give the correct results of TPC-H queries.\r\n\r\n--------------------------------------------------\r\nFrom: \"Andrew Sullivan\" <[email protected]>\r\nSent: Tuesday, September 09, 2008 8:39 PM\r\nTo: <[email protected]>\r\nSubject: Re: [GENERAL] PostgreSQL TPC-H test result?\r\n\r\n> On Tue, Sep 09, 2008 at 07:59:49PM +0800, Amber wrote:\r\n> \r\n>> I read something from\r\n>> http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html\r\n> \r\n> Given that the point of that \"study\" is to prove something about\r\n> performance, one should be leery of any claims based on an \"out of the\r\n> box\" comparison. Particularly since the \"box\" their own product comes\r\n> out of is \"compiled from CVS checkout\". Their argument seems to be\r\n> that people can learn how to drive CVS and to compile software under\r\n> active development, but can't read the manual that comes with Postgres\r\n> (and a release of Postgres well over a year old, at that). 
\r\n> \r\n> I didn't get any further in reading the claims, because it's obviously\r\n> nothing more than a marketing effort using the principle that deriding\r\n> everyone else will make them look better. Whether they have a good\r\n> product is another question entirely.\r\n> \r\n> A\r\n> -- \r\n> Andrew Sullivan\r\n> [email protected]\r\n> +1 503 667 4564 x104\r\n> http://www.commandprompt.com/\r\n> \r\n> -- \r\n> Sent via pgsql-general mailing list ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-general\r\n> ", "msg_date": "Tue, 9 Sep 2008 22:06:01 +0800", "msg_from": "\"Amber\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "On Tue, Sep 9, 2008 at 7:06 AM, Amber <[email protected]> wrote:\n> Yes, we don't care about the performance results, but we do care about the point that PostgreSQL can't give the correct results of TPC-H queries.\n\nIt would be nice to know about the data, queries, and the expected\nresults of their tests just so we could see this for ourselves.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n", "msg_date": "Tue, 9 Sep 2008 08:17:33 -0700", "msg_from": "\"Richard Broersma\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "On Tuesday 09 September 2008 10:06:01 Amber wrote:\n> From: \"Andrew Sullivan\" <[email protected]>\n> Sent: Tuesday, September 09, 2008 8:39 PM\n> To: <[email protected]>\n> Subject: Re: [GENERAL] PostgreSQL TPC-H test result?\n>\n> > On Tue, Sep 09, 2008 at 07:59:49PM +0800, Amber wrote:\n> >> I read something from\n> >> http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html\n> >\n> > Given that the point of that \"study\" is to prove something about\n> > performance, one should be leery of any claims based on an \"out of the\n> > box\" comparison. Particularly since the \"box\" their own product comes\n> > out of is \"compiled from CVS checkout\". Their argument seems to be\n> > that people can learn how to drive CVS and to compile software under\n> > active development, but can't read the manual that comes with Postgres\n> > (and a release of Postgres well over a year old, at that).\n> >\n> > I didn't get any further in reading the claims, because it's obviously\n> > nothing more than a marketing effort using the principle that deriding\n> > everyone else will make them look better. Whether they have a good\n> > product is another question entirely.\n> >\n> > >Yes, we don't care about the performance results, but we do care \n> > >about the \n> > > point that PostgreSQL can't give the correct results of TPC-H queries.\n\nGiven the point of those benchmarks is to make other systems look bad, I think \nyou have to take them with a grain of salt. Since we don't know what the \nerrors/results were, and no information is giving, we are left to wonder if \nthis is a problem with the software or the tester. The site would have us \nbelieve the former, but I think I would lean toward the latter... case in \npoint, I did a quick google and turned up this link: \nhttp://www.it.iitb.ac.in/~chetanv/personal/acads/db/report_html/node10.html. \nIt isn't terribly informative, but it doesindicate one thing, someone else \nwas able to run query #6 correctly, while the above site claims it returns an \nerror. 
Now when I look at query#6 from that site, I notice it shows the \nfollowing syntax:\n\ninterval '1' year. \n\nwhen I saw that, it jumped out at me as something that could be an issue, and \nit is:\n\npagila=# select now() - interval '1' year, now() - interval '1 year';\n ?column? | ?column?\n-------------------------------+-------------------------------\n 2008-09-09 11:28:46.938209-04 | 2007-09-09 11:28:46.938209-04\n(1 row)\n\nNow, I'm not sure if there is an issue that monet supports the first syntax \nand so when they ran thier test on postgres this query produced wrong \nresults, but that seems possible. In this case I would wonder if the first \nsyntax is sql compliant, but it doesn't really matter, the tpc-h allows for \nchanges to queries to support syntax variations between databases; I'm pretty \nsure I could make suttle changes to \"break\" other databases as well. \n\nIncidentally, I poked Mark Wong, who used to work at the OSDL (big linux \nkernel hacking shop), and he noted he has successfully run the tpc-h tests \nbefore on postgres. \n\nIn the end, I can't speak to what the issues are wrt monet and postgres and \nthier tpc-h benchmarks, but personally I don't think they are worth worring \nabout. \n\n-- \nRobert Treat\nhttp://www.omniti.com\nDatabase: Scalability: Consulting:\n", "msg_date": "Tue, 09 Sep 2008 11:29:54 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "On Tue, Sep 9, 2008 at 10:06 AM, Amber <[email protected]> wrote:\n> Yes, we don't care about the performance results, but we do care about the point that PostgreSQL can't give the correct results of TPC-H queries.\n\nPostgreSQL, at least in terms of the open source databases, is\nprobably your best bet if you are all concerned about correctness. Do\nnot give any credence to a vendor published benchmark unless the test\nis published and can be independently verifed.\n\nmerlin\n", "msg_date": "Tue, 9 Sep 2008 11:35:16 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "On Tue, Sep 09, 2008 at 10:06:01PM +0800, Amber wrote:\n> Yes, we don't care about the performance results, but we do care about the point that PostgreSQL can't give the correct results of TPC-H queries.\n> \n\nI have never heard a reputable source claim this. I have grave doubts\nabout their claim: they don't specify what implementation of TPC-H\nthey use. They don't actually have the right, AIUI, to claim they\ntested under TPC-H, since their results aren't listed anywhere on\nhttp://www.tpc.org/tpch/results/tpch_results.asp?orderby=dbms. It\ncould well be that they made up something that kinda does something\nlike TPC-H, tailored to how their database works, and then claimed\nothers can't do the job. That's nice marketing material, but it's not\na meaningful test result.\n\nWithout access to the methodology, you should be wary of accepting any\nof the conclusions.\n\nThere is, I understand, an implementation of something like TPC-H that\nyou could use to test it yourself. http://osdldbt.sourceforge.net/.\nDBT-3 is supposed to be that workload. Please note that the license\ndoes not allow you to publish competitive tests for marketing\nreasons. 
but you could see for yourself whether the claim is true\nthat way.\n\nA\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Tue, 9 Sep 2008 11:51:13 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> http://www.it.iitb.ac.in/~chetanv/personal/acads/db/report_html/node10.html. \n> It isn't terribly informative, but it doesindicate one thing, someone else \n> was able to run query #6 correctly, while the above site claims it returns an \n> error. Now when I look at query#6 from that site, I notice it shows the \n> following syntax:\n\n> interval '1' year. \n\n> when I saw that, it jumped out at me as something that could be an issue, and \n> it is:\n\nYeah. This is SQL spec syntax, but it's not fully implemented in\nPostgres: the grammar supports it but the info doesn't get propagated to\ninterval_in, and interval_in wouldn't know what to do even if it did\nhave the information that there was a YEAR qualifier after the literal.\n\nThat's probably not good because it *looks* like we support the syntax,\nbut in fact produce non-spec-compliant results. I think it might be\nbetter if we threw an error.\n\nOr someone could try to make it work, but given that no one has taken\nthe slightest interest since Tom Lockhart left the project, I wouldn't\nhold my breath waiting for that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Sep 2008 16:26:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result? " }, { "msg_contents": "On Tue, 9 Sep 2008, Amber wrote:\n\n> I read something from \n> http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html \n> saying that PostgreSQL can't give the correct result of the some TPC-H \n> queries\n\nJignesh Shah at Sun ran into that same problem. It's mentioned briefly in \nhis presentation at \nhttp://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_postgresql on \npages 26 and 27. 5 of the 22 reference TCP-H queries (4, 5, 6, 10, 14) \nreturned zero rows immediately for his tests. Looks like the MonetDB crew \nis saying it does that on queries 4,5,6,10,12,14,15 and that 20 takes too \nlong to run to generate a result. Maybe 12/15/20 were fixed by changes in \n8.3, or perhaps there were subtle errors there that Jignesh didn't \ncatch--it's not like he did a formal submission run, was just kicking the \ntires. I suspect the difference on 20 was that his hardware and tuning \nwas much better, so it probably did execute fast enough.\n\nWhile some of the MonetDB bashing in this thread was unwarranted, it is \nquite inappropriate that they published performance results here. Would \nbe nice if someone in the community were to grab ahold of the TPC-H \nproblems and try to shake them out.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 9 Sep 2008 17:42:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Tue, 9 Sep 2008, Amber wrote:\n>\n>> I read something from \n>> http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html \n>> saying that PostgreSQL can't give the correct result of the some \n>> TPC-H queries\n>\n> Jignesh Shah at Sun ran into that same problem. 
It's mentioned \n> briefly in his presentation at \n> http://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_postgresql \n> on pages 26 and 27. 5 of the 22 reference TCP-H queries (4, 5, 6, 10, \n> 14) returned zero rows immediately for his tests. Looks like the \n> MonetDB crew is saying it does that on queries 4,5,6,10,12,14,15 and \n> that 20 takes too long to run to generate a result. Maybe 12/15/20 \n> were fixed by changes in 8.3, or perhaps there were subtle errors \n> there that Jignesh didn't catch--it's not like he did a formal \n> submission run, was just kicking the tires. I suspect the difference \n> on 20 was that his hardware and tuning was much better, so it probably \n> did execute fast enough.\n>\n> While some of the MonetDB bashing in this thread was unwarranted, it \n> is quite inappropriate that they published performance results here. \n> Would be nice if someone in the community were to grab ahold of the \n> TPC-H problems and try to shake them out.\n>\n\nHmm This is the second time MonetDB has published PostgreSQL numbers. I \nthink I will try to spend few days on TPC-H again on a much smaller \nscale (to match what MonetDB used) and start discussions on solving the \nproblem.. Keep tuned.\n\nRegards,\nJignesh\n\n", "msg_date": "Tue, 09 Sep 2008 18:28:54 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "\nOn Tue, 2008-09-09 at 16:26 -0400, Tom Lane wrote:\n\n> That's probably not good because it *looks* like we support the syntax,\n> but in fact produce non-spec-compliant results. I think it might be\n> better if we threw an error.\n\nDefinitely. If we accept SQL Standard syntax like this but then not do\nwhat we should, it is clearly an ERROR. Our reputation will be damaged\nif we don't, since people will think that we are blase about standards\ncompliance and about query correctness. Please lets move swiftly to plug\nthis hole, as if it were a data loss bug (it is, if it causes wrong\nanswers to queries for unsuspecting users).\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 10 Sep 2008 10:20:32 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "On Tue, Sep 09, 2008 at 05:42:50PM -0400, Greg Smith wrote:\n>\n> While some of the MonetDB bashing in this thread was unwarranted, \n\nWhat bashing? I didn't see any bashing of them. \n\nA\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Wed, 10 Sep 2008 10:08:26 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "Tom Lane wrote:\n>> interval '1' year. \n> \n> ...is SQL spec syntax, but it's not fully implemented in Postgres...\n> \n> Or someone could try to make it work, but given that no one has taken\n> the slightest interest since Tom Lockhart left the project, I wouldn't\n> hold my breath waiting for that.\n\nI have interest. For 5 years I've been maintaining a patch for a client\nthat allows the input of ISO-8601 intervals (like 'P1YT1M') rather than\nthe nonstandard shorthand ('1Y1M') that postgresql supports[1].\n\nI'd be interested in working on this. 
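(For anyone who hasn't run into the ISO-8601 duration form: with the patch applied, the idea is that these two inputs denote the same interval -- the first is the ISO spelling, the second the shorthand stock PostgreSQL already accepts:

  select interval 'P1Y2M3DT4H5M6S';
  select interval '1 year 2 mons 3 days 04:05:06';

Stock 8.3 doesn't accept the first form; adding it is exactly what the patch does.)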
Especially if supporting SQL\nstandard interval syntax could improve the chances of getting my\nISO-8601-interval-syntax replacing nonstandard-postgres-shorthand-intervals\npatch accepted again, I'd be quite happy work on it.\n\nTom in 2003 said my code looked cleaner than the current code[2], and\nthe patch was accepted[3] for a while before being rejected - I believe\nbecause Peter said he'd like to see the SQL standard intervals first.\nI see it's still a TODO, though.\n\n> the grammar supports it but the info doesn't get propagated to\n> interval_in, and interval_in wouldn't know what to do even if it did\n> have the information that there was a YEAR qualifier after the literal.\n\nAny hints on how best to propagate the needed info from the grammar?\nOr should it be obvious to me from reading the code?\n\n[1] http://archives.postgresql.org/pgsql-patches/2003-09/msg00119.php\n[2] http://archives.postgresql.org/pgsql-patches/2003-09/msg00121.php\n[3] http://archives.postgresql.org/pgsql-patches/2003-12/msg00253.php\n\n Ron Mayer\n (formerly [email protected] who\n posted those ISO-8601 interval patches)\n\n", "msg_date": "Thu, 11 Sep 2008 15:48:54 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "Ron Mayer wrote:\n> Tom Lane wrote:\n>> Or someone could try to make it work, but given that no one has taken\n>> the slightest interest since Tom Lockhart left the project, I wouldn't\n>> hold my breath waiting for that.\n> \n> I have interest. For 5 years I've been maintaining a patch for a client\n\nDoh. Now that I catch up on emails I see Tom has a patch\nin a different thread. I'll follow up there...\n\n", "msg_date": "Thu, 11 Sep 2008 16:09:24 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "Moving this thread to Performance alias as it might make more sense for \nfolks searching on this topic:\n\n\n\nGreg Smith wrote:\n> On Tue, 9 Sep 2008, Amber wrote:\n>\n>> I read something from \n>> http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html \n>> saying that PostgreSQL can't give the correct result of the some \n>> TPC-H queries\n>\n> Jignesh Shah at Sun ran into that same problem. It's mentioned \n> briefly in his presentation at \n> http://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_postgresql \n> on pages 26 and 27. 5 of the 22 reference TCP-H queries (4, 5, 6, 10, \n> 14) returned zero rows immediately for his tests. Looks like the \n> MonetDB crew is saying it does that on queries 4,5,6,10,12,14,15 and \n> that 20 takes too long to run to generate a result. Maybe 12/15/20 \n> were fixed by changes in 8.3, or perhaps there were subtle errors \n> there that Jignesh didn't catch--it's not like he did a formal \n> submission run, was just kicking the tires. 
I suspect the difference \n> on 20 was that his hardware and tuning was much better, so it probably \n> did execute fast enough.\n\nI redid a quick test with the same workload on one of my systems with SF \n10 which is about 10GB\n(I hope it comes out properly displayed)\n\n Jignesh From Monet (8.3T/8.2.9)\n\nQ Time PG8.3.3 Time PG8.2.9 Ratio\n\n1 429.01 510 0.84\n\n2 3.65 54 0.07\n\n3 33.49 798 0.04\n\n4 6.53 Empty 35 (E) 0.19\n\n5 8.45 Empty 5.5(E) 1.54\n\n6 32.84 Empty 172 (E) 0.19\n\n7 477.95 439 1.09\n\n8 58.55 251 0.23\n\n9 781.96 2240 0.35\n\n10 9.03 Empty 6.1(E) 1.48\n\n11 3.57 Empty 25 0.14\n\n12 56.11 Empty 179 (E) 0.31\n\n13 61.01 140 0.44\n\n14 30.69 Empty 169 (E) 0.18\n\n15 32.81 Empty 168 (E) 0.2\n\n16 23.98 115 0.21\n\n17 Did not finish Did not finish\n\n18 58.93 882 0.07\n\n19 71.55 218 0.33\n\n20 Did not finish Did not finish\n\n21 550.51 477 1.15\n\n22 6.21 Did not finish \n\n\n\nAll time is in seconds (sub seconds where availabe)\nRatio > 1 means 8.3.3 is slower and <1 means 8.3.3 is faster\n\nMy take on the results:\n\n* I had to tweak the statement of Q1 in order to execute it.\n (TPC-H kit does not directly support POSTGRESQL statements)\n\n* Timings with 8.3.3 and bit of tuning gives much better time overall\n This was expected (Some queries finish in 7% of the time than what\n MonetDB reported. From the queries that worked only Q7 & Q21 seem to\n have regressed)\n\n* However Empty rows results is occuring consistently\n (Infact Q11 also returned empty for me while it worked in their test)\n Queries: 4,5,6,10,11,12,14,15\n (ACTION ITEM: I will start separate threads for each of those queries in\n HACKERS alias to figure out the problem since it looks like Functional\n problem to me and should be interesting to hackers alias)\n\n* Two queries 17,20 looks like will not finish (I let Q17 to run for 18 \nhrs and\n yet it had not completed. As for Q20 I killed it as it was approaching \nan hour.)\n (ACTION ITEM: Not sure whether debugging for these queries will go in \nhackers or\n perform alias but I will start a thread on them too.)\n\n* Looks like bit of tuning is required for Q1, Q7, Q9, Q21 to improve their\n overall time. Specially understanding if PostgreSQL is missing a more \nefficient\n plan for them.\n (ACTION ITEM: I will start separate threads on performance alias to \ndig into\n those queries)\n\n\nI hope to start separate threads for each queries so we can track them \neasier. I hope to provide explain analyze outputs for each one of them \nand lets see if there are any problems.\n\nFeedback welcome on what you want to see for each threads.\n\nRegards,\nJignesh\n\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nSun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Thu, 11 Sep 2008 23:30:39 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL TPC-H test result?" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> * However Empty rows results is occuring consistently\n> (Infact Q11 also returned empty for me while it worked in their test)\n> Queries: 4,5,6,10,11,12,14,15\n> (ACTION ITEM: I will start separate threads for each of those queries in\n> HACKERS alias to figure out the problem since it looks like Functional\n> problem to me and should be interesting to hackers alias)\n\nSee discussion suggesting that this is connected to misinterpretation of\nINTERVAL literals. 
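(A quick sanity check, as a sketch: the date-range predicates in those queries hinge on expressions like

  select date '1994-01-01' + interval '1' year;

which ought to come out as 1995-01-01. If the YEAR qualifier is being dropped, the upper bound of the range lands somewhere else entirely and the query quietly selects the wrong slice of lineitem.)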
If TPC-H relies heavily on syntax that we'd get\nwrong, then pretty much every test result has to be under suspicion,\nsince we might be fetching many more or fewer rows than the test\nintends.\n\nI've recently committed fixes that I think would cover this, but you'd\nreally need to compare specific query rowcounts against other DBMSes\nto make sure we're all on the same page.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Sep 2008 23:48:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL TPC-H test result? " }, { "msg_contents": "On Thu, Sep 11, 2008 at 11:30 PM, Jignesh K. Shah <[email protected]> wrote:\n> Moving this thread to Performance alias as it might make more sense for\n> folks searching on this topic:\n\nYou should be using DBT-3. Similarly, a scale factor of 10 is\npointless. How many data warehouses are only 10GB? Also, it's\nwell-known that MonetDB will quickly fall over when you run a test\nlarger than can fit in memory. In the real benchmark, the minimum\nscale factor is 100GB; try it and see what you get.\n\nIf you have the resources and want to compare it to something, compare\nit with Oracle on the exact same system. If tuned properly, Oracle\n10g (Standard Edition with the exact same tables/indexes/queries as\nPostgres) is ~5-10x faster and Enterprise Edition is ~50-100x faster.\nTo be fair, an Oracle Enterprise Edition configuration for TPC-H uses\nadvanced partitioning and materialized views, both of which Postgres\ndoes not support, which makes it an apples-to-oranges comparison. I\nhaven't tried 11g, but I expect it will perform a bit better in this\narea given several of the enhancements. Also, while it's not widely\nknown, if you wanted to compare systems and don't want to set it all\nup yourself, Oracle released Oracle-compatible versions of OSDL's\nDatabase Test Suite for DBT-2 (TPC-C) and DBT-3 (TPC-H) as part of\ntheir Linux Test Kit which can be found at oss.oracle.com. Due to\nOracle's license, I can't give you exact timings, but I have confirmed\nwith several other benchmark professionals that the results mentioned\nabove have been confirmed by others as well.\n\nTo be clear, I'm not trying to bash on PG and I don't want to start a\nflame-war. I just think that people should be aware of where we stand\nin comparison to commercial systems and understand that there's quite\na bit of work to be done in the VLDB area. Specifically, I don't\nthink we should be striving for great TPC-H performance, but I believe\nthere is some areas we could improve based on it. Similarly, this is\nan area where a properly-utilized fadvise may show some benefit.\n\nAs for running the TPC-H on Postgres, you need a\ndefault_statistics_target of at least 250. IIRC, the last time I\nchecked (on 8.3), you really needed a statistics target around\n400-500. For the most part, the planner is choosing a bad plan for\nseveral of the queries. After you resolve that, you'll quickly notice\nthat Postgres' buffer manager design and the lack of a good\nmulti-block read quickly comes into play. The hash join\nimplementation also has a couple issues which I've recently seen\nmentioned in other threads.\n\nUse DBT-3, it will save you quite a few headaches :)\n\n-- \nJonah H. Harris, Senior DBA\nmyYearbook.com\n", "msg_date": "Fri, 12 Sep 2008 00:46:55 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL TPC-H test result?" } ]
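To act on the statistics-target advice above, the setting can be raised either globally or per column before re-analyzing -- the numbers here are just the ones suggested in the thread, not a general recommendation:

  -- session-wide (postgresql.conf accepts the same parameter):
  SET default_statistics_target = 250;
  ANALYZE;

  -- or only for the columns the TPC-H predicates hit, for example:
  ALTER TABLE lineitem ALTER COLUMN l_shipdate SET STATISTICS 400;
  ANALYZE lineitem;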
[ { "msg_contents": "Hi out there,\n\nI've some little questions, perhaps you can help me...\n\nAt the moment, we're planning our new clustered ERP system which\nconsists of a java application server and a postgresql database. The\nhardware, which is actually used for that system isn't able to handle\nthe workload (2 Processors, load of 6-8 + 12GB Ram), so it is very, very\nslow - and that although we already deactived a lot of stuff we normally\nwant to do, like a logging and something like that...\nWe've already choosen some hardware for the new cluster (2x quadcore\nXeon + 64GB Ram should handle that - also in case of failover when one\nserver has to handle both, applicaton and database! The actual system\ncan't do that anymore...) but I also have to choose the filesystem\nhardware. And that is a problem - we know, that the servers will be fast\nenough, but we don't know, how many I/O performance is needed.\nAt the moment, we're using a scsi based shared storage (HP MSA500G2 -\nwhich contains 10 disks for the database - 8xdata(raid 1+0)+2x\nlogs(raid1) ) and we often have a lot wait I/O when 200 concurrent users\nare working... (when all features we need are activated, that wait I/O\nwill heavy increase, we think...)\nSo in order to get rid of wait I/O (as far as possible), we have to\nincrease the I/O performance. Because of there are a lot storage systems\nout there, we need to know how many I/O's per second we actually need.\n(To decide, whether a storage systems can handle our load or a bigger\nsystem is required. ) Do you have some suggestions, how to measure that?\nDo you have experience with postgres on something like HP MSA2000(10-20\ndisks) or RamSan systems?\n\nBest regards,\nAndre\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 09 Sep 2008 15:59:16 +0200", "msg_from": "Andre Brandt <[email protected]>", "msg_from_op": true, "msg_subject": "How to measure IO performance?" }, { "msg_contents": "On Tue, Sep 9, 2008 at 7:59 AM, Andre Brandt <[email protected]> wrote:\n> Hi out there,\n>\n> I've some little questions, perhaps you can help me...\n>\n> So in order to get rid of wait I/O (as far as possible), we have to\n> increase the I/O performance. Because of there are a lot storage systems\n> out there, we need to know how many I/O's per second we actually need.\n> (To decide, whether a storage systems can handle our load or a bigger\n> system is required. ) Do you have some suggestions, how to measure that?\n> Do you have experience with postgres on something like HP MSA2000(10-20\n> disks) or RamSan systems?\n\nGenerally the best bang for the buck is with Direct Attached Storage\nsystem with a high quality RAID controllers, like the 3Ware or Areca\nor LSI or HPs 800 series. I've heard a few good reports on higher end\nadaptecs, but most adaptec RAID controllers are pretty poor db\nperformers.\n\nTo get an idea of how much I/O you'll need, you need to see how much\nyou use now. A good way to do that is to come up with a realistic\nbenchmark and run it at a low level of concurrency on your current\nsystem, while running iostat and / or vmstat in the background.\npidstat can be pretty useful too. Run a LONG benchmark so it averages\nout, you don't want to rely on a 5 minute benchmark. Once you have\nsome base numbers, increase the scaling factor (i.e. 
number of threads\nunder test) and measure I/O and CPU etc for that test.\n\nNow, figure out how high a load factor you'd need to run your full\nload and multiply that times your 1x benchmark's I/O numbers, plus a\nfudge factor or 2 to 10 times for overhead.\n\nThe standard way to hand more IO ops per second is to add spindles.\nIt might take more than one RAID controller or external RAID enclosure\nto meet your needs.\n", "msg_date": "Tue, 9 Sep 2008 12:14:37 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to measure IO performance?" } ]
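A minimal way to capture the numbers described above while the benchmark runs (output file names are placeholders):

  iostat -dx 10 > iostat.log &
  vmstat 10 > vmstat.log &

The r/s and w/s columns of the extended iostat output are the read and write operations per second the current workload actually issues, and %util shows how close each device already is to saturation; those are the figures to scale up when sizing the new storage.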
[ { "msg_contents": "Hi all,\n\nI've started to display the effects of changing the Linux block device\nreadahead buffer to the sequential read performance using fio. There\nare lots of raw data buried in the page, but this is what I've\ndistilled thus far. Please have a look and let me know what you\nthink:\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide#Readahead_Buffer_Size\n\nRegards,\nMark\n", "msg_date": "Tue, 9 Sep 2008 23:53:17 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Effects of setting linux block device readahead size" }, { "msg_contents": "On Tue, 9 Sep 2008, Mark Wong wrote:\n\n> I've started to display the effects of changing the Linux block device\n> readahead buffer to the sequential read performance using fio.\n\nAh ha, told you that was your missing tunable. I'd really like to see the \nwhole table of one disk numbers re-run when you get a chance. The \nreversed ratio there on ext2 (59MB read/92MB write) was what tipped me off \nthat something wasn't quite right initially, and until that's fixed it's \nhard to analyze the rest.\n\nBased on your initial data, I'd say that the two useful read-ahead \nsettings for this system are 1024KB (conservative but a big improvement) \nand 8192KB (point of diminishing returns). The one-disk table you've got \n(labeled with what the default read-ahead is) and new tables at those two \nvalues would really flesh out what each disk is capable of.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 10 Sep 2008 10:49:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "How does that readahead tunable affect random reads or mixed random /\nsequential situations? In many databases, the worst case scenarios aren't\nwhen you have a bunch of concurrent sequential scans but when there is\nenough random read/write concurrently to slow the whole thing down to a\ncrawl. How the file system behaves under this sort of concurrency\n\nI would be very interested in a mixed fio profile with a \"background writer\"\ndoing moderate, paced random and sequential writes combined with concurrent\nsequential reads and random reads.\n\n-Scott\n\nOn Wed, Sep 10, 2008 at 7:49 AM, Greg Smith <[email protected]> wrote:\n\n> On Tue, 9 Sep 2008, Mark Wong wrote:\n>\n> I've started to display the effects of changing the Linux block device\n>> readahead buffer to the sequential read performance using fio.\n>>\n>\n> Ah ha, told you that was your missing tunable. I'd really like to see the\n> whole table of one disk numbers re-run when you get a chance. The reversed\n> ratio there on ext2 (59MB read/92MB write) was what tipped me off that\n> something wasn't quite right initially, and until that's fixed it's hard to\n> analyze the rest.\n>\n> Based on your initial data, I'd say that the two useful read-ahead settings\n> for this system are 1024KB (conservative but a big improvement) and 8192KB\n> (point of diminishing returns). 
The one-disk table you've got (labeled with\n> what the default read-ahead is) and new tables at those two values would\n> really flesh out what each disk is capable of.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHow does that readahead tunable affect random reads or mixed random / sequential situations?  In many databases, the worst case scenarios aren't when you have a bunch of concurrent sequential scans but when there is enough random read/write concurrently to slow the whole thing down to a crawl.   How the file system behaves under this sort of concurrency\nI would be very interested in a mixed fio profile with a \"background writer\" doing moderate, paced random and sequential writes combined with concurrent sequential reads and random reads.-Scott\nOn Wed, Sep 10, 2008 at 7:49 AM, Greg Smith <[email protected]> wrote:\nOn Tue, 9 Sep 2008, Mark Wong wrote:\n\n\nI've started to display the effects of changing the Linux block device\nreadahead buffer to the sequential read performance using fio.\n\n\nAh ha, told you that was your missing tunable.  I'd really like to see the whole table of one disk numbers re-run when you get a chance.  The reversed ratio there on ext2 (59MB read/92MB write) was what tipped me off that something wasn't quite right initially, and until that's fixed it's hard to analyze the rest.\n\nBased on your initial data, I'd say that the two useful read-ahead settings for this system are 1024KB (conservative but a big improvement) and 8192KB (point of diminishing returns).  The one-disk table you've got (labeled with what the default read-ahead is) and new tables at those two values would really flesh out what each disk is capable of.\n\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 10 Sep 2008 09:26:25 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Wed, Sep 10, 2008 at 9:26 AM, Scott Carey <[email protected]> wrote:\n> How does that readahead tunable affect random reads or mixed random /\n> sequential situations? In many databases, the worst case scenarios aren't\n> when you have a bunch of concurrent sequential scans but when there is\n> enough random read/write concurrently to slow the whole thing down to a\n> crawl. How the file system behaves under this sort of concurrency\n>\n> I would be very interested in a mixed fio profile with a \"background writer\"\n> doing moderate, paced random and sequential writes combined with concurrent\n> sequential reads and random reads.\n\nThe data for the other fio profiles we've been using are on the wiki,\nif your eyes can take the strain. We are working on presenting the\ndata in a more easily digestible manner. I don't think we'll add any\nmore fio profiles in the interest of moving on to doing some sizing\nexercises with the dbt2 oltp workload. We're just going to wrap up a\ncouple more scenarios first and get through a couple of conference\npresentations. 
The two conferences in particular are the Linux\nPlumbers Conference, and the PostgreSQL Conference: West 08, which are\nboth in Portland, Oregon.\n\nRegards,\nMark\n", "msg_date": "Wed, 10 Sep 2008 10:38:30 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "I am planning my own I/O tuning exercise for a new DB and am setting up some\nfio profiles. I appreciate the work and will use some of yours as a\nbaseline to move forward. I will be making some mixed mode fio profiles and\nrunning our own application and database as a test as well. I'll focus on\next3 versus xfs (Linux) and zfs (Solaris) however, and expect to be working\nwith sequential transfer rates many times larger than your test and am\ninterested in performance under heavy concurrency -- so the results may\ndiffer quite a bit.\n\nI'll share the info I can.\n\n\nOn Wed, Sep 10, 2008 at 10:38 AM, Mark Wong <[email protected]> wrote:\n\n> On Wed, Sep 10, 2008 at 9:26 AM, Scott Carey <[email protected]>\n> wrote:\n> > How does that readahead tunable affect random reads or mixed random /\n> > sequential situations? In many databases, the worst case scenarios\n> aren't\n> > when you have a bunch of concurrent sequential scans but when there is\n> > enough random read/write concurrently to slow the whole thing down to a\n> > crawl. How the file system behaves under this sort of concurrency\n> >\n> > I would be very interested in a mixed fio profile with a \"background\n> writer\"\n> > doing moderate, paced random and sequential writes combined with\n> concurrent\n> > sequential reads and random reads.\n>\n> The data for the other fio profiles we've been using are on the wiki,\n> if your eyes can take the strain. We are working on presenting the\n> data in a more easily digestible manner. I don't think we'll add any\n> more fio profiles in the interest of moving on to doing some sizing\n> exercises with the dbt2 oltp workload. We're just going to wrap up a\n> couple more scenarios first and get through a couple of conference\n> presentations. The two conferences in particular are the Linux\n> Plumbers Conference, and the PostgreSQL Conference: West 08, which are\n> both in Portland, Oregon.\n>\n> Regards,\n> Mark\n>\n\nI am planning my own I/O tuning exercise for a new DB and am setting up some fio profiles.  I appreciate the work and will use some of yours as a baseline to move forward.  I will be making some mixed mode fio profiles and running our own application and database as a test as well.  I'll focus on ext3 versus xfs (Linux) and zfs (Solaris) however, and expect to be working with sequential transfer rates many times larger than your test and am interested in performance under heavy concurrency -- so the results may differ quite a bit.\nI'll share the info I can.On Wed, Sep 10, 2008 at 10:38 AM, Mark Wong <[email protected]> wrote:\nOn Wed, Sep 10, 2008 at 9:26 AM, Scott Carey <[email protected]> wrote:\n> How does that readahead tunable affect random reads or mixed random /\n> sequential situations?  In many databases, the worst case scenarios aren't\n> when you have a bunch of concurrent sequential scans but when there is\n> enough random read/write concurrently to slow the whole thing down to a\n> crawl.   
How the file system behaves under this sort of concurrency\n>\n> I would be very interested in a mixed fio profile with a \"background writer\"\n> doing moderate, paced random and sequential writes combined with concurrent\n> sequential reads and random reads.\n\nThe data for the other fio profiles we've been using are on the wiki,\nif your eyes can take the strain.  We are working on presenting the\ndata in a more easily digestible manner.  I don't think we'll add any\nmore fio profiles in the interest of moving on to doing some sizing\nexercises with the dbt2 oltp workload.  We're just going to wrap up a\ncouple more scenarios first and get through a couple of conference\npresentations.  The two conferences in particular are the Linux\nPlumbers Conference, and the PostgreSQL Conference: West 08, which are\nboth in Portland, Oregon.\n\nRegards,\nMark", "msg_date": "Wed, 10 Sep 2008 11:09:32 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Wed, 10 Sep 2008, Scott Carey wrote:\n\n> How does that readahead tunable affect random reads or mixed random /\n> sequential situations?\n\nIt still helps as long as you don't make the parameter giant. The read \ncache in a typical hard drive noawadays is 8-32MB. If you're seeking a \nlot, you still might as well read the next 1MB or so after the block \nrequested once you've gone to the trouble of moving the disk somewhere. \nSeek-bound workloads will only waste a relatively small amount of the \ndisk's read cache that way--the slow seek rate itself keeps that from \npolluting the buffer cache too fast with those reads--while sequential \nones benefit enormously.\n\nIf you look at Mark's tests, you can see approximately where the readahead \nis filling the disk's internal buffers, because what happens then is the \nsequential read performance improvement levels off. That looks near 8MB \nfor the array he's tested, but I'd like to see a single disk to better \nfeel that out. Basically, once you know that, you back off from there as \nmuch as you can without killing sequential performance completely and that \npoint should still support a mixed workload.\n\nDisks are fairly well understood physical components, and if you think in \nthose terms you can build a gross model easily enough:\n\nAverage seek time: \t4ms\nSeeks/second:\t\t250\nData read/seek:\t\t1MB\t(read-ahead number goes here)\nTotal read bandwidth:\t250MB/s\n\nSince that's around what a typical interface can support, that's why I \nsuggest a 1MB read-ahead shouldn't hurt even seek-only workloads, and it's \npretty close to optimal for sequential as well here (big improvement from \nthe default Linux RA of 256 blocks=128K). If you know your work is biased \nheavily toward sequential scans, you might pick the 8MB read-ahead \ninstead. That value (--setra=16384 -> 8MB) has actually been the standard \n\"start here\" setting 3ware suggests on Linux for a while now: \nhttp://www.3ware.com/kb/Article.aspx?id=11050\n\n> I would be very interested in a mixed fio profile with a \"background writer\"\n> doing moderate, paced random and sequential writes combined with concurrent\n> sequential reads and random reads.\n\nTrying to make disk benchmarks really complicated is a path that leads to \na lot of wasted time. I one made this gigantic design plan for something \nthat worked like the PostgreSQL buffer management system to work as a disk \nbenchmarking tool. 
I threw it away after confirming I could do better \nwith carefully scripted pgbench tests.\n\nIf you want to benchmark something that looks like a database workload, \nbenchmark a database workload. That will always be better than guessing \nwhat such a workload acts like in a synthetic fashion. The \"seeks/second\" \nnumber bonnie++ spits out is good enough for most purposes at figuring out \nif you've detuned seeks badly.\n\n\"pgbench -S\" run against a giant database gives results that look a lot \nlike seeks/second, and if you mix multiple custom -f tests together it \nwill round-robin between them at random...\n\nIt's really helpful to measure these various disk subsystem parameters \nindividually. Knowing the sequential read/write, seeks/second, and commit \nrate for a disk setup is mainly valuable at making sure you're getting the \nfull performance expected from what you've got. Like in this example, \nwhere something was obviously off on the single disk results because reads \nwere significantly slower than writes. That's not supposed to happen, so \nyou know something basic is wrong before you even get into RAID and such. \nBeyond confirming whether or not you're getting approximately what you \nshould be out of the basic hardware, disk benchmarks are much less useful \nthan application ones.\n\nWith all that, I think I just gave away what the next conference paper \nI've been working on is about.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 10 Sep 2008 15:44:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Great info Greg,\n\nSome follow-up questions and information in-line:\n\nOn Wed, Sep 10, 2008 at 12:44 PM, Greg Smith <[email protected]> wrote:\n\n> On Wed, 10 Sep 2008, Scott Carey wrote:\n>\n> How does that readahead tunable affect random reads or mixed random /\n>> sequential situations?\n>>\n>\n> It still helps as long as you don't make the parameter giant. The read\n> cache in a typical hard drive noawadays is 8-32MB. If you're seeking a lot,\n> you still might as well read the next 1MB or so after the block requested\n> once you've gone to the trouble of moving the disk somewhere. Seek-bound\n> workloads will only waste a relatively small amount of the disk's read cache\n> that way--the slow seek rate itself keeps that from polluting the buffer\n> cache too fast with those reads--while sequential ones benefit enormously.\n>\n> If you look at Mark's tests, you can see approximately where the readahead\n> is filling the disk's internal buffers, because what happens then is the\n> sequential read performance improvement levels off. That looks near 8MB for\n> the array he's tested, but I'd like to see a single disk to better feel that\n> out. 
Basically, once you know that, you back off from there as much as you\n> can without killing sequential performance completely and that point should\n> still support a mixed workload.\n>\n> Disks are fairly well understood physical components, and if you think in\n> those terms you can build a gross model easily enough:\n>\n> Average seek time: 4ms\n> Seeks/second: 250\n> Data read/seek: 1MB (read-ahead number goes here)\n> Total read bandwidth: 250MB/s\n>\n> Since that's around what a typical interface can support, that's why I\n> suggest a 1MB read-ahead shouldn't hurt even seek-only workloads, and it's\n> pretty close to optimal for sequential as well here (big improvement from\n> the default Linux RA of 256 blocks=128K). If you know your work is biased\n> heavily toward sequential scans, you might pick the 8MB read-ahead instead.\n> That value (--setra=16384 -> 8MB) has actually been the standard \"start\n> here\" setting 3ware suggests on Linux for a while now:\n> http://www.3ware.com/kb/Article.aspx?id=11050\n>\n\nOk, so this is a drive level parameter that affects the data going into the\ndisk cache? Or does it also get pulled over the SATA/SAS link into the OS\npage cache? I've been searching around with google for the answer and can't\nseem to find it.\n\nAdditionally, I would like to know how this works with hardware RAID -- Does\nit set this value per disk? Does it set it at the array level (so that 1MB\nwith an 8 disk stripe is actually 128K per disk)? Is it RAID driver\ndependant? If it is purely the OS, then it is above raid level and affects\nthe whole array -- and is hence almost useless. If it is for the whole\narray, it would have horrendous negative impact on random I/O per second if\nthe total readahead became longer than a stripe width -- if it is a full\nstripe then each I/O, even those less than the size of a stripe, would cause\nan I/O on every drive, dropping the I/O per second to that of a single\ndrive.\nIf it is a drive level setting, then it won't affect i/o per sec by making\ni/o's span multiple drives in a RAID, which is good.\n\nAdditionally, the O/S should have a good heuristic based read-ahead process\nthat should make the drive/device level read-ahead much less important. I\ndon't know how long its going to take for Linux to do this right:\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00491.php\nhttp://kerneltrap.org/node/6642\n\n\nLets expand a bit on your model above for a single disk:\n\nA single disk, with 4ms seeks, and max disk throughput of 125MB/sec. The\ninterface can transfer 300MB/sec.\n250 seeks/sec. Some chunk of data in that seek is free, afterwords it is\nsurely not.\n512KB can be read in 4ms then. A 1MB read-ahead would result in:\n4ms seek, 8ms read. 1MB seeks/sec ~=83 seeks/sec.\nHowever, some chunk of that 1MB is \"free\" with the seek. I'm not sure how\nmuch per drive, but it is likely on the order of 8K - 64K.\n\nI suppose I'll have to experiment in order to find out. But I can't see how\na 1MB read-ahead, which should take 2x as long as seek time to read off the\nplatters, could not have significant impact on random I/O per second on\nsingle drives. 
For SATA drives the transfer rate to seek time ratio is\nsmaller, and their caches are bigger, so a larger read-ahead will impact\nthings less.\n\n\n\n\n>\n>\n> I would be very interested in a mixed fio profile with a \"background\n>> writer\"\n>> doing moderate, paced random and sequential writes combined with\n>> concurrent\n>> sequential reads and random reads.\n>>\n>\n> Trying to make disk benchmarks really complicated is a path that leads to a\n> lot of wasted time. I one made this gigantic design plan for something that\n> worked like the PostgreSQL buffer management system to work as a disk\n> benchmarking tool. I threw it away after confirming I could do better with\n> carefully scripted pgbench tests.\n>\n> If you want to benchmark something that looks like a database workload,\n> benchmark a database workload. That will always be better than guessing\n> what such a workload acts like in a synthetic fashion. The \"seeks/second\"\n> number bonnie++ spits out is good enough for most purposes at figuring out\n> if you've detuned seeks badly.\n>\n> \"pgbench -S\" run against a giant database gives results that look a lot\n> like seeks/second, and if you mix multiple custom -f tests together it will\n> round-robin between them at random...\n\n\nI suppose I should learn more about pgbench. Most of this depends on how\nmuch time it takes to do one versus the other. In my case, setting up the\nDB will take significantly longer than writing 1 or 2 more fio profiles. I\ncategorize mixed load tests as basic test -- you don't want to uncover\nconfiguration issues after the application test that running a mix of\nread/write and sequential/random could have uncovered with a simple test.\nThis is similar to increasing the concurrency. Some file systems deal with\nconcurrency much better than others.\n\n\n>\n> It's really helpful to measure these various disk subsystem parameters\n> individually. Knowing the sequential read/write, seeks/second, and commit\n> rate for a disk setup is mainly valuable at making sure you're getting the\n> full performance expected from what you've got. Like in this example, where\n> something was obviously off on the single disk results because reads were\n> significantly slower than writes. That's not supposed to happen, so you\n> know something basic is wrong before you even get into RAID and such. Beyond\n> confirming whether or not you're getting approximately what you should be\n> out of the basic hardware, disk benchmarks are much less useful than\n> application ones.\n>\n\nAbsolutely -- its critical to run the synthetic tests, and the random\nread/write and sequential read/write are critical. These should be tuned\nand understood before going on to more complicated things.\nHowever, once you actually go and set up a database test, there are tons of\nquestions -- what type of database? what type of query load? what type of\nmix? how big? In my case, the answer is, our database, our queries, and\nbig. That takes a lot of setup effort, and redoing it for each new file\nsystem will take a long time in my case -- pg_restore takes a day+.\nTherefore, I'd like to know ahead of time what file system + configuration\ncombinations are a waste of time because they don't perform under\nconcurrency with mixed workload. 
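To be concrete, the sort of mixed profile I have in mind would look something like this -- the directory, sizes and write pacing are placeholders, and the paced background writer just uses fio's rate option:

[global]
directory=/data/test
ioengine=sync
direct=0
blocksize=8k
size=8g
runtime=5m

[background-writer]
rw=randwrite
rate=10m
numjobs=1

[seq-readers]
rw=read
numjobs=4

[random-readers]
rw=randread
numjobs=4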
Thats my admiteddly greedy need for the\nextra test results.\n\n\n\n> With all that, I think I just gave away what the next conference paper I've\n> been working on is about.\n>\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\nLooking forward to it!\n\nGreat info Greg,Some follow-up questions and information in-line:On Wed, Sep 10, 2008 at 12:44 PM, Greg Smith <[email protected]> wrote:\nOn Wed, 10 Sep 2008, Scott Carey wrote:\n\n\nHow does that readahead tunable affect random reads or mixed random /\nsequential situations?\n\n\nIt still helps as long as you don't make the parameter giant.  The read cache in a typical hard drive noawadays is 8-32MB.  If you're seeking a lot, you still might as well read the next 1MB or so after the block requested once you've gone to the trouble of moving the disk somewhere. Seek-bound workloads will only waste a relatively small amount of the disk's read cache that way--the slow seek rate itself keeps that from polluting the buffer cache too fast with those reads--while sequential ones benefit enormously.\n\nIf you look at Mark's tests, you can see approximately where the readahead is filling the disk's internal buffers, because what happens then is the sequential read performance improvement levels off.  That looks near 8MB for the array he's tested, but I'd like to see a single disk to better feel that out.  Basically, once you know that, you back off from there as much as you can without killing sequential performance completely and that point should still support a mixed workload.\n\nDisks are fairly well understood physical components, and if you think in those terms you can build a gross model easily enough:\n\nAverage seek time:      4ms\nSeeks/second:           250\nData read/seek:         1MB     (read-ahead number goes here)\nTotal read bandwidth:   250MB/s\n\nSince that's around what a typical interface can support, that's why I suggest a 1MB read-ahead shouldn't hurt even seek-only workloads, and it's pretty close to optimal for sequential as well here (big improvement from the default Linux RA of 256 blocks=128K).  If you know your work is biased heavily toward sequential scans, you might pick the 8MB read-ahead instead.  That value (--setra=16384 -> 8MB) has actually been the standard \"start here\" setting 3ware suggests on Linux for a while now: http://www.3ware.com/kb/Article.aspx?id=11050\nOk, so this is a drive level parameter that affects the data going into the disk cache?  Or does it also get pulled over the SATA/SAS link into the OS page cache?  I've been searching around with google for the answer and can't seem to find it.\nAdditionally, I would like to know how this works with hardware RAID -- Does it set this value per disk?  Does it set it at the array level (so that 1MB with an 8 disk stripe is actually 128K per disk)?  Is it RAID driver dependant?  If it is purely the OS, then it is above raid level and affects the whole array -- and is hence almost useless.  
If it is for the whole array, it would have horrendous negative impact on random I/O per second if the total readahead became longer than a stripe width -- if it is a full stripe then each I/O, even those less than the size of a stripe, would cause an I/O on every drive, dropping the I/O per second to that of a single drive.\nIf it is a drive level setting, then it won't affect i/o per sec by making i/o's span multiple drives in a RAID, which is good.Additionally, the O/S should have a good heuristic based read-ahead process that should make the drive/device level read-ahead much less important.  I don't know how long its going to take for Linux to do this right:\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00491.phphttp://kerneltrap.org/node/6642\nLets expand a bit on your model above for a single disk:A single disk, with 4ms seeks, and max disk throughput of 125MB/sec.  The interface can transfer 300MB/sec.250 seeks/sec. Some chunk of data in that seek is free, afterwords it is surely not.\n512KB can be read in 4ms then.  A 1MB read-ahead would result in:4ms seek, 8ms read.   1MB seeks/sec ~=83 seeks/sec.However, some chunk of that 1MB is \"free\" with the seek.  I'm not sure how much per drive, but it is likely on the order of 8K - 64K.\nI suppose I'll have to experiment in order to find out.  But I can't see how a 1MB read-ahead, which should take 2x as long as seek time to read off the platters, could not have significant impact on random I/O per second on single drives.   For SATA drives the transfer rate to seek time ratio is smaller, and their caches are bigger, so a larger read-ahead will impact things less.\n \n\n\nI would be very interested in a mixed fio profile with a \"background writer\"\ndoing moderate, paced random and sequential writes combined with concurrent\nsequential reads and random reads.\n\n\nTrying to make disk benchmarks really complicated is a path that leads to a lot of wasted time.  I one made this gigantic design plan for something that worked like the PostgreSQL buffer management system to work as a disk benchmarking tool.  I threw it away after confirming I could do better with carefully scripted pgbench tests.\n\nIf you want to benchmark something that looks like a database workload, benchmark a database workload.  That will always be better than guessing what such a workload acts like in a synthetic fashion.  The \"seeks/second\" number bonnie++ spits out is good enough for most purposes at figuring out if you've detuned seeks badly.\n\n\"pgbench -S\" run against a giant database gives results that look a lot like seeks/second, and if you mix multiple custom -f tests together it will round-robin between them at random...I suppose I should learn more about pgbench.  Most of this depends on how much time it takes to do one versus the other.  In my case, setting up the DB will take significantly longer than writing 1 or 2 more fio profiles.  I categorize mixed load tests as basic test -- you don't want to uncover configuration issues after the application test that running a mix of read/write and sequential/random could have uncovered with a simple test.  This is similar to increasing the concurrency.  Some file systems deal with concurrency much better than others.  \n \nIt's really helpful to measure these various disk subsystem parameters individually.  Knowing the sequential read/write, seeks/second, and commit rate for a disk setup is mainly valuable at making sure you're getting the full performance expected from what you've got.  
Like in this example, where something was obviously off on the single disk results because reads were significantly slower than writes.  That's not supposed to happen, so you know something basic is wrong before you even get into RAID and such. Beyond confirming whether or not you're getting approximately what you should be out of the basic hardware, disk benchmarks are much less useful than application ones.\nAbsolutely -- its critical to run the synthetic tests, and the random read/write and sequential read/write are critical.  These should be tuned and understood before going on to more complicated things.  \nHowever, once you actually go and set up a database test, there are tons of questions -- what type of database? what type of query load?  what type of mix? how big?  In my case, the answer is, our database, our queries, and big.  That takes a lot of setup effort, and redoing it for each new file system will take a long time in my case -- pg_restore takes a day+.  Therefore, I'd like to know ahead of time what file system + configuration combinations are a waste of time because they don't perform under concurrency with mixed workload.  Thats my admiteddly greedy need for the extra test results.\n \nWith all that, I think I just gave away what the next conference paper I've been working on is about.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\nLooking forward to it!", "msg_date": "Wed, 10 Sep 2008 14:17:28 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Wed, 10 Sep 2008, Scott Carey wrote:\n\n> Ok, so this is a drive level parameter that affects the data going into the\n> disk cache? Or does it also get pulled over the SATA/SAS link into the OS\n> page cache?\n\nIt's at the disk block driver level in Linux, so I believe that's all \ngoing into the OS page cache. They've been rewriting that section a bit \nand I haven't checked it since that change (see below).\n\n> Additionally, I would like to know how this works with hardware RAID -- Does\n> it set this value per disk?\n\nHardware RAID controllers usually have their own read-ahead policies that \nmay or may not impact whether the OS-level read-ahead is helpful. Since \nMark's tests are going straight into the RAID controller, that's why it's \nhelpful here, and why many people don't ever have to adjust this \nparameter. For example, it doesn't give a dramatic gain on my Areca card \neven in JBOD mode, because that thing has its own cache to manage with its \nown agenda.\n\nOnce you start fiddling with RAID stripe sizes as well the complexity \nexplodes, and next thing you know you're busy moving the partition table \naround to make the logical sectors line up with the stripes better and \nsimilar exciting work.\n\n> Additionally, the O/S should have a good heuristic based read-ahead process\n> that should make the drive/device level read-ahead much less important. 
I\n> don't know how long its going to take for Linux to do this right:\n> http://archives.postgresql.org/pgsql-performance/2006-04/msg00491.php\n> http://kerneltrap.org/node/6642\n\nThat was committed in 2.6.23:\n\nhttp://kernelnewbies.org/Linux_2_6_23#head-102af265937262a7a21766ae58fddc1a29a5d8d7\n\nbut clearly some larger minimum hints still helps, as the system we've \nbeen staring at benchmarks has that feature.\n\n> Some chunk of data in that seek is free, afterwords it is surely not...\n\nYou can do a basic model of the drive to get a ballpark estimate on these \nthings like I threw out, but trying to break down every little bit gets \nhairy. In most estimation cases you see, where 128kB is the amount being \nread, the actual read time is so small compared to the rest of the numbers \nthat it just gets ignored.\n\nI was actually being optimistic about how much cache can get filled by \nseeks. If the disk is spinning at 15000RPM, that's 4ms to do a full \nrotation. That means that on average you'll also wait 2ms just to get the \nheads lined up to read that one sector on top of the 4ms seek to get in \nthe area; now we're at 6ms before you've read anything, topping seeks out \nat under 167/second. That number--average seek time plus half a \nrotation--is what a lot of people call the IOPS for the drive. There, \ntypically the time spent actually reading data once you've gone through \nall that doesn't factor in. IOPS is not very well defined, some people \n*do* include the reading time once you're there; one reason I don't like \nto use it. There's a nice chart showing some typical computations here at \nhttp://www.dbasupport.com/oracle/ora10g/disk_IO_02.shtml if anybody wants \nto see how this works for other classes of disk. The other reason I don't \nlike focusing too much on IOPS (some people act like it's the only \nmeasurement that matters) is that it tells you nothing about the \nsequential read rate, and you have to consider both at once to get a clear \npicture--particularly when there are adjustments that impact those two \noppositely, like read-ahead.\n\nAs far as the internal transfer speed of the heads to the drive's cache \nonce it's lined up, those are creeping up toward the 200MB/s range for the \nkind of faster drives the rest of these stats come from. So the default \nof 128kB is going to take 0.6ms, while a full 1MB might take 5ms. You're \nabsolutely right to question how hard that will degrade seek performance; \nthese slightly more accurate numbers suggest that might be as bad as going \nfrom 6.6ms to 11ms per seek, or from 150 IOPS to 91 IOPS. It also points \nout how outrageously large the really big read-ahead numbers are once \nyou're seeking instead of sequentially reading.\n\nOne thing it's hard to know is how much read-ahead the drive was going to \ndo on its own, no matter what you told it, anyway as part of its caching \nalgorithm.\n\n> I suppose I should learn more about pgbench.\n\nMost people use it as just a simple benchmark that includes a mixed \nread/update/insert workload. But that's internally done using a little \ncommand substition \"language\" that let's you easily write things like \n\"generate a random number between 1 and 1M, read the record from this \ntable, and then update this associated record\" that scale based on how big \nthe data set you've given it is. You an write your own scripts in that \nform too. 
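A bare-bones seek-style script is just a couple of lines -- this is from memory for the 8.3-era pgbench, so the file name is made up and the upper bound needs to be 100000 times whatever scale factor you initialized with:

  \setrandom aid 1 1000000
  SELECT abalance FROM accounts WHERE aid = :aid;

saved as something like seek.sql and run with pgbench -n -c 8 -t 10000 -f seek.sql yourdb.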
And if you specify several scripts like that at a time, it will \nswitch between them at random, and you can analyze the average execution \ntime broken down per type if you save the latency logs. Makes it real easy \nto adjust the number of clients and the mix of things you have them do.\n\nThe main problem: it doesn't scale to large numbers of clients very well. \nBut it can easily simulate 50-100 banging away at a time which is usually \nenough to rank filesystem concurrency capabilities, for example. It's \ncertainly way easier to throw together a benchmark using it that is \nsimilar to an abstract application than it is to try and model multi-user \ndatabase I/O using fio.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 11 Sep 2008 00:12:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Greg Smith wrote:\n> Average seek time: 4ms\n> Seeks/second: 250\n> Data read/seek: 1MB (read-ahead number goes here)\n> Total read bandwidth: 250MB/s\n>\nMost spinning disks now are nearer to 100MB/s streaming. You've talked \nyourself into twice that, random access!\n\nJames\n\n", "msg_date": "Thu, 11 Sep 2008 06:21:17 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Wed, Sep 10, 2008 at 11:21 PM, James Mansion\n<[email protected]> wrote:\n> Greg Smith wrote:\n>>\n>> Average seek time: 4ms\n>> Seeks/second: 250\n>> Data read/seek: 1MB (read-ahead number goes here)\n>> Total read bandwidth: 250MB/s\n>>\n> Most spinning disks now are nearer to 100MB/s streaming. You've talked\n> yourself into twice that, random access!\n\nThe fastest cheetahs on this page hit 171MB/second:\n\nhttp://www.seagate.com/www/en-us/products/servers/cheetah/\n\nAre there any drives that have a faster sequential transfer rate out there?\n\nChecked out hitachi's global storage site and they're fastest drive\nseems just a tad slower.\n", "msg_date": "Wed, 10 Sep 2008 23:37:09 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, 11 Sep 2008, James Mansion wrote:\n\n> Most spinning disks now are nearer to 100MB/s streaming. You've talked \n> yourself into twice that, random access!\n\nThe point I was trying to make there is that even under impossibly optimal \ncircumstances, you'd be hard pressed to blow out the disk's read cache \nwith seek-dominated data even if you read a lot at each seek point. That \nidea didn't make it from my head into writing very well though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 11 Sep 2008 01:56:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Hmm, I would expect this tunable to potentially be rather file system\ndependent, and potentially raid controller dependant. The test was using\next2, perhaps the others automatically prefetch or read ahead? Does it\nvary by RAID controller?\n\nWell I went and found out, using ext3 and xfs. 
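(Each data point below came from a cycle roughly like the following -- the device name and fio job file are placeholders, and this wrapper is a sketch of the procedure described below rather than the exact script used:)

for ra in 256 1024 8192 16384 49152; do
    sync; echo 3 > /proc/sys/vm/drop_caches    # flush the page cache between runs
    blockdev --setra $ra /dev/sdb              # readahead value is in 512-byte sectors
    blockdev --getra /dev/sdb                  # confirm the setting took
    fio seq-read8.fio                          # note the aggregate read MB/sec for this setting
done
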
I have about 120+ data\npoints but here are a few interesting ones before I compile the rest and\nanswer a few other questions of my own.\n\n1: readahead does not affect \"pure\" random I/O -- there seems to be a\nheuristic trigger -- a single process or file probably has to request a\nsequence of linear I/O of some size to trigger it. I set it to over 64MB of\nread-ahead and random iops remained the same to prove this.\n2: File system matters more than you would expect. XFS sequential\ntransfers when readahead was tuned had TWICE the sequential throughput of\next3, both for a single reader and 8 concurrent readers on 8 different\nfiles.\n3: The RAID controller and its configuration make a pretty significant\ndifference as well.\n\nHardware:\n12 7200RPM SATA (Seagate) in raid 10 on 3Ware 9650 (only ext3)\n12 7200RPM SATA ('nearline SAS' : Seagate ES.2) on PERC 6 in raid 10 (ext3,\nxfs)\nI also have some results with PERC raid 10 with 4x 15K SAS, not reporting in\nthis message though\n\n\nTesting process:\nAll tests begin with\n#sync; echo 3 > /proc/sys/vm/drop_caches;\nfollowed by\n#blockdev --setra XXX /dev/sdb\nEven though FIO claims that it issues reads that don't go to cache, the\nread-ahead DOES go to the file system cache, and so one must drop them to\nget consistent results unless you disable the read-ahead. Even if you are\nreading more than 2x the physical RAM, that first half of the test is\ndistorted. By flushing the cache first my results became consistent within\nabout +-2%.\n\nTests\n-- fio, read 8 files concurrently, sequential read profile, one process per\nfile:\n[seq-read8]\nrw=read\n; this will be total of all individual files per process\nsize=8g\ndirectory=/data/test\nfadvise_hint=0\nblocksize=8k\ndirect=0\nioengine=sync\niodepth=1\nnumjobs=8\n; this is number of files total per process\nnrfiles=1\nruntime=1m\n\n-- fio, read one large file sequentially with one process\n[seq-read]\nrw=read\n; this will be total of all individual files per process\nsize=64g\ndirectory=/data/test\nfadvise_hint=0\nblocksize=8k\ndirect=0\nioengine=sync\niodepth=1\nnumjobs=1\n; this is number of files total per process\nnrfiles=1\nruntime=1m\n\n-- 'dd' in a few ways:\nMeasure direct to partition / disk read rate at the start of the disk:\n'dd if=/dev/sdb of=/dev/null ibs=24M obs=64K'\nMeasure direct to partition / disk read rate near the end of the disk:\n'dd if=/dev/sdb1 of=/dev/null ibs=24M obs=64K skip=160K'\nMeasure direct read of the large file used by the FIO one sequential file\ntest:\n'dd if=/data/test/seq-read.1.0 of=/dev/null ibs=32K obs=32K'\n\nthe dd paramters for block sizes were chosen with much experimentation to\nget the best result.\n\n\nResults:\nI've got a lot of results, I'm only going to put a few of them here for now\nwhile I investigate a few other things (see the end of this message)\nPreliminary summary:\n\nPERC 6, ext3, full partition.\ndd beginning of disk : 642MB/sec\ndd end of disk: 432MB/sec\ndd large file (readahead 49152): 312MB/sec\n-- maximum expected sequential capabilities above?\n\nfio: 8 concurrent readers and 1 concurrent reader results\nreadahead is in 512 byte blocks, sequential transfer rate in MiB/sec as\nreported by fio.\n\nreadahead | 8 conc read rate | 1 conc read rate\n49152 | 311 | 314\n16384 | 312 | 312\n12288 | 304 | 309\n 8192 | 292 |\n 4096 | 264 |\n 2048 | 211 |\n 1024 | 162 | 302\n 512 | 108 |\n 256 | 81 | 300\n 8 | 38 |\n\nConclusion, on this array going up to 12288 (6MB) readahead makes a huge\nimpact on concurrent sequential reads. 
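(To convert the readahead column, which blockdev counts in 512-byte sectors, into bytes -- a quick shell check, assuming the 6-stripe RAID 10 layout used here:)

echo $(( 12288 * 512 / 1048576 ))       # 6  -> total readahead in MB
echo $(( 12288 * 512 / 6 / 1048576 ))   # 1  -> MB per mirrored pair, i.e. per raid slice
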
That is 1MB per raid slice (6, 12\ndisks raid 10). Sequential read performance under concurrent. It has\nalmost no impact at all on one sequential read alone, the OS or the RAID\ncontroller are dealing with that case just fine.\n\nBut, how much of the above effect is ext3? How much is it the RAID card?\nAt the top end, the sequential rate for both concurrent and single\nsequential access is in line with what dd can get going through ext3. But\nit is not even close to what you can get going right to the device and\nbypassing the file system.\n\nLets try a different RAID card first. The disks aren't exactly the same,\nand there is no guarantee that the file is positioned near the beginning or\nend, but I've got another 12 disk RAID 10, using a 3Ware 9650 card.\n\nResults, as above -- don't conclude this card is faster, the files may have\njust been closer to the front of the partition.\ndd, beginning of disk: 522MB/sec\ndd, end of disk array: 412MB/sec\ndd, file read via file system (readahead 49152): 391MB/sec\n\nreadahead | 8 conc read rate | 1 conc read rate\n49152 | 343 | 392\n16384 | 349 | 379\n12288 | 348 | 387\n 8192 | 344 |\n 6144 | | 376\n 4096 | 340 |\n 2048 | 319 |\n 1024 | 284 | 371\n 512 | 239 | 376\n 256 | 204 | 377\n 128 | 169 | 386\n 8 | 47 | 382\n\nConclusion, this RAID controller definitely behaves differently: It is much\nless sensitive to the readahead. Perhaps it has a larger stripe size? Most\nlikely, this one is set up with a 256K stripe, the other one I do not know,\nthough the PERC 6 default is 64K which may be likely.\n\n\nOk, so the next question is how file systems play into this.\nFirst, I ran a bunch of tests with xfs, and the results were rather odd.\nThat is when I realized that the platter speeds at the start and end of the\narrays is significantly different, and xfs and ext3 will both make different\ndecisions on where to put the files on an empty partition (xfs will spread\nthem evenly, ext3 more close together but still somewhat random on the\nactual position).\n\nso, i created a partition that was roughly 10% the size of the whole thing,\nat the beginning of the array.\n\nUsing the PERC 6 setup, this leads to:\ndd, against partition: 660MB/sec max result, 450MB/sec min -- not a reliable\ntest for some reason\ndd, against file on the partition (ext3): 359MB/sec\n\next3 (default settings):\nreadahead | 8 conc read rate | 1 conc read rate\n49152 | 363 |\n12288 | 359 |\n 6144 | 319 |\n 1024 | 176 |\n 256 | |\n\nAnalysis: I only have 8 concurrent read results here, as these are the most\ninteresting based on the results from the whole disk tests above. I also\ndid not collect a lot of data points.\nWhat is clear, is that the partition at the front does make a difference,\ncompared to the whole partition results we have about 15% more throughput on\nthe 8 concurrent read test, meaning that ext3 probably put the files in the\nwhole disk case near the middle of the drive geometry.\nThe 8 concurrent read test has the same \"break point\" at about 6MB read\nahead buffer, which is also consistent.\n\nAnd now, for XFS, a full result set and VERY surprising results. 
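(For what it's worth, the xfs filesystems below were created with mkfs defaults; aligning the filesystem to the array geometry was not done for these runs. A stripe-aligned variant would look roughly like this, where the su/sw values are only illustrative for a 64K-stripe, 12-disk RAID 10 and the mount point is a placeholder:)

mkfs.xfs -d su=64k,sw=6 /dev/sdb1     # su = RAID stripe unit, sw = number of data-bearing stripes
mount -o noatime /dev/sdb1 /data/test
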
I dare\nsay, the benchmarks that led me to do these tests are not complete without\nXFS tests:\n\nxfs (default settings):\nreadahead | 8 conc read rate | 1 conc read rate\n98304 | 651 | 640\n65536 | 636 | 609\n49152 | 621 | 595\n32768 | 602 | 565\n24576 | 595 | 548\n16384 | 560 | 518\n12288 | 505 | 480\n 8192 | 437 | 394\n 6144 | 412 | 415 *\n 4096 | 357 | 281 *\n 3072 | 329 | 338\n 2048 | 259 | 383\n 1536 | 230 | 445\n 1280 | 207 | 542\n 1024 | 182 | 605 *\n 896 | 167 | 523\n 768 | 148 | 456\n 512 | 119 | 354\n 256 | 88 | 303\n 64 | 60 | 171\n 8 | 36 | 55\n\n* these local max and mins for the sequential transfer were tested several\ntimes to validate. They may have something to do with me not tuning the\ninode layout for an array using the xfs stripe unit and stripe width\nparameters.\n\ndd, on the file used in the single reader sequential read test:\n660MB/sec. One other result for the sequential transfer, using a gigantic\n393216 (192MB) readahead:\n672 MB/sec.\n\nAnalysis:\nXFS gets significantly higher sequential (read) transfer rates than ext3.\nIt had higher write results but I've only done one of those.\nBoth ext3 and xfs can be tuned a bit more, mainly with noatime and some\nparameters so they know about the geometry of the raid array.\n\n\nOther misc results:\n I used the deadline scheduler, it didn't impact the results here.\n I ran some tests to \"feel out\" the sequential transfer rate sensitivity to\nreadahead for a 4x 15K RPM SAS raid setup -- it is less sensitive:\n ext3, 8 concurrent reads -- readahead = 256, 195MB/sec; readahead = 3072,\n200MB/sec; readahead = 32768, 210MB/sec; readahead =64, 120MB/sec\nOn the 3ware setup, with ext3, postgres was installed and a select count(1)\nfrom table reported between 300 and 320 MB/sec against tables larger than\n5GB, and disk utilization was about 88%. dd can get 390 with the settings\nused (readahead 12288).\nSetting the readahead back to the default, postgres gets about 220MB/sec at\n100% disk util on similar tables. I will be testing out xfs on this same\ndata eventually, and expect it to provide significant gains there.\n\nRemaining questions:\nReadahead does NOT activate for pure random requests, which is a good\nthing. The question is, when does it activate? I'll have to write some\ncustom fio tests to find out. I suspect that when the OS detects either: X\nnumber of sequential requests on the same file (or from the same process),\nit activates. OR after sequential acces of at least Y bytes. I'll report\nresults once I know, to construct some worst case scenarios of using a large\nreadahead.\nI will also measure its affect when mixed random access and streaming reads\noccur.\n\n\nOn Wed, Sep 10, 2008 at 7:49 AM, Greg Smith <[email protected]> wrote:\n\n> On Tue, 9 Sep 2008, Mark Wong wrote:\n>\n> I've started to display the effects of changing the Linux block device\n>> readahead buffer to the sequential read performance using fio.\n>>\n>\n> Ah ha, told you that was your missing tunable. I'd really like to see the\n> whole table of one disk numbers re-run when you get a chance. The reversed\n> ratio there on ext2 (59MB read/92MB write) was what tipped me off that\n> something wasn't quite right initially, and until that's fixed it's hard to\n> analyze the rest.\n>\n> Based on your initial data, I'd say that the two useful read-ahead settings\n> for this system are 1024KB (conservative but a big improvement) and 8192KB\n> (point of diminishing returns). 
The one-disk table you've got (labeled with\n> what the default read-ahead is) and new tables at those two values would\n> really flesh out what each disk is capable of.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 11 Sep 2008 12:07:25 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Greg Smith wrote:\n> The point I was trying to make there is that even under impossibly \n> optimal circumstances, you'd be hard pressed to blow out the disk's \n> read cache with seek-dominated data even if you read a lot at each \n> seek point. 
That idea didn't make it from my head into writing very \n> well though.\n>\nIsn't there a bigger danger in blowing out the cache on the controller \nand causing premature pageout of its dirty pages?\n\nIf you could get the readahead to work on the drive and not return data \nto the controller, that might be dandy, but I'm sceptical.\n\nJames\n\n", "msg_date": "Thu, 11 Sep 2008 20:54:59 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Drives have their own read-ahead in the firmware. Many can keep track of 2\nor 4 concurrent file accesses. A few can keep track of more. This also\nplays in with the NCQ or SCSI command queuing implementation.\n\nConsumer drives will often read-ahead much more than server drives optimized\nfor i/o per second.\nThe difference in read-ahead sensitivity between the two setups I tested may\nbe due to one setup using nearline-SAS (SATA, tuned for io-per sec using SAS\nfirmware) and the other used consumer SATA.\nFor example, here is one \"nearline SAS\" style server tuned drive versus a\nconsumer tuned one:\nhttp://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=354&devID_1=348&devCnt=2\n\nThe Linux readahead setting is _definitely_ in the kernel, definitely uses\nand fills the page cache, and from what I can gather, simply issues extra\nI/O's to the hardware beyond the last one requested by an app in certain\nsituations. It does not make your I/O request larger, it just queues an\nextra I/O following your request.\n\nOn Thu, Sep 11, 2008 at 12:54 PM, James Mansion <\[email protected]> wrote:\n\n> Greg Smith wrote:\n>\n>> The point I was trying to make there is that even under impossibly optimal\n>> circumstances, you'd be hard pressed to blow out the disk's read cache with\n>> seek-dominated data even if you read a lot at each seek point. That idea\n>> didn't make it from my head into writing very well though.\n>>\n>> Isn't there a bigger danger in blowing out the cache on the controller\n> and causing premature pageout of its dirty pages?\n>\n> If you could get the readahead to work on the drive and not return data to\n> the controller, that might be dandy, but I'm sceptical.\n>\n> James\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nDrives have their own read-ahead in the firmware.  Many can keep track of 2 or 4 concurrent file accesses.  A few can keep track of more.  This also plays in with the NCQ or SCSI command queuing implementation.\nConsumer drives will often read-ahead much more than server drives optimized for i/o per second.The difference in read-ahead sensitivity between the two setups I tested may be due to one setup using nearline-SAS (SATA, tuned for io-per sec using SAS firmware) and the other used consumer SATA.  \nFor example, here is one \"nearline SAS\" style server tuned drive versus a consumer tuned one:http://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=354&devID_1=348&devCnt=2\nThe Linux readahead setting is _definitely_ in the kernel, definitely uses and fills the page cache, and from what I can gather, simply issues extra I/O's to the hardware beyond the last one requested by an app in certain situations.  
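(The same kernel setting is exposed in two equivalent units -- blockdev reports 512-byte sectors, sysfs reports kilobytes; a quick cross-check, assuming /dev/sdb:)

blockdev --getra /dev/sdb                  # e.g. 256 sectors
cat /sys/block/sdb/queue/read_ahead_kb     # the same default shown as 128 KB
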
It does not make your I/O request larger, it just queues an extra I/O following your request.\nOn Thu, Sep 11, 2008 at 12:54 PM, James Mansion <[email protected]> wrote:\nGreg Smith wrote:\n\nThe point I was trying to make there is that even under impossibly optimal circumstances, you'd be hard pressed to blow out the disk's read cache with seek-dominated data even if you read a lot at each seek point.  That idea didn't make it from my head into writing very well though.\n\n\nIsn't there a bigger danger in blowing out the cache on the controller and causing premature pageout of its dirty pages?\n\nIf you could get the readahead to work on the drive and not return data to the controller, that might be dandy, but I'm sceptical.\n\nJames\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 11 Sep 2008 13:36:17 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Sorry, I forgot to mention the Linux kernel version I'm using, etc:\n\n2.6.18-92.1.10.el5 #1 SMP x86_64\nCentOS 5.2.\n\nThe \"adaptive\" read-ahead, as well as other enhancements in the kernel, are\ntaking place or coming soon in the most recent stuff. Some distributions\noffer the adaptive read-ahead as an add-on (Debian, for example). This is\nan area where much can be improved in Linux http://kerneltrap.org/node/6642\nhttp://kernelnewbies.org/Linux_2_6_23#head-102af265937262a7a21766ae58fddc1a29a5d8d7\n\nI obviously did not test how the new read-ahead stuff impacts these sorts of\ntests.\n\nOn Thu, Sep 11, 2008 at 12:07 PM, Scott Carey <[email protected]>wrote:\n\n> Hmm, I would expect this tunable to potentially be rather file system\n> dependent, and potentially raid controller dependant. The test was using\n> ext2, perhaps the others automatically prefetch or read ahead? Does it\n> vary by RAID controller?\n>\n> Well I went and found out, using ext3 and xfs. I have about 120+ data\n> points but here are a few interesting ones before I compile the rest and\n> answer a few other questions of my own.\n>\n> 1: readahead does not affect \"pure\" random I/O -- there seems to be a\n> heuristic trigger -- a single process or file probably has to request a\n> sequence of linear I/O of some size to trigger it. I set it to over 64MB of\n> read-ahead and random iops remained the same to prove this.\n> 2: File system matters more than you would expect. XFS sequential\n> transfers when readahead was tuned had TWICE the sequential throughput of\n> ext3, both for a single reader and 8 concurrent readers on 8 different\n> files.\n> 3: The RAID controller and its configuration make a pretty significant\n> difference as well.\n>\n> Hardware:\n> 12 7200RPM SATA (Seagate) in raid 10 on 3Ware 9650 (only ext3)\n> 12 7200RPM SATA ('nearline SAS' : Seagate ES.2) on PERC 6 in raid 10 (ext3,\n> xfs)\n> I also have some results with PERC raid 10 with 4x 15K SAS, not reporting\n> in this message though\n>\n> . . . {snip}\n\nSorry, I forgot to mention the Linux kernel version I'm using, etc:2.6.18-92.1.10.el5 #1 SMP x86_64CentOS 5.2.The \"adaptive\" read-ahead, as well as other enhancements in the kernel, are taking place or coming soon in the most recent stuff.  Some distributions offer the adaptive read-ahead as an add-on (Debian, for example).  
This is an area where much can be improved in Linux http://kerneltrap.org/node/6642\nhttp://kernelnewbies.org/Linux_2_6_23#head-102af265937262a7a21766ae58fddc1a29a5d8d7I obviously did not test how the new read-ahead stuff impacts these sorts of tests.\nOn Thu, Sep 11, 2008 at 12:07 PM, Scott Carey <[email protected]> wrote:\nHmm, I would expect this tunable to potentially be rather file system dependent, and potentially raid controller dependant.  The test was using ext2, perhaps the others automatically prefetch or read ahead?   Does it vary by RAID controller?\nWell I went and found out, using ext3 and xfs.  I have about 120+ data points but here are a few interesting ones before I compile the rest and answer a few other questions of my own.1:  readahead does not affect \"pure\" random I/O -- there seems to be a heuristic trigger -- a single process or file probably has to request a sequence of linear I/O of some size to trigger it.  I set it to over 64MB of read-ahead and random iops remained the same to prove this.\n\n\n2:  File system matters more than you would expect.  XFS sequential transfers when readahead was tuned had TWICE the sequential throughput of ext3, both for a single reader and 8 concurrent readers on 8 different files.\n\n3:  The RAID controller and its configuration make a pretty significant difference as well.Hardware:12 7200RPM SATA (Seagate) in raid 10 on 3Ware 9650 (only ext3)12 7200RPM SATA ('nearline SAS' : Seagate ES.2) on PERC 6 in raid 10 (ext3, xfs)\n\nI also have some results with PERC raid 10 with 4x 15K SAS, not reporting in this message though  . . . {snip}", "msg_date": "Thu, 11 Sep 2008 13:44:40 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, 11 Sep 2008, Scott Carey wrote:\n\n> Drives have their own read-ahead in the firmware. Many can keep track of 2\n> or 4 concurrent file accesses. A few can keep track of more. This also\n> plays in with the NCQ or SCSI command queuing implementation.\n>\n> Consumer drives will often read-ahead much more than server drives optimized\n> for i/o per second.\n> The difference in read-ahead sensitivity between the two setups I tested may\n> be due to one setup using nearline-SAS (SATA, tuned for io-per sec using SAS\n> firmware) and the other used consumer SATA.\n> For example, here is one \"nearline SAS\" style server tuned drive versus a\n> consumer tuned one:\n> http://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=354&devID_1=348&devCnt=2\n>\n> The Linux readahead setting is _definitely_ in the kernel, definitely uses\n> and fills the page cache, and from what I can gather, simply issues extra\n> I/O's to the hardware beyond the last one requested by an app in certain\n> situations. 
It does not make your I/O request larger, it just queues an\n> extra I/O following your request.\n\nthat extra I/O will be merged with your request by the I/O scheduler code \nso that by the time it gets to the drive it will be a single request.\n\nby even if it didn't, most modern drives read the entire cylinder into \ntheir buffer so any additional requests to the drive will be satisfied \nfrom this buffer and not have to wait for the disk itself.\n\nDavid Lang\n\n> On Thu, Sep 11, 2008 at 12:54 PM, James Mansion <\n> [email protected]> wrote:\n>\n>> Greg Smith wrote:\n>>\n>>> The point I was trying to make there is that even under impossibly optimal\n>>> circumstances, you'd be hard pressed to blow out the disk's read cache with\n>>> seek-dominated data even if you read a lot at each seek point. That idea\n>>> didn't make it from my head into writing very well though.\n>>>\n>>> Isn't there a bigger danger in blowing out the cache on the controller\n>> and causing premature pageout of its dirty pages?\n>>\n>> If you could get the readahead to work on the drive and not return data to\n>> the controller, that might be dandy, but I'm sceptical.\n>>\n>> James\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n", "msg_date": "Thu, 11 Sep 2008 14:36:32 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, Sep 11, 2008 at 3:36 PM, <[email protected]> wrote:\n> On Thu, 11 Sep 2008, Scott Carey wrote:\n>\n>> Drives have their own read-ahead in the firmware. Many can keep track of\n>> 2\n>> or 4 concurrent file accesses. A few can keep track of more. This also\n>> plays in with the NCQ or SCSI command queuing implementation.\n>>\n>> Consumer drives will often read-ahead much more than server drives\n>> optimized\n>> for i/o per second.\n>> The difference in read-ahead sensitivity between the two setups I tested\n>> may\n>> be due to one setup using nearline-SAS (SATA, tuned for io-per sec using\n>> SAS\n>> firmware) and the other used consumer SATA.\n>> For example, here is one \"nearline SAS\" style server tuned drive versus a\n>> consumer tuned one:\n>>\n>> http://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=354&devID_1=348&devCnt=2\n>>\n>> The Linux readahead setting is _definitely_ in the kernel, definitely uses\n>> and fills the page cache, and from what I can gather, simply issues extra\n>> I/O's to the hardware beyond the last one requested by an app in certain\n>> situations. 
It does not make your I/O request larger, it just queues an\n>> extra I/O following your request.\n>\n> that extra I/O will be merged with your request by the I/O scheduler code so\n> that by the time it gets to the drive it will be a single request.\n>\n> by even if it didn't, most modern drives read the entire cylinder into their\n> buffer so any additional requests to the drive will be satisfied from this\n> buffer and not have to wait for the disk itself.\n\nGenerally speaking I agree, but I would still make a separate logical\npartition for pg_xlog so that if the OS fills up the /var/log dir or\nsomething, it doesn't impact the db.\n", "msg_date": "Thu, 11 Sep 2008 15:40:15 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, 11 Sep 2008, Scott Marlowe wrote:\n\n> On Thu, Sep 11, 2008 at 3:36 PM, <[email protected]> wrote:\n>> by even if it didn't, most modern drives read the entire cylinder into their\n>> buffer so any additional requests to the drive will be satisfied from this\n>> buffer and not have to wait for the disk itself.\n>\n> Generally speaking I agree, but I would still make a separate logical\n> partition for pg_xlog so that if the OS fills up the /var/log dir or\n> something, it doesn't impact the db.\n\nthis is a completely different discussion :-)\n\nwhile I agree with you in theory, in practice I've seen multiple \npartitions cause far more problems than they have prevented (due to the \npartitions ending up not being large enough and having to be resized after \nthey fill up, etc) so I tend to go in the direction of a few large \npartitions.\n\nthe only reason I do multiple partitions (besides when the hardware or \nperformance considerations require it) is when I can identify that there \nis some data that I would not want to touch on a OS upgrade. I try to make \nit so that an OS upgrade can wipe the OS partitions if nessasary.\n\nDavid Lang\n\n", "msg_date": "Thu, 11 Sep 2008 15:33:53 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thursday 11 September 2008, [email protected] wrote:\n> while I agree with you in theory, in practice I've seen multiple\n> partitions cause far more problems than they have prevented (due to the\n> partitions ending up not being large enough and having to be resized\n> after they fill up, etc) so I tend to go in the direction of a few large\n> partitions.\n\nI used to feel this way until LVM became usable. LVM plus online resizable \nfilesystems really makes multiple partitions manageable.\n\n\n-- \nAlan\n", "msg_date": "Thu, 11 Sep 2008 15:41:55 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, 11 Sep 2008, Alan Hodgson wrote:\n\n> On Thursday 11 September 2008, [email protected] wrote:\n>> while I agree with you in theory, in practice I've seen multiple\n>> partitions cause far more problems than they have prevented (due to the\n>> partitions ending up not being large enough and having to be resized\n>> after they fill up, etc) so I tend to go in the direction of a few large\n>> partitions.\n>\n> I used to feel this way until LVM became usable. 
LVM plus online resizable\n> filesystems really makes multiple partitions manageable.\n\nwon't the fragmentation of your filesystem across the different LVM \nsegments hurt you?\n\nDavid Lang\n", "msg_date": "Thu, 11 Sep 2008 16:07:44 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "I also thought that LVM is unsafe for WAL logs and file system journals with\ndisk write cache -- it doesn't flush the disk write caches correctly and\nbuild write barriers.\n\nAs pointed out here:\nhttp://groups.google.com/group/pgsql.performance/browse_thread/thread/9dc43991c1887129\nby Greg Smith\nhttp://lwn.net/Articles/283161/\n\n\n\nOn Thu, Sep 11, 2008 at 3:41 PM, Alan Hodgson <[email protected]> wrote:\n\n> On Thursday 11 September 2008, [email protected] wrote:\n> > while I agree with you in theory, in practice I've seen multiple\n> > partitions cause far more problems than they have prevented (due to the\n> > partitions ending up not being large enough and having to be resized\n> > after they fill up, etc) so I tend to go in the direction of a few large\n> > partitions.\n>\n> I used to feel this way until LVM became usable. LVM plus online resizable\n> filesystems really makes multiple partitions manageable.\n>\n>\n> --\n> Alan\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI also thought that LVM is unsafe for WAL logs and file system journals with disk write cache -- it doesn't flush the disk write caches correctly and build write barriers. As pointed out here:\nhttp://groups.google.com/group/pgsql.performance/browse_thread/thread/9dc43991c1887129by Greg Smithhttp://lwn.net/Articles/283161/\nOn Thu, Sep 11, 2008 at 3:41 PM, Alan Hodgson <[email protected]> wrote:\nOn Thursday 11 September 2008, [email protected] wrote:\n> while I agree with you in theory, in practice I've seen multiple\n> partitions cause far more problems than they have prevented (due to the\n> partitions ending up not being large enough and having to be resized\n> after they fill up, etc) so I tend to go in the direction of a few large\n> partitions.\n\nI used to feel this way until LVM became usable. 
LVM plus online resizable\nfilesystems really makes multiple partitions manageable.\n\n\n--\nAlan\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 11 Sep 2008 16:20:19 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, Sep 11, 2008 at 4:33 PM, <[email protected]> wrote:\n> On Thu, 11 Sep 2008, Scott Marlowe wrote:\n>\n>> On Thu, Sep 11, 2008 at 3:36 PM, <[email protected]> wrote:\n>>>\n>>> by even if it didn't, most modern drives read the entire cylinder into\n>>> their\n>>> buffer so any additional requests to the drive will be satisfied from\n>>> this\n>>> buffer and not have to wait for the disk itself.\n>>\n>> Generally speaking I agree, but I would still make a separate logical\n>> partition for pg_xlog so that if the OS fills up the /var/log dir or\n>> something, it doesn't impact the db.\n>\n> this is a completely different discussion :-)\n>\n> while I agree with you in theory, in practice I've seen multiple partitions\n> cause far more problems than they have prevented (due to the partitions\n> ending up not being large enough and having to be resized after they fill\n> up, etc) so I tend to go in the direction of a few large partitions.\n\nI've never had that problem. I've always made the big enough. I\ncan't imagine building a server where /var/log shared space with my\ndb. It's not like every root level dir gets its own partition, but\nseriously, logs should never go anywhere that another application is\nwriting to.\n\n> the only reason I do multiple partitions (besides when the hardware or\n> performance considerations require it) is when I can identify that there is\n> some data that I would not want to touch on a OS upgrade. I try to make it\n> so that an OS upgrade can wipe the OS partitions if nessasary.\n\nit's quite handy to have /home on a separate partition I agree. But\non most servers /home should be empty. A few others like /opt or\n/usr/local I tend to make a separate one for the reasons you mention\nas well.\n", "msg_date": "Thu, 11 Sep 2008 20:30:38 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, 11 Sep 2008, Alan Hodgson wrote:\n\n> LVM plus online resizable filesystems really makes multiple partitions \n> manageable.\n\nI've seen so many reports blaming Linux's LVM for performance issues that \nits managability benefits don't seem too compelling.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 12 Sep 2008 00:09:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Scott Carey wrote:\n> Consumer drives will often read-ahead much more than server drives \n> optimized for i/o per second.\n...\n> The Linux readahead setting is _definitely_ in the kernel, definitely \n> uses and fills the page cache, and from what I can gather, simply \n> issues extra I/O's to the hardware beyond the last one requested by an \n> app in certain situations. It does not make your I/O request larger, \n> it just queues an extra I/O following your request.\nSo ... 
fiddling with settings in Linux is going to force read-ahead, but \nthe read-ahead data will hit the controller cache and the system buffers.\n\nAnd the drives use their caches for cyclinder caching implicitly (maybe \nthe SATA drives appear to preread more because the storage density per \ncylinder is higher?)..\n\nBut is there any way for an OS or application to (portably) ask SATA, \nSAS or SCSI drives to read ahead more (or less) than their default and \nNOT return the data to the controller?\n\nI've never heard of such a thing, but I'm no expert in the command sets \nfor any of this stuff.\n\nJames\n\n>\n> On Thu, Sep 11, 2008 at 12:54 PM, James Mansion \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> Greg Smith wrote:\n>\n> The point I was trying to make there is that even under\n> impossibly optimal circumstances, you'd be hard pressed to\n> blow out the disk's read cache with seek-dominated data even\n> if you read a lot at each seek point. That idea didn't make\n> it from my head into writing very well though.\n>\n> Isn't there a bigger danger in blowing out the cache on the\n> controller and causing premature pageout of its dirty pages?\n>\n> If you could get the readahead to work on the drive and not return\n> data to the controller, that might be dandy, but I'm sceptical.\n>\n> James\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n", "msg_date": "Fri, 12 Sep 2008 10:05:49 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Fri, 12 Sep 2008, James Mansion wrote:\n\n> Scott Carey wrote:\n>> Consumer drives will often read-ahead much more than server drives \n>> optimized for i/o per second.\n> ...\n>> The Linux readahead setting is _definitely_ in the kernel, definitely uses \n>> and fills the page cache, and from what I can gather, simply issues extra \n>> I/O's to the hardware beyond the last one requested by an app in certain \n>> situations. It does not make your I/O request larger, it just queues an \n>> extra I/O following your request.\n> So ... fiddling with settings in Linux is going to force read-ahead, but the \n> read-ahead data will hit the controller cache and the system buffers.\n>\n> And the drives use their caches for cyclinder caching implicitly (maybe the \n> SATA drives appear to preread more because the storage density per cylinder \n> is higher?)..\n>\n> But is there any way for an OS or application to (portably) ask SATA, SAS or \n> SCSI drives to read ahead more (or less) than their default and NOT return \n> the data to the controller?\n>\n> I've never heard of such a thing, but I'm no expert in the command sets for \n> any of this stuff.\n\nI'm pretty sure that's not possible. the OS isn't supposed to even know \nthe internals of the drive.\n\nDavid Lang\n\n> James\n>\n>> \n>> On Thu, Sep 11, 2008 at 12:54 PM, James Mansion \n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> Greg Smith wrote:\n>>\n>> The point I was trying to make there is that even under\n>> impossibly optimal circumstances, you'd be hard pressed to\n>> blow out the disk's read cache with seek-dominated data even\n>> if you read a lot at each seek point. 
That idea didn't make\n>> it from my head into writing very well though.\n>>\n>> Isn't there a bigger danger in blowing out the cache on the\n>> controller and causing premature pageout of its dirty pages?\n>>\n>> If you could get the readahead to work on the drive and not return\n>> data to the controller, that might be dandy, but I'm sceptical.\n>>\n>> James\n>> \n>> \n>>\n>> -- Sent via pgsql-performance mailing list\n>> ([email protected]\n>> <mailto:[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n>> \n>\n>\n>\n", "msg_date": "Sat, 13 Sep 2008 14:21:51 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "On Thu, 11 Sep 2008, Scott Carey wrote:\n> Preliminary summary:\n> \n> readahead  |  8 conc read rate  |  1 conc read rate\n> 49152  |  311  |  314\n> 16384  |  312  |  312\n> 12288  |  304  |  309\n>  8192  |  292  |\n>  4096  |  264  |\n>  2048  |  211  |\n>  1024  |  162  |  302\n>   512  |  108  |\n>   256  |  81  | 300\n>     8  |  38  |\n\nWhat io scheduler are you using? The anticipatory scheduler is meant to \nprevent this slowdown with multiple concurrent reads.\n\nMatthew\n\n\n-- \nAnd the lexer will say \"Oh look, there's a null string. Oooh, there's \nanother. And another.\", and will fall over spectacularly when it realises\nthere are actually rather a lot.\n - Computer Science Lecturer (edited)", "msg_date": "Mon, 15 Sep 2008 17:18:38 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" }, { "msg_contents": "Good question. I'm in the process of completing more exhaustive tests with\nthe various disk i/o schedulers.\n\nBasic findings so far: it depends on what type of concurrency is going on.\nDeadline has the best performance over a range of readahead values compared\nto cfq or anticipatory with concurrent sequential reads with xfs. However,\nmixing random and sequential reads puts cfq ahead with low readahead values\nand deadline ahead with large readahead values (I have not tried\nanticipatory here yet). However, your preference for prioritizing streaming\nover random will significantly impact which you would want to use and at\nwhat readahead value -- cfq does a better job at being consistent balancing\nthe two, deadline swings strongly to being streaming biased as the readahead\nvalue gets larger and random biased when it is low. Deadline and CFQ are\nsimilar with concurrent random reads. I have not gotten to any write tests\nor concurrent read/write tests.\n\nI expect the anticipatory scheduler to perform worse with mixed loads --\nanything asking a raid array that can do 1000 iops to wait for 7 ms and do\nnothing just in case a read in the same area might occur is a bad idea for\naggregate concurrent throughput. It is a scheduler that assumes the\nunderlying hardware is essentially one spindle -- which is why it is so good\nin a standard PC or laptop. 
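(For anyone repeating these scheduler comparisons: the elevator can be switched per device at runtime, no reboot needed -- the device name is a placeholder:)

cat /sys/block/sdb/queue/scheduler             # the active scheduler is shown in brackets
echo deadline > /sys/block/sdb/queue/scheduler
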
But, I could be wrong.\n\nOn Mon, Sep 15, 2008 at 9:18 AM, Matthew Wakeling <[email protected]>wrote:\n\n> On Thu, 11 Sep 2008, Scott Carey wrote:\n>\n>> Preliminary summary:\n>>\n>> readahead | 8 conc read rate | 1 conc read rate\n>> 49152 | 311 | 314\n>> 16384 | 312 | 312\n>> 12288 | 304 | 309\n>> 8192 | 292 |\n>> 4096 | 264 |\n>> 2048 | 211 |\n>> 1024 | 162 | 302\n>> 512 | 108 |\n>> 256 | 81 | 300\n>> 8 | 38 |\n>>\n>\n> What io scheduler are you using? The anticipatory scheduler is meant to\n> prevent this slowdown with multiple concurrent reads.\n>\n> Matthew\n>\n>\n> --\n> And the lexer will say \"Oh look, there's a null string. Oooh, there's\n> another. And another.\", and will fall over spectacularly when it realises\n> there are actually rather a lot.\n> - Computer Science Lecturer (edited)\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 15 Sep 2008 10:33:38 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of setting linux block device readahead size" } ]
[ { "msg_contents": "Greetings,\n\nI'm relatively new to PostgreSQL but I've been in the IT applications \nindustry for a long time, mostly in the LAMP world.\n\nOne thing I'm experiencing some trouble with is running a COPY of a \nlarge file (20+ million records) into a table in a reasonable amount of \ntime. Currently it's taking about 12 hours to complete on a 64 bit \nserver with 3 GB memory allocated (shared_buffer), single SATA 320 GB \ndrive. I don't seem to get any improvement running the same operation \non a dual opteron dual-core, 16 GB server.\n\nI'm not asking for someone to solve my problem, just some direction in \nthe best ways to tune for faster bulk loading, since this will be a \nfairly regular operation for our application (assuming it can work this \nway). I've toyed with the maintenance_work_mem and some of the other \nparams, but it's still way slower than it seems like it should be.\nSo any contributions are much appreciated.\n\nThanks!\n\nP.S. Assume I've done a ton of reading and research into PG tuning, \nwhich I have. I just can't seem to find anything beyond the basics that \ntalks about really speeding up bulk loads.\n", "msg_date": "Wed, 10 Sep 2008 10:48:41 -0600", "msg_from": "Ryan Hansen <[email protected]>", "msg_from_op": true, "msg_subject": "Improve COPY performance for large data sets" }, { "msg_contents": "On Wednesday 10 September 2008, Ryan Hansen <[email protected]> \nwrote:\n>Currently it's taking about 12 hours to complete on a 64 bit\n> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB\n> drive. I don't seem to get any improvement running the same operation\n> on a dual opteron dual-core, 16 GB server.\n>\n> I'm not asking for someone to solve my problem, just some direction in\n> the best ways to tune for faster bulk loading, since this will be a\n> fairly regular operation for our application (assuming it can work this\n> way). I've toyed with the maintenance_work_mem and some of the other\n> params, but it's still way slower than it seems like it should be.\n> So any contributions are much appreciated.\n\nYour drive subsystem, such as it is, is inappropriate for a database. Your \nbottleneck is your drive. \n\nTurning fsync off might help. You should also drop all indexes on the table \nbefore the COPY and add them back after (which would eliminate a lot of \nrandom I/O during the COPY).\n\n-- \nAlan\n", "msg_date": "Wed, 10 Sep 2008 10:14:34 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "In response to Ryan Hansen <[email protected]>:\n> \n> I'm relatively new to PostgreSQL but I've been in the IT applications \n> industry for a long time, mostly in the LAMP world.\n> \n> One thing I'm experiencing some trouble with is running a COPY of a \n> large file (20+ million records) into a table in a reasonable amount of \n> time. Currently it's taking about 12 hours to complete on a 64 bit \n> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB \n> drive. I don't seem to get any improvement running the same operation \n> on a dual opteron dual-core, 16 GB server.\n> \n> I'm not asking for someone to solve my problem, just some direction in \n> the best ways to tune for faster bulk loading, since this will be a \n> fairly regular operation for our application (assuming it can work this \n> way). 
I've toyed with the maintenance_work_mem and some of the other \n> params, but it's still way slower than it seems like it should be.\n> So any contributions are much appreciated.\n\nThere's a program called pgloader which supposedly is faster than copy.\nI've not used it so I can't say definitively how much faster it is.\n\nA single 320G drive isn't going to get you much on speed. How many\nRPM? Watch iostat on your platform to see if you're saturating the\ndrive, if you are, the only way you're going to get it faster is to\nadd more disks in a RAID-10 or similar, or somehow get a faster disk.\n\nYou always have the option to turn off fsync, but be sure you understand\nthe consequences of doing that and have an appropriate failure plan\nbefore doing so.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 10 Sep 2008 13:16:06 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "Hi,\n\nLe mercredi 10 septembre 2008, Ryan Hansen a écrit :\n> One thing I'm experiencing some trouble with is running a COPY of a\n> large file (20+ million records) into a table in a reasonable amount of\n> time. Currently it's taking about 12 hours to complete on a 64 bit\n> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB\n> drive. I don't seem to get any improvement running the same operation\n> on a dual opteron dual-core, 16 GB server.\n\nYou single SATA disk is probably very busy going from reading source file to \nwriting data. You could try raising checkpoint_segments to 64 or more, but a \nsingle SATA disk won't give you high perfs for IOs. You're getting what you \npayed for...\n\nYou could maybe ease the disk load by launching the COPY from a remote (local \nnetword) machine, and while at it if the file is big, try parallel loading \nwith pgloader.\n\nRegards,\n-- \ndim", "msg_date": "Wed, 10 Sep 2008 19:17:37 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "A single SATA drive may not be the best performer, but:\n\n1. It won't make a load take 12 hours unless we're talking a load that is in\ntotal, similar to the size of the disk. A slow, newer SATA drive will read\nand write at at ~50MB/sec at minimum, so the whole 320GB can be scanned at\n3GB per minute. Thats ~ 5 hours. It is not likely that 20M records is over\n20GB, and at that size there is no way the disk is the bottleneck.\n\n2. To figure out if the disk or CPU is a bottleneck, don't assume. Check\niostat or top and look at the disk utilization % and io wait times. Check\nthe backend process CPU utilization. In my experience, there are many\nthings that can cause COPY to be completely CPU bound even with slow disks\n-- I have seen it bound to a 5MB/sec write rate on a 3Ghz CPU, which a drive\nfrom 1998 could handle.\n\nIt seems like this case is resolved, but there are some other good tuning\nrecommendations. Don't blame the disk until the disk is actually showing\nhigh utilization though.\n\nCOPY is bound typically by the disk or a single CPU. 
It is usually CPU\nbound if there are indexes or constraints on the table, and sometimes even\nwhen there are none.\n\nThe pg_bulkload tool in almost all cases, will be significantly faster but\nit has limitations that make it inappropriate for some to use.\n\n\n\nOn Wed, Sep 10, 2008 at 10:14 AM, Alan Hodgson <[email protected]> wrote:\n\n> On Wednesday 10 September 2008, Ryan Hansen <\n> [email protected]>\n> wrote:\n> >Currently it's taking about 12 hours to complete on a 64 bit\n> > server with 3 GB memory allocated (shared_buffer), single SATA 320 GB\n> > drive. I don't seem to get any improvement running the same operation\n> > on a dual opteron dual-core, 16 GB server.\n> >\n> > I'm not asking for someone to solve my problem, just some direction in\n> > the best ways to tune for faster bulk loading, since this will be a\n> > fairly regular operation for our application (assuming it can work this\n> > way). I've toyed with the maintenance_work_mem and some of the other\n> > params, but it's still way slower than it seems like it should be.\n> > So any contributions are much appreciated.\n>\n> Your drive subsystem, such as it is, is inappropriate for a database. Your\n> bottleneck is your drive.\n>\n> Turning fsync off might help. You should also drop all indexes on the table\n> before the COPY and add them back after (which would eliminate a lot of\n> random I/O during the COPY).\n>\n> --\n> Alan\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nA single SATA drive may not be the best performer, but:\n\n1. It won't make a load take 12 hours unless we're talking a load that\nis in total, similar to the size of the disk.  A slow, newer SATA drive\nwill read and write at at ~50MB/sec at minimum, so the whole 320GB can\nbe scanned at 3GB per minute.  Thats ~ 5 hours.  It is not likely that 20M records is over 20GB, and at that size there is no way the disk is the bottleneck.\n\n2. To figure out if the disk or CPU is a bottleneck, don't assume. \nCheck iostat or top and look at the disk utilization % and io wait\ntimes.  Check the backend process CPU utilization.  In my experience,\nthere are many things that can cause COPY to be completely CPU bound\neven with slow disks -- I have seen it bound to a 5MB/sec write\nrate on a 3Ghz CPU, which a drive from 1998 could handle. \n\nIt seems like this case is resolved, but there are some other good tuning recommendations.  Don't blame the disk until the disk is actually showing high utilization though.  COPY is bound typically by the disk or a single CPU.  It is usually CPU bound if there are indexes or constraints on the table, and sometimes even when there are none.\nThe pg_bulkload tool in almost all cases, will be significantly faster but it has limitations that make it inappropriate for some to use.On Wed, Sep 10, 2008 at 10:14 AM, Alan Hodgson <[email protected]> wrote:\nOn Wednesday 10 September 2008, Ryan Hansen <[email protected]>\n\nwrote:\n>Currently it's taking about 12 hours to complete on a 64 bit\n> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB\n> drive.  
I don't seem to get any improvement running the same operation\n> on a dual opteron dual-core, 16 GB server.\n>\n> I'm not asking for someone to solve my problem, just some direction in\n> the best ways to tune for faster bulk loading, since this will be a\n> fairly regular operation for our application (assuming it can work this\n> way).  I've toyed with the maintenance_work_mem and some of the other\n> params, but it's still way slower than it seems like it should be.\n> So any contributions are much appreciated.\n\nYour drive subsystem, such as it is, is inappropriate for a database. Your\nbottleneck is your drive.\n\nTurning fsync off might help. You should also drop all indexes on the table\nbefore the COPY and add them back after (which would eliminate a lot of\nrandom I/O during the COPY).\n\n--\nAlan\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 10 Sep 2008 10:49:22 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "Correction --\n 2 hours to read the whole disk.\n\n1. It won't make a load take 12 hours unless we're talking a load that is in\n> total, similar to the size of the disk. A slow, newer SATA drive will read\n> and write at at ~50MB/sec at minimum, so the whole 320GB can be scanned at\n> 3GB per minute. Thats ~ 5 hours. It is not likely that 20M records is over\n> 20GB, and at that size there is no way the disk is the bottleneck.\n>\n\nCorrection --  2 hours to read the whole disk.\n1. It won't make a load take 12 hours unless we're talking a load that\nis in total, similar to the size of the disk.  A slow, newer SATA drive\nwill read and write at at ~50MB/sec at minimum, so the whole 320GB can\nbe scanned at 3GB per minute.  Thats ~ 5 hours.  It is not likely that 20M records is over 20GB, and at that size there is no way the disk is the bottleneck.", "msg_date": "Wed, 10 Sep 2008 10:51:34 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nLe 10 sept. 08 � 19:16, Bill Moran a �crit :\n> There's a program called pgloader which supposedly is faster than \n> copy.\n> I've not used it so I can't say definitively how much faster it is.\n\nIn fact pgloader is using COPY under the hood, and doing so via a \nnetwork connection (could be unix domain socket), whereas COPY on the \nserver reads the file content directly from the local file. So no, \npgloader is not good for being faster than copy.\n\nThat said, pgloader is able to split the workload between as many \nthreads as you want to, and so could saturate IOs when the disk \nsubsystem performs well enough for a single CPU not to be able to \noverload it. Two parallel loading mode are supported, pgloader will \neither hav N parts of the file processed by N threads, or have one \nthread read and parse the file then fill up queues for N threads to \nsend COPY commands to the server.\n\nNow, it could be that using pgloader with a parallel setup performs \nbetter than plain COPY on the server. This remains to get tested, the \nuse case at hand is said to be for hundreds of GB or some TB data \nfile. 
I don't have any facilities to testdrive such a setup...\n\nNote that those pgloader parallel options have been asked by \nPostgreSQL hackers in order to testbed some ideas with respect to a \nparallel pg_restore, maybe re-explaining what have been implemented \nwill reopen this can of worms :)\n\nRegards,\n- --\ndim\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (Darwin)\n\niEYEARECAAYFAkjINB0ACgkQlBXRlnbh1bmhkgCgu4TduBB0bnscuEsy0CCftpSp\nO5IAoMsrPoXAB+SJEr9s5pMCYBgH/CNi\n=1c5H\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 10 Sep 2008 22:54:53 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "On Wed, Sep 10, 2008 at 11:16 AM, Bill Moran\n<[email protected]> wrote:\n> There's a program called pgloader which supposedly is faster than copy.\n> I've not used it so I can't say definitively how much faster it is.\n\nI think you are thinking of pg_bulkloader...\n", "msg_date": "Wed, 10 Sep 2008 15:06:31 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve COPY performance for large data sets" } ]
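The recurring advice in this thread boils down to: drop indexes and constraints, COPY, then rebuild. A rough sketch of that recipe with made-up database, table, index and file names (the file path must be readable by the postgres server process, since a server-side COPY opens it directly):

psql -d mydb -c "DROP INDEX IF EXISTS bigtable_customer_idx;"                    # hypothetical index
psql -d mydb -c "COPY bigtable FROM '/tmp/bigtable.csv' WITH CSV;"               # server-side bulk load
psql -d mydb -c "CREATE INDEX bigtable_customer_idx ON bigtable (customerid);"   # rebuild afterwards
psql -d mydb -c "ANALYZE bigtable;"                                              # refresh planner stats

Foreign keys and CHECK constraints get the same treatment (ALTER TABLE ... DROP CONSTRAINT before the load, ADD CONSTRAINT afterwards), and raising maintenance_work_mem in the session that rebuilds the indexes speeds up the CREATE INDEX step.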
[ { "msg_contents": "NEVERMIND!!\n\nI found it. Turns out there was still a constraint on the table. Once \nI dropped that, the time went down to 44 minutes.\n\nMaybe I am an idiot after all. :)\n\n-Ryan\n\nGreetings,\n\nI'm relatively new to PostgreSQL but I've been in the IT applications \nindustry for a long time, mostly in the LAMP world.\n\nOne thing I'm experiencing some trouble with is running a COPY of a \nlarge file (20+ million records) into a table in a reasonable amount of \ntime. Currently it's taking about 12 hours to complete on a 64 bit \nserver with 3 GB memory allocated (shared_buffer), single SATA 320 GB \ndrive. I don't seem to get any improvement running the same operation \non a dual opteron dual-core, 16 GB server.\n\nI'm not asking for someone to solve my problem, just some direction in \nthe best ways to tune for faster bulk loading, since this will be a \nfairly regular operation for our application (assuming it can work this \nway). I've toyed with the maintenance_work_mem and some of the other \nparams, but it's still way slower than it seems like it should be.\nSo any contributions are much appreciated.\n\nThanks!\n\nP.S. Assume I've done a ton of reading and research into PG tuning, \nwhich I have. I just can't seem to find anything beyond the basics that \ntalks about really speeding up bulk loads.", "msg_date": "Wed, 10 Sep 2008 11:14:23 -0600", "msg_from": "Ryan Hansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve COPY performance for large data sets" }, { "msg_contents": "I suspect your table has index, or checkpoint_segments is small and lead PG\ndo checkpoint frequently. \nIf the table has index or constraint, drop it and copy it ,after copy\nfinished, do create index or constraint again.\nIf checkpoint_segments is small, enlarge it.\nAnd also you can turn fsync off when you do copy, after finish, turn it on\nagain.\nAnd also you can enlarge maintenance_work_mem.\n\nIf you take above, time cost will down significantly.\n\n\t 莫建祥\t\n阿里巴巴软件(上海)有限公司\n研发中心-IM服务端开发部 \n联系方式:86-0571-85022088-13072\n贸易通ID:jaymo 淘宝ID:jackem\n公司网站:www.alisoft.com\nwiki:http://10.0.32.21:1688/confluence/pages/viewpage.action?pageId=10338\n\n-----邮件原件-----\n发件人: [email protected]\n[mailto:[email protected]] 代表 Ryan Hansen\n发送时间: 2008年9月11日 1:14\n收件人: [email protected]\n主题: Re: [PERFORM] Improve COPY performance for large data sets\n\nNEVERMIND!!\n\nI found it. Turns out there was still a constraint on the table. Once \nI dropped that, the time went down to 44 minutes.\n\nMaybe I am an idiot after all. :)\n\n-Ryan\n\n", "msg_date": "Thu, 11 Sep 2008 09:23:04 +0800", "msg_from": "\"jay\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?gb2312?B?tPC4tDogW1BFUkZPUk1dIEltcHJvdmUgQ09QWSBwZXJmb3JtYW5jZQ==?=\n\t=?gb2312?B?IGZvciBsYXJnZSBkYXRhIHNldHM=?=" } ]
[ { "msg_contents": "I'm about to buy a new server. It will be a Xeon system with two \nprocessors (4 cores per processor) and 16GB RAM. Two RAID extenders \nwill be attached to an Intel s5000 series motherboard, providing 12 \nSAS/Serial ATA connectors.\n\nThe server will run FreeBSD 7.0, PostgreSQL 8, apache, PHP, mail server, \ndovecot IMAP server and background programs for database maintenance. On \nour current system, I/O performance for PostgreSQL is the biggest \nproblem, but sometimes all CPUs are at 100%. Number of users using this \nsystem:\n\nPostgreSQL: 30 connections\nApache: 30 connections\nIMAP server: 15 connections\n\nThe databases are mostly OLTP, but the background programs are creating \nhistorical data and statistic data continuously, and sometimes web site \nvisitors/serach engine robots run searches in bigger tables (with \n3million+ records).\n\nThere is an expert at the company who sells the server, and he \nrecommended that I use SAS disks for the base system at least. I would \nlike to use many SAS disks, but they are just too expensive. So the \nbasic system will reside on a RAID 1 array, created from two SAS disks \nspinning at 15 000 rpm. I will buy 10 pieces of Seagate Barracuda 320GB \nSATA (ES 7200) disks for the rest.\n\nThe expert told me to use RAID 5 but I'm hesitating. I think that RAID \n1+0 would be much faster, and I/O performance is what I really need.\n\nI would like to put the WAL file on the SAS disks to improve \nperformance, and create one big RAID 1+0 disk for the data directory. \nBut maybe I'm completely wrong. Can you please advise how to create \nlogical partitions? The hardware is capable of handling different types \nof RAID volumes on the same set of disks. For example, a smaller RAID 0 \nfor indexes and a bigger RAID 5 etc.\n\nIf you need more information about the database, please ask. :-)\n\nThank you very much,\n\n Laszlo\n\n", "msg_date": "Thu, 11 Sep 2008 18:29:36 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Choosing a filesystem" }, { "msg_contents": "On Thu, Sep 11, 2008 at 06:29:36PM +0200, Laszlo Nagy wrote:\n\n> The expert told me to use RAID 5 but I'm hesitating. I think that RAID 1+0 \n> would be much faster, and I/O performance is what I really need.\n\nI think you're right. I think it's a big mistake to use RAID 5 in a\ndatabase server where you're hoping for reasonable write performance.\nIn theory RAID 5 ought to be fast for reads, but I've never seen it\nwork that way.\n\n> I would like to put the WAL file on the SAS disks to improve performance, \n> and create one big RAID 1+0 disk for the data directory. But maybe I'm \n> completely wrong. Can you please advise how to create logical partitions? \n\nI would listen to yourself before you listen to the expert. You sound\nright to me :)\n\nA\n\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Thu, 11 Sep 2008 13:07:26 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Thu, 11 Sep 2008, Laszlo Nagy wrote:\n> So the basic system will reside on a RAID 1 array, created from two SAS \n> disks spinning at 15 000 rpm. I will buy 10 pieces of Seagate Barracuda \n> 320GB SATA (ES 7200) disks for the rest.\n\nThat sounds good. Put RAID 1 on the pair, and RAID 1+0 on the rest. It'll \nbe pretty good. 
Put the OS and the WAL on the pair, and the database on \nthe large array.\n\nHowever, one of the biggest things that will improve your performance \n(especially in OLTP) is to use a proper RAID controller with a \nbattery-backed-up cache.\n\nMatthew\n\n-- \nX's book explains this very well, but, poor bloke, he did the Cambridge Maths \nTripos... -- Computer Science Lecturer\n", "msg_date": "Thu, 11 Sep 2008 18:18:37 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Thu, Sep 11, 2008 at 06:18:37PM +0100, Matthew Wakeling wrote:\n> On Thu, 11 Sep 2008, Laszlo Nagy wrote:\n>> So the basic system will reside on a RAID 1 array, created from two SAS \n>> disks spinning at 15 000 rpm. I will buy 10 pieces of Seagate Barracuda \n>> 320GB SATA (ES 7200) disks for the rest.\n>\n> That sounds good. Put RAID 1 on the pair, and RAID 1+0 on the rest. It'll \n> be pretty good. Put the OS and the WAL on the pair, and the database on the \n> large array.\n>\n> However, one of the biggest things that will improve your performance \n> (especially in OLTP) is to use a proper RAID controller with a \n> battery-backed-up cache.\n>\n> Matthew\n>\n\nBut remember that putting the WAL on a separate drive(set) will only\nhelp if you do not have competing I/O, such as system logging or paging,\ngoing to the same drives. This turns your fast sequential I/O into\nrandom I/O with the accompaning 10x or more performance decrease.\n\nKen\n", "msg_date": "Thu, 11 Sep 2008 12:23:32 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": ">>> Kenneth Marshall <[email protected]> wrote:\n> On Thu, Sep 11, 2008 at 06:18:37PM +0100, Matthew Wakeling wrote:\n>> On Thu, 11 Sep 2008, Laszlo Nagy wrote:\n>>> So the basic system will reside on a RAID 1 array, created from two\nSAS \n>>> disks spinning at 15 000 rpm. I will buy 10 pieces of Seagate\nBarracuda \n>>> 320GB SATA (ES 7200) disks for the rest.\n>>\n>> That sounds good. Put RAID 1 on the pair, and RAID 1+0 on the rest.\nIt'll \n>> be pretty good. Put the OS and the WAL on the pair, and the database\non the \n>> large array.\n>>\n>> However, one of the biggest things that will improve your\nperformance \n>> (especially in OLTP) is to use a proper RAID controller with a \n>> battery-backed-up cache.\n> \n> But remember that putting the WAL on a separate drive(set) will only\n> help if you do not have competing I/O, such as system logging or\npaging,\n> going to the same drives. This turns your fast sequential I/O into\n> random I/O with the accompaning 10x or more performance decrease.\n \nUnless you have a good RAID controller with battery-backed-up cache.\n \n-Kevin\n", "msg_date": "Thu, 11 Sep 2008 12:30:16 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "\n>> going to the same drives. This turns your fast sequential I/O into\n>> random I/O with the accompaning 10x or more performance decrease.\n>> \n> \n> Unless you have a good RAID controller with battery-backed-up cache.\n> \nAll right. :-) This is what I'll have:\n\nBoxed Intel Server Board S5000PSLROMB with 8-port SAS ROMB card \n(Supports 45nm processors (Harpertown and Wolfdale-DP)\nIntel� RAID Activation key AXXRAK18E enables full intelligent SAS RAID \non S5000PAL, S5000PSL, SR4850HW4/M, SR6850HW4/M. 
RoHS Compliant.\n512 MB 400MHz DDR2 ECC Registered CL3 DIMM Single Rank, x8(for \ns5000pslromb)\n6-drive SAS/SATA backplane with expander (requires 2 SAS ports) for \nSC5400 and SC5299 (two pieces)\n5410 Xeon 2.33 GHz/1333 FSB/12MB Dobozos , Passive cooling / 80W (2 pieces)\n2048 MB 667MHz DDR2 ECC Fully Buffered CL5 DIMM Dual Rank, x8 (8 pieces)\n\nSAS disks will be: 146.8 GB, SAS 3G,15000RPM, 16 MB cache (two pieces)\nSATA disks will be: HDD Server SEAGATE Barracuda ES 7200.1 \n(320GB,16MB,SATA II-300) __(10 pieces)\n\nI cannot spend more money on this computer, but since you are all \ntalking about battery back up, I'll try to get money from the management \nand buy this:\n\nIntel� RAID Smart Battery AXXRSBBU3, optional battery back up for use \nwith AXXRAK18E and SRCSAS144E. RoHS Complaint.\n\n\nThis server will also be an IMAP server, web server etc. so I'm 100% \nsure that the SAS disks will be used for logging. I have two spare 200GB \nSATA disks here in the office but they are cheap ones designed for \ndesktop computers. Is it okay to dedicate these disks for the WAL file \nin RAID1? Will it improve performance? How much trouble would it cause \nif the WAL file goes wrong? Should I just put the WAL file on the RAID \n1+0 array?\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Thu, 11 Sep 2008 19:47:41 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Thu, Sep 11, 2008 at 10:29 AM, Laszlo Nagy <[email protected]> wrote:\n> I'm about to buy a new server. It will be a Xeon system with two processors\n> (4 cores per processor) and 16GB RAM. Two RAID extenders will be attached\n> to an Intel s5000 series motherboard, providing 12 SAS/Serial ATA\n> connectors.\n>\n> The server will run FreeBSD 7.0, PostgreSQL 8, apache, PHP, mail server,\n> dovecot IMAP server and background programs for database maintenance. On our\n> current system, I/O performance for PostgreSQL is the biggest problem, but\n> sometimes all CPUs are at 100%. Number of users using this system:\n\n100% what? sys? user? iowait? if it's still iowait, then the newer,\nbigger, faster RAID should really help.\n\n> PostgreSQL: 30 connections\n> Apache: 30 connections\n> IMAP server: 15 connections\n>\n> The databases are mostly OLTP, but the background programs are creating\n> historical data and statistic data continuously, and sometimes web site\n> visitors/serach engine robots run searches in bigger tables (with 3million+\n> records).\n\nThis might be a good application to setup where you slony replicate to\nanother server, then run your I/O intensive processes against the\nslave.\n\n> There is an expert at the company who sells the server, and he recommended\n> that I use SAS disks for the base system at least. I would like to use many\n> SAS disks, but they are just too expensive. So the basic system will reside\n> on a RAID 1 array, created from two SAS disks spinning at 15 000 rpm. I will\n> buy 10 pieces of Seagate Barracuda 320GB SATA (ES 7200) disks for the rest.\n\nSAS = a bit faster, and better at parallel work. However, short\nstroking 7200 RPM SATA drives on the fastest parts of the platters can\nget you close to SAS territory for a fraction of the cost, plus you\ncan then store backups etc on the rest of the drives at night.\n\nSo, you're gonna put the OS o RAID1, and pgsql on the rest... Makes\nsense. consider setting up another RAID1 for the pg_clog directory.\n\n> The expert told me to use RAID 5 but I'm hesitating. 
I think that RAID 1+0\n> would be much faster, and I/O performance is what I really need.\n\nThe expert is most certainly wrong for an OLTP database. If your RAID\ncontroller can't run RAID-10 quickly compared to RAID-5 then it's a\ncrap card, and you need a better one. Or put it into JBOD and let the\nOS handle the RAID-10 work. Or split it RAID-1 sets on the\ncontroller, RAID-0 in the OS.\n\n> I would like to put the WAL file on the SAS disks to improve performance,\n\nActually, the WAL doesn't need SAS for good performance really.\nExcept for the 15K.6 Seagate Cheetahs, most decent SATA drives are\nwithin a few percentage of SAS drives for sequential write / read\nspeed, which is what the WAL basically does.\n\n> and create one big RAID 1+0 disk for the data directory. But maybe I'm\n> completely wrong. Can you please advise how to create logical partitions?\n> The hardware is capable of handling different types of RAID volumes on the\n> same set of disks. For example, a smaller RAID 0 for indexes and a bigger\n> RAID 5 etc.\n\nAvoid RAID-5 on OLTP.\n\nNow, if you have a slony slave for the aggregate work stuff, and\nyou're doing big reads and writes, RAID-5 on a large SATA set may be a\ngood and cost effective solution.\n", "msg_date": "Thu, 11 Sep 2008 12:25:57 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Thu, Sep 11, 2008 at 11:47 AM, Laszlo Nagy <[email protected]> wrote:\n> I cannot spend more money on this computer, but since you are all talking\n> about battery back up, I'll try to get money from the management and buy\n> this:\n>\n> Intel(R) RAID Smart Battery AXXRSBBU3, optional battery back up for use with\n> AXXRAK18E and SRCSAS144E. RoHS Complaint.\n\nSacrifice a couple of SAS drives to get that.\n\nI'd rather have all SATA drives and a BBU than SAS without one.\n", "msg_date": "Thu, 11 Sep 2008 12:32:12 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "Laszlo Nagy wrote:\n> I cannot spend more money on this computer, but since you are all \n> talking about battery back up, I'll try to get money from the management \n> and buy this:\n> \n> Intel� RAID Smart Battery AXXRSBBU3, optional battery back up for use \n> with AXXRAK18E and SRCSAS144E. RoHS Complaint.\n\nThe battery-backup is really important. You'd be better off to drop down to 8 disks in a RAID 1+0 and put everything on it, if that meant you could use the savings to get the battery-backed RAID controller. The performance improvement of a BB cache is amazing.\n\nBased on advice from this group, configured our systems with a single 8-disk RAID 1+0 with a battery-backed cache. It holds the OS, WAL and database, and it is VERY fast. We're very happy with it.\n\nCraig\n", "msg_date": "Thu, 11 Sep 2008 12:53:36 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Thu, 11 Sep 2008, Laszlo Nagy wrote:\n\n> The expert told me to use RAID 5 but I'm hesitating.\n\nYour \"expert\" isn't--at least when it comes to database performance. \nTrust yourself here, you've got the right general idea.\n\nBut I can't make any sense out of exactly how your disks are going to be \nconnected to the server with that collection of hardware. 
What I can tell \nis that you're approaching that part backwards, probably under the \ninfluence of the vendor you're dealing with, and since they don't \nunderstand what you're doing you're stuck sorting that out.\n\nIf you want your database to perform well on writes, the first thing you \ndo is select a disk controller that performs well, has a well-known stable \ndriver for your OS, has a reasonably large cache (>=256MB), and has a \nbattery backup on it. I don't know anything about how well this Intel \nRAID performs under FreeBSD, but you should check that if you haven't \nalready. From the little bit I read about it I'm concerned if it's fast \nenough for as many drives as you're using. The wrong disk controller will \nmake a slow mess out of any hardware you throw at it.\n\nThen, you connect as many drives to the caching controller as you can for \nthe database. OS drives can connect to another controller (like the ports \non the motherboard), but you shouldn't use them for either the database \ndata or the WAL. That's what I can't tell from your outline of the server \nconfiguration; if it presumes a couple of the SATA disks holding database \ndata are using the motherboard ports, you need to stop there and get a \nlarger battery backed caching controller.\n\nIf you're on a limited budget and the choice is between more SATA disks or \nless SAS disks, get more of the SATA ones. Once you've filled the \navailable disk slots on the controller or run out of room in the chassis, \nif there's money leftover then you can go back and analyze whether \nreplacing some of those with SAS disks makes sense--as long as they're \nstill connected to a caching controller. I don't know what flexibility \nthe \"SAS/SATA backplane\" you listed has here.\n\nYou've got enough disks that it may be worthwhile to set aside two of them \nto be dedicated WAL volumes. That call depends on the balance of OLTP \nwrites (which are more likely to take advantage of that) versus the \nreports scans (which would prefer more disks in the database array), and \nthe only way you'll know for sure is to benchmark both configurations with \nsomething resembling your application. Since you should always do stress \ntesting on any new hardware anyway before it goes into production, that's \na good time to run comparisons like that.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 11 Sep 2008 16:11:13 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Thu, 11 Sep 2008, Greg Smith wrote:\n> If you want your database to perform well on writes, the first thing you do \n> is select a disk controller that performs well, has a well-known stable \n> driver for your OS, has a reasonably large cache (>=256MB), and has a battery \n> backup on it.\n\nGreg, it might be worth you listing a few good RAID controllers. It's \nalmost an FAQ. From what I'm hearing, this Intel one doesn't sound like it \nwould be on the list.\n\nMatthew\n\n-- \nRiker: Our memory pathways have become accustomed to your sensory input.\nData: I understand - I'm fond of you too, Commander. 
And you too Counsellor\n", "msg_date": "Fri, 12 Sep 2008 08:45:10 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "Craig James <craig_james 'at' emolecules.com> writes:\n\n> The performance improvement of a BB cache is amazing.\n\nCould some of you share the insight on why this is the case? I\ncannot find much information on it on wikipedia, for example.\nEven http://linuxfinances.info/info/diskusage.html doesn't\nexplain *why*.\n\nOut of the blue, is it just because when postgresql fsync's after\na write, on a normal system the write has to really happen on\ndisk and waiting for it to be complete, whereas with BBU cache\nthe fsync is almost immediate because the write cache actually\nreplaces the \"really on disk\" write?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Fri, 12 Sep 2008 10:38:45 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Fri, 12 Sep 2008, Matthew Wakeling wrote:\n\n> Greg, it might be worth you listing a few good RAID controllers. It's almost \n> an FAQ.\n\nI started doing that at the end of \nhttp://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks , that still needs \nsome work. What I do periodically is sweep through old messages here that \nhave useful FAQ text and dump them into the appropriate part of \nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 12 Sep 2008 05:01:26 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Fri, 12 Sep 2008, Guillaume Cottenceau wrote:\n\n> Out of the blue, is it just because when postgresql fsync's after a \n> write, on a normal system the write has to really happen on disk and \n> waiting for it to be complete, whereas with BBU cache the fsync is \n> almost immediate because the write cache actually replaces the \"really \n> on disk\" write?\n\nThat's the main thing, and nothing else you can do will accelerate that. \nWithout a useful write cache (which usually means RAM with a BBU), you'll \nat best get about 100-200 write transactions per second for any one \nclient, and something like 500/second even with lots of clients (queued up \ntransaction fsyncs do get combined). Those numbers increase to several \nthousand per second the minute there's a good caching controller in the \nmix.\n\nYou might say \"but I don't write that heavily, so what?\" Even if the \nwrite volume is low enough that the disk can keep up, there's still \nlatency. A person who is writing transactions is going to be delayed a \nfew milliseconds after each commit, which drags some types of data loading \nto a crawl. Also, without a cache in places mixes of fsync'd writes and \nreads can behave badly, with readers getting stuck behind writers much \nmore often than in the cached situation.\n\nThe final factor is that additional layers of cache usually help improve \nphysical grouping of blocks into ordered sections to lower seek overhead. \nThe OS is supposed to be doing that for you, but a cache closer to the \ndrives themselves helps smooth things out when the OS dumps a large block \nof data out for some reason. 
The classic example in PostgreSQL land, \nparticularly before 8.3, was when a checkpoint happens.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 12 Sep 2008 05:11:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Fri, Sep 12, 2008 at 5:11 AM, Greg Smith <[email protected]> wrote:\n> On Fri, 12 Sep 2008, Guillaume Cottenceau wrote:\n>\n> That's the main thing, and nothing else you can do will accelerate that.\n> Without a useful write cache (which usually means RAM with a BBU), you'll at\n> best get about 100-200 write transactions per second for any one client, and\n> something like 500/second even with lots of clients (queued up transaction\n> fsyncs do get combined). Those numbers increase to several thousand per\n> second the minute there's a good caching controller in the mix.\n\nWhile this is correct, if heavy writing is sustained, especially on\nlarge databases, you will eventually outrun the write cache on the\ncontroller and things will start to degrade towards the slow case. So\nit's fairer to say that caching raid controllers burst up to several\nthousand per second, with a sustained write rate somewhat better than\nwrite-through but much worse than the burst rate.\n\nHow fast things degrade from the burst rate depends on certain\nfactors...how big the database is relative to the o/s read cache in\nthe controller write cache, and how random the i/o is generally. One\nthing raid controllers are great at is smoothing bursty i/o during\ncheckpoints for example.\n\nUnfortunately when you outrun cache on raid controllers the behavior\nis not always very pleasant...in at least one case I've experienced\n(perc 5/i) when the cache fills up the card decides to clear it before\ncontinuing. This means that if fsync is on, you get unpredictable\nrandom freezing pauses while the cache is clearing.\n\nmerlin\n", "msg_date": "Fri, 12 Sep 2008 10:35:32 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Fri, 12 Sep 2008, Merlin Moncure wrote:\n\n> On Fri, Sep 12, 2008 at 5:11 AM, Greg Smith <[email protected]> wrote:\n>> On Fri, 12 Sep 2008, Guillaume Cottenceau wrote:\n>>\n>> That's the main thing, and nothing else you can do will accelerate that.\n>> Without a useful write cache (which usually means RAM with a BBU), you'll at\n>> best get about 100-200 write transactions per second for any one client, and\n>> something like 500/second even with lots of clients (queued up transaction\n>> fsyncs do get combined). Those numbers increase to several thousand per\n>> second the minute there's a good caching controller in the mix.\n>\n> While this is correct, if heavy writing is sustained, especially on\n> large databases, you will eventually outrun the write cache on the\n> controller and things will start to degrade towards the slow case. So\n> it's fairer to say that caching raid controllers burst up to several\n> thousand per second, with a sustained write rate somewhat better than\n> write-through but much worse than the burst rate.\n>\n> How fast things degrade from the burst rate depends on certain\n> factors...how big the database is relative to the o/s read cache in\n> the controller write cache, and how random the i/o is generally. 
One\n> thing raid controllers are great at is smoothing bursty i/o during\n> checkpoints for example.\n>\n> Unfortunately when you outrun cache on raid controllers the behavior\n> is not always very pleasant...in at least one case I've experienced\n> (perc 5/i) when the cache fills up the card decides to clear it before\n> continuing. This means that if fsync is on, you get unpredictable\n> random freezing pauses while the cache is clearing.\n\nalthough for postgres the thing that you are doing the fsync on is the WAL \nlog file. that is a single (usually) contiguous file. As such it is very \nefficiant to write large chunks of it. so while you will degrade from the \nbattery-only mode, the fact that the controller can flush many requests \nworth of writes out to the WAL log at once while you fill the cache with \nthem one at a time is still a significant win.\n\nDavid Lang\n", "msg_date": "Sat, 13 Sep 2008 14:26:40 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "On Sat, Sep 13, 2008 at 5:26 PM, <[email protected]> wrote:\n> On Fri, 12 Sep 2008, Merlin Moncure wrote:\n>>\n>> While this is correct, if heavy writing is sustained, especially on\n>> large databases, you will eventually outrun the write cache on the\n>> controller and things will start to degrade towards the slow case. So\n>> it's fairer to say that caching raid controllers burst up to several\n>> thousand per second, with a sustained write rate somewhat better than\n>> write-through but much worse than the burst rate.\n>>\n>> How fast things degrade from the burst rate depends on certain\n>> factors...how big the database is relative to the o/s read cache in\n>> the controller write cache, and how random the i/o is generally. One\n>> thing raid controllers are great at is smoothing bursty i/o during\n>> checkpoints for example.\n>>\n>> Unfortunately when you outrun cache on raid controllers the behavior\n>> is not always very pleasant...in at least one case I've experienced\n>> (perc 5/i) when the cache fills up the card decides to clear it before\n>> continuing. This means that if fsync is on, you get unpredictable\n>> random freezing pauses while the cache is clearing.\n>\n> although for postgres the thing that you are doing the fsync on is the WAL\n> log file. that is a single (usually) contiguous file. As such it is very\n> efficiant to write large chunks of it. so while you will degrade from the\n> battery-only mode, the fact that the controller can flush many requests\n> worth of writes out to the WAL log at once while you fill the cache with\n> them one at a time is still a significant win.\n\nThe heap files have to be synced as well during checkpoints, etc.\n\nmerlin\n", "msg_date": "Sun, 14 Sep 2008 22:49:14 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "Merlin Moncure wrote:\n> > although for postgres the thing that you are doing the fsync on is the WAL\n> > log file. that is a single (usually) contiguous file. As such it is very\n> > efficiant to write large chunks of it. 
so while you will degrade from the\n> > battery-only mode, the fact that the controller can flush many requests\n> > worth of writes out to the WAL log at once while you fill the cache with\n> > them one at a time is still a significant win.\n> \n> The heap files have to be synced as well during checkpoints, etc.\n\nTrue, but as of 8.3 those checkpoint fsyncs are spread over the interval\nbetween checkpoints.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 23 Sep 2008 13:02:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" }, { "msg_contents": "\nOn Tue, 2008-09-23 at 13:02 -0400, Bruce Momjian wrote:\n> Merlin Moncure wrote:\n> > > although for postgres the thing that you are doing the fsync on is the WAL\n> > > log file. that is a single (usually) contiguous file. As such it is very\n> > > efficiant to write large chunks of it. so while you will degrade from the\n> > > battery-only mode, the fact that the controller can flush many requests\n> > > worth of writes out to the WAL log at once while you fill the cache with\n> > > them one at a time is still a significant win.\n> > \n> > The heap files have to be synced as well during checkpoints, etc.\n> \n> True, but as of 8.3 those checkpoint fsyncs are spread over the interval\n> between checkpoints.\n\nNo, the fsyncs still all happen in a tight window after we have issued\nthe writes. There's no waits in between them at all. The delays we\nintroduced are all in the write phase. Whether that is important or not\ndepends upon OS parameter settings.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 23 Sep 2008 20:01:15 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Choosing a filesystem" } ]
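One concrete piece of the advice above, putting the WAL on its own pair of drives, is usually done by relocating pg_xlog and leaving a symlink behind. A sketch with placeholder paths (the data directory and the mount point of the dedicated mirror will differ per install; stop the cluster first and run this as the OS user that owns the data directory so ownership stays intact):

pg_ctl -D /usr/local/pgsql/data stop
mv /usr/local/pgsql/data/pg_xlog /waldisk/pg_xlog     # /waldisk = mount point of the dedicated RAID1 pair
ln -s /waldisk/pg_xlog /usr/local/pgsql/data/pg_xlog
pg_ctl -D /usr/local/pgsql/data start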
[ { "msg_contents": "I'm trying to optimize postgres performance on a headless solid state\nhardware platform (no fans or disks). I have the database stored on a\nUSB 2.0 flash drive (hdparm benchmarks reads at 10 MB/s). Performance is\nlimited by the 533Mhz CPU.\n\nHardware:\nIXP425 XScale (big endian) 533Mhz 64MB RAM\nUSB 2.0 Flash Drive\n \nSoftware:\nLinux 2.6.21.4\npostgres 8.2.5\n\nI created a fresh database using initdb, then added one table.\n\nHere is the create table:\nCREATE TABLE archivetbl\n(\n \"DateTime\" timestamp without time zone,\n \"StationNum\" smallint,\n \"DeviceDateTime\" timestamp without time zone,\n \"DeviceNum\" smallint,\n \"Tagname\" character(64),\n \"Value\" double precision,\n \"Online\" boolean\n)\nWITH (OIDS=FALSE);\nALTER TABLE archivetbl OWNER TO novatech;\n\nI've attached my postgresql.conf\n\nI populated the table with 38098 rows.\n\nI'm doing this simple query:\nselect * from archivetbl;\n \nIt takes 79 seconds to complete the query (when postgres is compiled\nwith -O2). I'm running the query from pgadmin3 over TCP/IP.\n\ntop shows CPU usage is at 100% with 95% being in userspace. oprofile\nshows memset is using 58% of the CPU cycles!\n\nCPU: ARM/XScale PMU2, speed 0 MHz (estimated)\nCounted CPU_CYCLES events (clock cycles counter) with a unit mask of\n0x00 (No unit mask) count 100000\nsamples % app name symbol name\n288445 57.9263 libc-2.5.so memset\n33273 6.6820 vmlinux default_idle\n27910 5.6050 vmlinux cpu_idle\n12611 2.5326 vmlinux schedule\n8803 1.7678 libc-2.5.so __printf_fp\n7448 1.4957 postgres dopr\n6404 1.2861 libc-2.5.so vfprintf\n6398 1.2849 oprofiled (no symbols)\n4992 1.0025 postgres __udivdi3\n4818 0.9676 vmlinux run_timer_softirq\n\n\nI was having trouble getting oprofile to give a back trace for memset\n(probably because my libc is optimized). 
So I redefined MemSet to call this:\nvoid * gmm_memset(void *s, int c, size_t n)\n{\n int i=0;\n unsigned char * p = (unsigned char *)s;\n for(i=0; i<n; i++)\n {\n p[i]=0;\n }\n return s;\n}\n\nHere are the oprofile results for the same select query.\n\nCPU: ARM/XScale PMU2, speed 0 MHz (estimated)\nCounted CPU_CYCLES events (clock cycles counter) with a unit mask of\n0x00 (No unit mask) count 100000\nsamples % image name app name \nsymbol name\n-------------------------------------------------------------------------------\n 1 5.2e-04 postgres postgres \nLockAcquire\n 1 5.2e-04 postgres postgres \nset_ps_display\n 20 0.0103 postgres postgres \npg_vsprintf\n 116695 60.2947 postgres postgres dopr\n116717 60.3061 postgres postgres \ngmm_memset\n 116717 60.3061 postgres postgres \ngmm_memset [self]\n-------------------------------------------------------------------------------\n20304 10.4908 oprofiled oprofiled (no\nsymbols)\n 20304 10.4908 oprofiled oprofiled \n(no symbols) [self]\n-------------------------------------------------------------------------------\n 4587 2.3700 vmlinux vmlinux \nrest_init\n 6627 3.4241 vmlinux vmlinux \ncpu_idle\n11214 5.7941 vmlinux vmlinux \ndefault_idle\n 11214 5.7941 vmlinux vmlinux \ndefault_idle [self]\n-------------------------------------------------------------------------------\n 16151 8.3450 vmlinux vmlinux \nrest_init\n9524 4.9209 vmlinux vmlinux cpu_idle\n 9524 4.9209 vmlinux vmlinux \ncpu_idle [self]\n 6627 3.4241 vmlinux vmlinux \ndefault_idle\n-------------------------------------------------------------------------------\n5111 2.6408 oprofile oprofile (no\nsymbols)\n 5111 2.6408 oprofile oprofile \n(no symbols) [self]\n\noprofile shows dopr is making most of the calls to memset.\n\nAre these results typical? If memset is indeed using over 50% of the CPU\nsomething seems seriously wrong.\n\nShould I be expecting more performance from this hardware than what I'm\ngetting in these tests?\n\nRegards,\nGeorge McCollister", "msg_date": "Fri, 12 Sep 2008 09:13:13 -0500", "msg_from": "George McCollister <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Performance on CPU limited Platforms" }, { "msg_contents": "George McCollister wrote:\n> I'm trying to optimize postgres performance on a headless solid state\n> hardware platform (no fans or disks). I have the database stored on a\n> USB 2.0 flash drive (hdparm benchmarks reads at 10 MB/s). Performance is\n> limited by the 533Mhz CPU.\n>\n> Hardware:\n> IXP425 XScale (big endian) 533Mhz 64MB RAM\n> USB 2.0 Flash Drive\n> \n\nHmmm ARM/XScale, 64MB. Just curious. Are you running a Postgres server \non a pocket pc or possibly a cell phone?\n\n> \n> Software:\n> Linux 2.6.21.4\n> postgres 8.2.5\n>\n> I created a fresh database using initdb, then added one table.\n>\n> Here is the create table:\n> CREATE TABLE archivetbl\n> (\n> \"DateTime\" timestamp without time zone,\n> \"StationNum\" smallint,\n> \"DeviceDateTime\" timestamp without time zone,\n> \"DeviceNum\" smallint,\n> \"Tagname\" character(64),\n> \"Value\" double precision,\n> \"Online\" boolean\n> )\n> WITH (OIDS=FALSE);\n> ALTER TABLE archivetbl OWNER TO novatech;\n>\n> I've attached my postgresql.conf\n>\n> I populated the table with 38098 rows.\n>\n> I'm doing this simple query:\n> select * from archivetbl;\n> \n> It takes 79 seconds to complete the query (when postgres is compiled\n> with -O2). I'm running the query from pgadmin3 over TCP/IP.\n>\n> top shows CPU usage is at 100% with 95% being in userspace. 
oprofile\n> shows memset is using 58% of the CPU cycles!\n>\n> CPU: ARM/XScale PMU2, speed 0 MHz (estimated)\n> Counted CPU_CYCLES events (clock cycles counter) with a unit mask of\n> 0x00 (No unit mask) count 100000\n> samples % app name symbol name\n> 288445 57.9263 libc-2.5.so memset\n> 33273 6.6820 vmlinux default_idle\n> 27910 5.6050 vmlinux cpu_idle\n> 12611 2.5326 vmlinux schedule\n> 8803 1.7678 libc-2.5.so __printf_fp\n> 7448 1.4957 postgres dopr\n> 6404 1.2861 libc-2.5.so vfprintf\n> 6398 1.2849 oprofiled (no symbols)\n> 4992 1.0025 postgres __udivdi3\n> 4818 0.9676 vmlinux run_timer_softirq\n>\n>\n> I was having trouble getting oprofile to give a back trace for memset\n> (probably because my libc is optimized). So I redefined MemSet to call this:\n> void * gmm_memset(void *s, int c, size_t n)\n> {\n> int i=0;\n> unsigned char * p = (unsigned char *)s;\n> for(i=0; i<n; i++)\n> {\n> p[i]=0;\n> }\n> return s;\n> }\n>\n> Here are the oprofile results for the same select query.\n>\n> CPU: ARM/XScale PMU2, speed 0 MHz (estimated)\n> Counted CPU_CYCLES events (clock cycles counter) with a unit mask of\n> 0x00 (No unit mask) count 100000\n> samples % image name app name \n> symbol name\n> -------------------------------------------------------------------------------\n> 1 5.2e-04 postgres postgres \n> LockAcquire\n> 1 5.2e-04 postgres postgres \n> set_ps_display\n> 20 0.0103 postgres postgres \n> pg_vsprintf\n> 116695 60.2947 postgres postgres dopr\n> 116717 60.3061 postgres postgres \n> gmm_memset\n> 116717 60.3061 postgres postgres \n> gmm_memset [self]\n> -------------------------------------------------------------------------------\n> 20304 10.4908 oprofiled oprofiled (no\n> symbols)\n> 20304 10.4908 oprofiled oprofiled \n> (no symbols) [self]\n> -------------------------------------------------------------------------------\n> 4587 2.3700 vmlinux vmlinux \n> rest_init\n> 6627 3.4241 vmlinux vmlinux \n> cpu_idle\n> 11214 5.7941 vmlinux vmlinux \n> default_idle\n> 11214 5.7941 vmlinux vmlinux \n> default_idle [self]\n> -------------------------------------------------------------------------------\n> 16151 8.3450 vmlinux vmlinux \n> rest_init\n> 9524 4.9209 vmlinux vmlinux cpu_idle\n> 9524 4.9209 vmlinux vmlinux \n> cpu_idle [self]\n> 6627 3.4241 vmlinux vmlinux \n> default_idle\n> -------------------------------------------------------------------------------\n> 5111 2.6408 oprofile oprofile (no\n> symbols)\n> 5111 2.6408 oprofile oprofile \n> (no symbols) [self]\n>\n> oprofile shows dopr is making most of the calls to memset.\n>\n> Are these results typical? If memset is indeed using over 50% of the CPU\n> something seems seriously wrong.\n>\n> Should I be expecting more performance from this hardware than what I'm\n> getting in these tests?\n>\n> Regards,\n> George McCollister\n>\n> \n>\n> \n\n\n-- \nH. Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Fri, 12 Sep 2008 14:07:09 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance on CPU limited Platforms" }, { "msg_contents": "On Fri, Sep 12, 2008 at 12:07 PM, H. Hall <[email protected]> wrote:\n>\n> Hmmm ARM/XScale, 64MB. Just curious. 
Are you running a Postgres server on\n> a pocket pc or possibly a cell phone?\n>\n\nI would think SQLite would be a better choice on that kind of thing.\nUnless you're trying to run really complex queries maybe.\n\n-- When fascism comes to America, it will be draped in a flag and\ncarrying a cross - Sinclair Lewis\n", "msg_date": "Sat, 18 Oct 2008 23:52:28 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance on CPU limited Platforms" } ]
[ { "msg_contents": "Hi,\n\nIs it possible to put Statement timeout at User Level.\nLike If i have a user like 'guest', Can i put a statement timeout for it.\n\n-- \nRegards\nGauri\n\nHi,Is it possible to put Statement timeout at User Level.Like If i have a user like 'guest', Can i put a statement timeout for it.-- RegardsGauri", "msg_date": "Thu, 18 Sep 2008 10:37:18 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Statement Timeout at User Level" }, { "msg_contents": "Gauri Kanekar wrote:\n> Is it possible to put Statement timeout at User Level.\n> Like If i have a user like 'guest', Can i put a statement \n> timeout for it.\n\nIf only all problems were that easily solved!\n\nALTER ROLE guest SET statement_timeout=10000;\n\nThis will cause all statements longer than 10 seconds and issued\nby \"guest\" to be aborted.\n\nYours,\nLaurenz Albe\n", "msg_date": "Thu, 18 Sep 2008 09:34:50 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Statement Timeout at User Level" } ]
[ { "msg_contents": "List,\n\nI'm a bit confused as to why this query writes to the disk:\nSELECT count(*)\nFROM bigbigtable\nWHERE customerid IN (SELECT customerid FROM\nsmallcustomertable)\nAND x !=\n'special'\n\nAND y IS NULL\n\nIt writes a whole bunch of data to the disk that has the tablespace where\nbigbigtable lives as well as writes a little data to the main disk. It\nlooks like its is actually WAL logging these writes.\n\nHere is the EXPLAIN ANALYZE:\nAggregate (cost=46520194.16..46520194.17 rows=1 width=0) (actual\ntime=4892191.995..4892191.995 rows=1 loops=1)\n -> Hash IN Join (cost=58.56..46203644.01 rows=126620058 width=0) (actual\ntime=2.938..4840349.573 rows=79815986 loops=1)\n Hash Cond: ((bigbigtable.customerid)::text =\n(smallcustomertable.customerid)::text)\n -> Seq Scan on bigbigtable (cost=0.00..43987129.60 rows=126688839\nwidth=11) (actual time=0.011..4681248.143 rows=128087340 loops=1)\n Filter: ((y IS NULL) AND ((x)::text <> 'special'::text))\n -> Hash (cost=35.47..35.47 rows=1847 width=18) (actual\ntime=2.912..2.912 rows=1847 loops=1)\n -> Seq Scan on smallcustomertable (cost=0.00..35.47\nrows=1847 width=18) (actual time=0.006..1.301 rows=1847 loops=1)\nTotal runtime: 4892192.086 ms\n\nCan someone point me to some documentation as to why this writes to disk?\n\nThanks,\nNik\n\nList,I'm a bit confused as to why this query writes to the disk:SELECT count(*)FROM    bigbigtableWHERE customerid IN (SELECT customerid FROM smallcustomertable)                                                         \nAND x != 'special'                                                                                AND y IS NULL  It writes a whole bunch of data to the disk that has the tablespace where bigbigtable lives as well as writes a little data to the main disk.  It looks like its is actually WAL logging these writes.\nHere is the EXPLAIN ANALYZE:Aggregate  (cost=46520194.16..46520194.17 rows=1 width=0) (actual time=4892191.995..4892191.995 rows=1 loops=1)  ->  Hash IN Join  (cost=58.56..46203644.01 rows=126620058 width=0) (actual time=2.938..4840349.573 rows=79815986 loops=1)\n        Hash Cond: ((bigbigtable.customerid)::text = (smallcustomertable.customerid)::text)        ->  Seq Scan on bigbigtable  (cost=0.00..43987129.60 rows=126688839 width=11) (actual time=0.011..4681248.143 rows=128087340 loops=1)\n              Filter: ((y IS NULL) AND ((x)::text <> 'special'::text))        ->  Hash  (cost=35.47..35.47 rows=1847 width=18) (actual time=2.912..2.912 rows=1847 loops=1)              ->  Seq Scan on smallcustomertable  (cost=0.00..35.47 rows=1847 width=18) (actual time=0.006..1.301 rows=1847 loops=1)\nTotal runtime: 4892192.086 msCan someone point me to some documentation as to why this writes to disk?Thanks,Nik", "msg_date": "Thu, 18 Sep 2008 13:30:42 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why does this query write to the disk?" }, { "msg_contents": ">>> \"Nikolas Everett\" <[email protected]> wrote: \n \n> I'm a bit confused as to why this query writes to the disk:\n> SELECT count(*)\n> FROM bigbigtable\n> WHERE customerid IN (SELECT customerid FROM\n> smallcustomertable)\n> AND x !=\n> 'special'\n> \n> AND y IS NULL\n> \n> It writes a whole bunch of data to the disk that has the tablespace\nwhere\n> bigbigtable lives as well as writes a little data to the main disk. 
\nIt\n> looks like its is actually WAL logging these writes.\n \nIt's probably writing hint bits to improve performance of subsequent\naccess to the table. The issue is discussed here:\n \nhttp://wiki.postgresql.org/wiki/Hint_Bits\n \n-Kevin\n", "msg_date": "Thu, 18 Sep 2008 12:49:48 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does this query write to the disk?" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> \"Nikolas Everett\" <[email protected]> wrote: \n>> I'm a bit confused as to why this query writes to the disk:\n \n> It's probably writing hint bits to improve performance of subsequent\n> access to the table. The issue is discussed here:\n> http://wiki.postgresql.org/wiki/Hint_Bits\n\nHint-bit updates wouldn't be WAL-logged. If the table has been around a\nlong time, it might be freezing old tuples, which *would* be WAL-logged\n(since 8.2 or so) --- but that would be a one-time, non-repeatable\nbehavior. How sure are you that there was WAL output?\n\nWhat I was thinking was more likely was that the hash table for the hash\njoin was spilling out to temp files. That wouldn't be WAL-logged\neither, but depending on your tablespace setup it might result in I/O on\nsome other disk than the table proper.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Sep 2008 14:13:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does this query write to the disk? " }, { "msg_contents": "How big is your work_mem setting, and is this behavior affected by its size?\n\nYou can increase the work_mem on an individual connection before the test.\n\nSimply:\n\nset work_mem = '100MB'\n\nto set it to 100 Megabytes. If your issue is spilling data out of work_mem\nto the temp storage, this setting will affect that.\n\nOn Thu, Sep 18, 2008 at 10:30 AM, Nikolas Everett <[email protected]> wrote:\n\n> List,\n>\n> I'm a bit confused as to why this query writes to the disk:\n> SELECT count(*)\n> FROM bigbigtable\n> WHERE customerid IN (SELECT customerid FROM\n> smallcustomertable)\n> AND x !=\n> 'special'\n>\n> AND y IS NULL\n>\n> It writes a whole bunch of data to the disk that has the tablespace where\n> bigbigtable lives as well as writes a little data to the main disk. It\n> looks like its is actually WAL logging these writes.\n>\n> Here is the EXPLAIN ANALYZE:\n> Aggregate (cost=46520194.16..46520194.17 rows=1 width=0) (actual\n> time=4892191.995..4892191.995 rows=1 loops=1)\n> -> Hash IN Join (cost=58.56..46203644.01 rows=126620058 width=0)\n> (actual time=2.938..4840349.573 rows=79815986 loops=1)\n> Hash Cond: ((bigbigtable.customerid)::text =\n> (smallcustomertable.customerid)::text)\n> -> Seq Scan on bigbigtable (cost=0.00..43987129.60 rows=126688839\n> width=11) (actual time=0.011..4681248.143 rows=128087340 loops=1)\n> Filter: ((y IS NULL) AND ((x)::text <> 'special'::text))\n> -> Hash (cost=35.47..35.47 rows=1847 width=18) (actual\n> time=2.912..2.912 rows=1847 loops=1)\n> -> Seq Scan on smallcustomertable (cost=0.00..35.47\n> rows=1847 width=18) (actual time=0.006..1.301 rows=1847 loops=1)\n> Total runtime: 4892192.086 ms\n>\n> Can someone point me to some documentation as to why this writes to disk?\n>\n> Thanks,\n> Nik\n>\n\nHow big is your work_mem setting, and is this behavior affected by its size?You can increase the work_mem on an individual connection before the test.Simply:set work_mem = '100MB'\nto set it to 100 Megabytes.  
If your issue is spilling data out of work_mem to the temp storage, this setting will affect that.On Thu, Sep 18, 2008 at 10:30 AM, Nikolas Everett <[email protected]> wrote:\nList,I'm a bit confused as to why this query writes to the disk:\nSELECT count(*)FROM    bigbigtableWHERE customerid IN (SELECT customerid FROM smallcustomertable)                                                         \nAND x != 'special'                                                                                AND y IS NULL  It writes a whole bunch of data to the disk that has the tablespace where bigbigtable lives as well as writes a little data to the main disk.  It looks like its is actually WAL logging these writes.\nHere is the EXPLAIN ANALYZE:Aggregate  (cost=46520194.16..46520194.17 rows=1 width=0) (actual time=4892191.995..4892191.995 rows=1 loops=1)  ->  Hash IN Join  (cost=58.56..46203644.01 rows=126620058 width=0) (actual time=2.938..4840349.573 rows=79815986 loops=1)\n\n        Hash Cond: ((bigbigtable.customerid)::text = (smallcustomertable.customerid)::text)        ->  Seq Scan on bigbigtable  (cost=0.00..43987129.60 rows=126688839 width=11) (actual time=0.011..4681248.143 rows=128087340 loops=1)\n\n              Filter: ((y IS NULL) AND ((x)::text <> 'special'::text))        ->  Hash  (cost=35.47..35.47 rows=1847 width=18) (actual time=2.912..2.912 rows=1847 loops=1)              ->  Seq Scan on smallcustomertable  (cost=0.00..35.47 rows=1847 width=18) (actual time=0.006..1.301 rows=1847 loops=1)\n\nTotal runtime: 4892192.086 msCan someone point me to some documentation as to why this writes to disk?Thanks,Nik", "msg_date": "Thu, 18 Sep 2008 11:30:23 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does this query write to the disk?" }, { "msg_contents": "Under what conditions does EXPLAIN ANALYZE report spilling work_mem to\ndisk? When does it not report work_mem or other overflow to disk?\nI know that a planned disk-sort shows up. I have also seen it report a\nhash-agg on disk, but this was a while ago and rather difficult to reproduce\nand I'm somewhat confident I have seen it spill to temp disk without\nreporting it in EXPLAIN ANALYZE, but I could be wrong.\n\nOn Thu, Sep 18, 2008 at 11:13 AM, Tom Lane <[email protected]> wrote:\n\n> \"Kevin Grittner\" <[email protected]> writes:\n> > \"Nikolas Everett\" <[email protected]> wrote:\n> >> I'm a bit confused as to why this query writes to the disk:\n>\n> > It's probably writing hint bits to improve performance of subsequent\n> > access to the table. The issue is discussed here:\n> > http://wiki.postgresql.org/wiki/Hint_Bits\n>\n> Hint-bit updates wouldn't be WAL-logged. If the table has been around a\n> long time, it might be freezing old tuples, which *would* be WAL-logged\n> (since 8.2 or so) --- but that would be a one-time, non-repeatable\n> behavior. How sure are you that there was WAL output?\n>\n> What I was thinking was more likely was that the hash table for the hash\n> join was spilling out to temp files. That wouldn't be WAL-logged\n> either, but depending on your tablespace setup it might result in I/O on\n> some other disk than the table proper.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nUnder what conditions does EXPLAIN ANALYZE report spilling work_mem to disk?  
When does it not report work_mem or other overflow to disk?I know that a planned disk-sort shows up.  I have also seen it report a hash-agg on disk, but this was a while ago and rather difficult to reproduce and I'm somewhat confident I have seen it spill to temp disk without reporting it in EXPLAIN ANALYZE, but I could be wrong.\nOn Thu, Sep 18, 2008 at 11:13 AM, Tom Lane <[email protected]> wrote:\n\"Kevin Grittner\" <[email protected]> writes:\n> \"Nikolas Everett\" <[email protected]> wrote:\n>> I'm a bit confused as to why this query writes to the disk:\n\n> It's probably writing hint bits to improve performance of subsequent\n> access to the table.  The issue is discussed here:\n> http://wiki.postgresql.org/wiki/Hint_Bits\n\nHint-bit updates wouldn't be WAL-logged.  If the table has been around a\nlong time, it might be freezing old tuples, which *would* be WAL-logged\n(since 8.2 or so) --- but that would be a one-time, non-repeatable\nbehavior.  How sure are you that there was WAL output?\n\nWhat I was thinking was more likely was that the hash table for the hash\njoin was spilling out to temp files.  That wouldn't be WAL-logged\neither, but depending on your tablespace setup it might result in I/O on\nsome other disk than the table proper.\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 18 Sep 2008 11:33:31 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does this query write to the disk?" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> http://wiki.postgresql.org/wiki/Hint_Bits\n\n\nOn Thu, Sep 18, 2008 at 2:13 PM, Tom Lane <[email protected]> wrote:\n\n> freezing old tuples\n\nhash join was spilling out to temp files\n>\n\nSince this was a new table and the writes to the table's disk were very\nlarge it was probably the hint bits.\n\nThe small table was about 1300 rows and my work_mem was 100MB so the writes\nto the main disk probably was not hash spillage. They were tiny, so I'm not\nworried about them.\n\nThanks very much,\nNik\n\n\"Kevin Grittner\" <[email protected]> writes:\n> http://wiki.postgresql.org/wiki/Hint_Bits\nOn Thu, Sep 18, 2008 at 2:13 PM, Tom Lane <[email protected]> wrote:\nfreezing old tuples hash join was spilling out to temp files\nSince this was a new table and the writes to the table's disk were very large it was probably the hint bits.The small\ntable was about 1300 rows and my work_mem was 100MB so the writes to the main disk probably was not hash spillage.  They were tiny, so I'm not worried about  them.Thanks very much,Nik", "msg_date": "Thu, 18 Sep 2008 14:44:27 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does this query write to the disk?" }, { "msg_contents": "\"Scott Carey\" <[email protected]> writes:\n> Under what conditions does EXPLAIN ANALYZE report spilling work_mem to\n> disk?\n\nFor hash joins, it doesn't. You might be thinking of the additional\nreporting we added for sorts recently; but there's no comparable\nlogging for hash ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Sep 2008 14:58:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does this query write to the disk? " } ]
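The thread above settles on two separate effects: one-time hint-bit (and possibly freeze) writes on a freshly loaded table, and possible hash-join spillage to temp files. A small sketch of how one might separate the two, reusing the table and work_mem value named in the thread; the VACUUM step is an assumption about paying the one-time write cost up front, not something the posters stated:

    -- raise work_mem for this session only, so the hash join is
    -- less likely to spill its hash table to temp files
    SET work_mem = '100MB';

    -- pay the one-time hint-bit (and any freeze) write cost right after
    -- the bulk load, instead of during the first large SELECT
    VACUUM ANALYZE bigbigtable;

    -- re-run the original query; a repeat run should show far less write I/O
    SELECT count(*)
    FROM bigbigtable
    WHERE customerid IN (SELECT customerid FROM smallcustomertable)
      AND x != 'special'
      AND y IS NULL;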
[ { "msg_contents": "I have two identical queries except for the date range. In the first case,\nwith the wider date range, the correct (I believe) index is used. In the\nsecond case where the date range is smaller a different index is used and a\nless efficient plan is chosen. In the second query the problem seems to be\nCPU resoures; while it is running 1 core of the CPU is 100% busy.\n\nNote, if I drop the ad_log_date index then this query is always fast, but\nsome other queries I do require that index.\n\nSo, What can I do to encourage Postgres to use the first index even when the\ndate range is smaller.\n\n\n\n\n# explain analyze SELECT name FROM players AS foo WHERE EXISTS (SELECT 1\nFROM ad_log WHERE player = foo.id AND date(start_time) BETWEEN\nE'2008-09-14' AND E'2008-09-18' LIMIT 1) ORDER BY name;\n QUERY\nPLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-\n Sort (cost=1573.74..1574.31 rows=230 width=13) (actual time=28.421..28.505\nrows=306 loops=1)\n Sort Key: foo.name\n Sort Method: quicksort Memory: 28kB\n -> Seq Scan on players foo (cost=0.00..1564.72 rows=230 width=13)\n(actual time=0.104..27.876 rows=306 loops=1)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.01..3.39 rows=1 width=0) (actual\ntime=0.058..0.058 rows=1 loops=460)\n -> Index Scan using ad_log_player_date on ad_log\n(cost=0.01..34571.03 rows=10228 width=0) (actual time=0.056..0.056 rows=1\nloops=460)\n Index Cond: ((player = $0) AND (date(start_time) >=\n'2008-09-14'::date) AND (date(start_time) <= '2008-09-18'::date))\n Total runtime: 28.623 ms\n(10 rows)\n\n\n\n# explain analyze SELECT name FROM players AS foo WHERE EXISTS (SELECT 1\nFROM ad_log WHERE player = foo.id AND date(start_time) BETWEEN\nE'2008-09-18' AND E'2008-09-18' LIMIT 1) ORDER BY name;\n QUERY PLAN\n----------------------------------------------------------------------------\n-----------------------------------------------------------------\n Index Scan using players_name_key on players foo (cost=0.00..8376.84\nrows=230 width=13) (actual time=813.695..143452.810 rows=301 loops=1)\n Filter: (subplan)\n SubPlan\n -> Limit (cost=0.01..18.14 rows=1 width=0) (actual\ntime=311.846..311.846 rows=1 loops=460)\n -> Index Scan using ad_log_date on ad_log (cost=0.01..18.14\nrows=1 width=0) (actual time=311.844..311.844 rows=1 loops=460)\n Index Cond: ((date(start_time) >= '2008-09-18'::date) AND\n(date(start_time) <= '2008-09-18'::date))\n Filter: (player = $0)\n Total runtime: 143453.100 ms\n(8 rows)\n\n\n\nThanks,\n\n--Rainer\n\n", "msg_date": "Fri, 19 Sep 2008 09:34:32 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "why does this use the wrong index?" }, { "msg_contents": "> So, What can I do to encourage Postgres to use the first index even when the\n> date range is smaller.\n> \n\nIt looks like PostgreSQL is estimating the selectivity of your date\nranges poorly. 
In the second (bad) plan it estimates that the index scan\nwith the filter will return 1 row (and that's probably because it\nestimates that the date range you specify will match only one row).\n\nThis leads PostgreSQL to choose the narrower index because, if the index\nscan is only going to return one row anyway, it might as well scan the\nsmaller index.\n\nWhat's the n_distinct for start_time?\n\n=> select n_distinct from pg_stats where tablename='ad_log' and\nattname='start_time';\n\nIf n_distinct is near -1, that would explain why it thinks that it will\nonly get one result.\n\nBased on the difference between the good index scan (takes 0.056ms per\nloop) and the bad index scan with the filter (311ms per loop), the\n\"player\" condition must be very selective, but PostgreSQL doesn't care\nbecause it already thinks that the date range is selective.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 19 Sep 2008 11:25:28 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why does this use the wrong index?" }, { "msg_contents": "On Fri, 2008-09-19 at 11:25 -0700, Jeff Davis wrote:\n> What's the n_distinct for start_time?\n\nActually, I take that back. Apparently, PostgreSQL can't change \"x\nBETWEEN y AND y\" into \"x=y\", so PostgreSQL can't use n_distinct at all.\n\nThat's your problem. If it's one day only, change it to equality and it\nshould be fine.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 19 Sep 2008 11:43:24 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why does this use the wrong index?" } ]
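A sketch of the rewrite suggested above, using equality instead of "BETWEEN y AND y" for the single-day case so the planner can estimate it sensibly; table and column names are taken from the original query:

    EXPLAIN ANALYZE
    SELECT name
    FROM players AS foo
    WHERE EXISTS (SELECT 1
                  FROM ad_log
                  WHERE player = foo.id
                    AND date(start_time) = DATE '2008-09-18'
                  LIMIT 1)
    ORDER BY name;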
[ { "msg_contents": "Hi there,\n\nI am still concerned about this problem, because there is a big differences \nbetween the two cases, and I don't know how to identify the problem. Can \nanybody help me, please ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Mon, 22 Sep 2008 23:40:15 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different execution plan" }, { "msg_contents": "On Mon, Sep 22, 2008 at 2:40 PM, Sabin Coanda\n<[email protected]> wrote:\n> Hi there,\n>\n> I am still concerned about this problem, because there is a big differences\n> between the two cases, and I don't know how to identify the problem. Can\n> anybody help me, please ?\n\nSure, first step, if you can provide the following:\n\npg version\nos and version\noutput of explain analyze of each query (one good query, one bad)\n\nIn the meantime, make sure you've vacuum analyzed your db.\n", "msg_date": "Mon, 22 Sep 2008 15:19:07 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different execution plan" }, { "msg_contents": "Hi Scott,\n\nI think it would be nice to log the reasons why an explain analyze chooses a \nspecific way or another for an execution plan. This would avoid wasting time \nto find the source of these decisions from the existing logs.\n\nIs it possible ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Wed, 24 Sep 2008 10:42:24 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different execution plan" }, { "msg_contents": "I use postgresql-8.2-506.jdbc3.jar, maybe helps\n\nSabin \n\n\n", "msg_date": "Wed, 24 Sep 2008 12:57:15 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different execution plan" }, { "msg_contents": "On Wed, 24 Sep 2008, Sabin Coanda wrote:\n> I think it would be nice to log the reasons why an explain analyze chooses a\n> specific way or another for an execution plan. This would avoid wasting time\n> to find the source of these decisions from the existing logs.\n\nThat would probably fill up the logs pretty quickly.\n\nWhat may be really useful would be some sort of \"extended explain\" option. \nAt the moment, explain prints out the best query plan found. Sometimes it \nwould be nice if it printed out all the plans it considered, so we don't \nhave to go fiddling with enable_xxx to find alternatives.\n\nMatthew\n\n-- \nI suppose some of you have done a Continuous Maths course. Yes? Continuous\nMaths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer\n", "msg_date": "Wed, 24 Sep 2008 12:06:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different execution plan" } ]
[ { "msg_contents": "Hello,\n\nI'm running into performance issues with various queries on a \nPostgreSQL database (of books). I'm having trouble understanding the \nthinking behind the query planner in this scenario:\nhttp://dpaste.com/hold/80101/\n(also attached at bottom of email)\n\nRelation sizes:\ndimension_books: 1998766 rows\ndimension_library_books: 10397943 rows\nVersion: PostgreSQL 8.3.3 on x86_64-pc-linux-gnu, compiled by GCC cc \n(GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7)\n\nWhy does the query planner change when adding OFFSET? Is there a way \nto force it to use the first plan? The second plan is relatively \nslower than the first. I've run ANALYZE recently and played around \nwith different sets of indexes, but I believe my knowledge here is \nlimited.\n\ncount() is equally as slow (SELECT count(DISTINCT \n\"dimension_book\".\"call\")...). Eventually I want to paginate the \nresults, kind of like the PostgreSQL Archive search:\n\nResults 1-20 of more than 1000.\nSearching in 706,529 pages took 0.13221 seconds.\nResult pages: 1 2 3 4 5 6 7 8 9 10 11 ... Next\n\nI assume it implements something something along these lines?\n\nThanks,\n colin\n\n/******************************************************\n Table \"public.dimension_library_books\"\n Column | Type | Modifiers\n------------+--------- \n+----------------------------------------------------------------------\n id | integer | not null default \nnextval('dimension_library_books_id_seq'::regclass)\n book_id | integer | not null\n library_id | integer | not null\nIndexes:\n \"dimension_library_books_pkey\" PRIMARY KEY, btree (id)\n \"dimension_library_books_book_id\" btree (book_id)\n \"dimension_library_books_library_id\" btree (library_id)\nForeign-key constraints:\n \"dimension_library_books_book_id_fkey\" FOREIGN KEY (book_id) \nREFERENCES dimension_book(id) DEFERRABLE INITIALLY DEFERRED\n \"dimension_library_books_library_id_fkey\" FOREIGN KEY \n(library_id) REFERENCES dimension_library(id) DEFERRABLE INITIALLY \nDEFERRED\n\n Table \"public.dimension_book\"\n Column | Type | Modifiers\n----------+------------------------ \n+-------------------------------------------------------------\n id | integer | not null default \nnextval('dimension_book_id_seq'::regclass)\n acno | character varying(255) |\n title | character varying(255) |\n allusage | double precision |\n dousage | double precision |\n comusage | double precision |\n year | integer |\n language | character varying(255) |\n bclass | character varying(255) |\n call | character varying(255) | not null\nIndexes:\n \"dimension_book_pkey\" PRIMARY KEY, btree (id)\n \"call_idx\" btree (call)\n******************************************************/\n\ndimension=# EXPLAIN ANALYZE\nSELECT DISTINCT ON (\"dimension_book\".\"call\")\n \"dimension_book\".\"title\"\nFROM \"dimension_book\"\n INNER JOIN \"dimension_library_books\"\n ON (\"dimension_book\".\"id\" = \n\"dimension_library_books\".\"book_id\")\nWHERE (\"dimension_book\".\"call\" >= 'PA0000'\n AND \"dimension_library_books\".\"library_id\" IN (12,15,20))\nORDER BY \"dimension_book\".\"call\" ASC\nLIMIT 10;\n QUERY \n PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..19141.37 rows=10 width=105) (actual \ntime=0.349..1.874 rows=10 loops=1)\n -> Unique (cost=0.00..15389657.66 rows=8040 width=105) (actual \ntime=0.348..1.865 rows=10 loops=1)\n -> Nested Loop 
(cost=0.00..15389443.94 rows=85489 \nwidth=105) (actual time=0.344..1.832 rows=14 loops=1)\n -> Index Scan using call_idx on dimension_book \n(cost=0.00..311156.04 rows=806644 width=105) (actual time=0.118..0.452 \nrows=133 loops=1)\n Index Cond: ((call)::text >= 'PA0000'::text)\n -> Index Scan using dimension_library_books_book_id \non dimension_library_books (cost=0.00..18.61 rows=7 width=4) (actual \ntime=0.009..0.009 rows=0 loops=133)\n Index Cond: (dimension_library_books.book_id = \ndimension_book.id)\n Filter: (dimension_library_books.library_id = \nANY ('{12,15,20}'::integer[]))\n Total runtime: 1.947 ms\n(9 rows)\n\nTime: 3.157 ms\n\ndimension=# EXPLAIN ANALYZE\nSELECT DISTINCT ON (\"dimension_book\".\"call\")\n \"dimension_book\".\"title\"\nFROM \"dimension_book\"\n INNER JOIN \"dimension_library_books\"\n ON (\"dimension_book\".\"id\" = \n\"dimension_library_books\".\"book_id\")\nWHERE (\"dimension_book\".\"call\" >= 'PA0000'\n AND \"dimension_library_books\".\"library_id\" IN (12,15,20))\nORDER BY \"dimension_book\".\"call\" ASC\nLIMIT 10 OFFSET 100;\n QUERY \n PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=137122.20..137122.73 rows=10 width=105) (actual \ntime=3428.164..3428.180 rows=10 loops=1)\n -> Unique (cost=137116.88..137544.33 rows=8040 width=105) \n(actual time=3427.981..3428.159 rows=110 loops=1)\n -> Sort (cost=137116.88..137330.60 rows=85489 width=105) \n(actual time=3427.978..3428.039 rows=212 loops=1)\n Sort Key: dimension_book.call\n Sort Method: quicksort Memory: 34844kB\n -> Hash Join (cost=71699.90..133790.78 rows=85489 \nwidth=105) (actual time=1676.993..2624.015 rows=167419 loops=1)\n Hash Cond: (dimension_library_books.book_id = \ndimension_book.id)\n -> Bitmap Heap Scan on dimension_library_books \n(cost=3951.25..63069.35 rows=211789 width=4) (actual \ntime=112.627..581.554 rows=426156 loops=1)\n Recheck Cond: (library_id = ANY \n('{12,15,20}'::integer[]))\n -> Bitmap Index Scan on \ndimension_library_books_library_id (cost=0.00..3898.30 rows=211789 \nwidth=0) (actual time=95.030..95.030 rows=426156 loops=1)\n Index Cond: (library_id = ANY \n('{12,15,20}'::integer[]))\n -> Hash (cost=57665.60..57665.60 rows=806644 \nwidth=105) (actual time=1484.803..1484.803 rows=799876 loops=1)\n -> Seq Scan on dimension_book \n(cost=0.00..57665.60 rows=806644 width=105) (actual \ntime=37.391..1028.518 rows=799876 loops=1)\n Filter: ((call)::text >= \n'PA0000'::text)\n Total runtime: 3446.154 ms\n(15 rows)\n\nTime: 3447.396 ms\n\n\n-- \nColin Copeland\nCaktus Consulting Group, LLC\nP.O. Box 1454\nCarrboro, NC 27510\n(919) 951-0052\nhttp://www.caktusgroup.com\n\n", "msg_date": "Tue, 23 Sep 2008 17:22:28 -0400", "msg_from": "Colin Copeland <[email protected]>", "msg_from_op": true, "msg_subject": "query planner and scanning methods" }, { "msg_contents": "On Tue, Sep 23, 2008 at 2:22 PM, Colin Copeland <[email protected]> wrote:\n> dimension=# EXPLAIN ANALYZE\n> SELECT DISTINCT ON (\"dimension_book\".\"call\")\n> \"dimension_book\".\"title\"\n> FROM \"dimension_book\"\n> INNER JOIN \"dimension_library_books\"\n> ON (\"dimension_book\".\"id\" = \"dimension_library_books\".\"book_id\")\n> WHERE (\"dimension_book\".\"call\" >= 'PA0000'\n> AND \"dimension_library_books\".\"library_id\" IN (12,15,20))\n> ORDER BY \"dimension_book\".\"call\" ASC\n> LIMIT 10 OFFSET 100;\n\nYa offset works by scanning over the first 100 rows. 
When the offsets\nget big, it become a performance looser.\n\nYou can guarantee a faster index scan if you recall the last 10th\nvalue from the previous query. Then remove the offset predicate and\nreplace it with the following WHERE clause:\n\nWHERE ...\nAND dimension_book.call > _last_queried_10th_row-dimension_book_call,\n...\nLIMIT 10;\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n", "msg_date": "Tue, 23 Sep 2008 15:07:50 -0700", "msg_from": "\"Richard Broersma\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner and scanning methods" }, { "msg_contents": "\nOn Sep 23, 2008, at 6:07 PM, Richard Broersma wrote:\n\n> On Tue, Sep 23, 2008 at 2:22 PM, Colin Copeland <[email protected] \n> > wrote:\n>> dimension=# EXPLAIN ANALYZE\n>> SELECT DISTINCT ON (\"dimension_book\".\"call\")\n>> \"dimension_book\".\"title\"\n>> FROM \"dimension_book\"\n>> INNER JOIN \"dimension_library_books\"\n>> ON (\"dimension_book\".\"id\" = \n>> \"dimension_library_books\".\"book_id\")\n>> WHERE (\"dimension_book\".\"call\" >= 'PA0000'\n>> AND \"dimension_library_books\".\"library_id\" IN (12,15,20))\n>> ORDER BY \"dimension_book\".\"call\" ASC\n>> LIMIT 10 OFFSET 100;\n>\n> Ya offset works by scanning over the first 100 rows. When the offsets\n> get big, it become a performance looser.\n>\n> You can guarantee a faster index scan if you recall the last 10th\n> value from the previous query. Then remove the offset predicate and\n> replace it with the following WHERE clause:\n>\n> WHERE ...\n> AND dimension_book.call > _last_queried_10th_row-dimension_book_call,\n> ...\n> LIMIT 10;\n\nRichard,\n\nYes, I was thinking about this too. How would one generate a list of \npages from this, though? I can't predict values of dimension_book.call \n(it's not a serial number).\n\nThanks,\n colin\n\n-- \nColin Copeland\nCaktus Consulting Group, LLC\nP.O. Box 1454\nCarrboro, NC 27510\n(919) 951-0052\nhttp://www.caktusgroup.com\n\n", "msg_date": "Tue, 23 Sep 2008 18:25:17 -0400", "msg_from": "Colin Copeland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner and scanning methods" }, { "msg_contents": "On Tue, Sep 23, 2008 at 3:25 PM, Colin Copeland <[email protected]> wrote:\n\n>>> dimension=# EXPLAIN ANALYZE\n>>> SELECT DISTINCT ON (\"dimension_book\".\"call\")\n>>> \"dimension_book\".\"title\"\n>>> FROM \"dimension_book\"\n>>> INNER JOIN \"dimension_library_books\"\n>>> ON (\"dimension_book\".\"id\" = \"dimension_library_books\".\"book_id\")\n>>> WHERE (\"dimension_book\".\"call\" >= 'PA0000'\n>>> AND \"dimension_library_books\".\"library_id\" IN (12,15,20))\n>>> ORDER BY \"dimension_book\".\"call\" ASC\n>>> LIMIT 10 OFFSET 100;\n\n> Yes, I was thinking about this too. How would one generate a list of pages\n> from this, though? I can't predict values of dimension_book.call (it's not a\n> serial number).\n\nI can think of one very ugly way to get the first record for each\npage. Hopefully, you will not need to generate these list pages very\noften. 
Also, you could probably refine the following query in a\ncouple of ways to improve performance.\n\nSELECT A.\"dimension_book\".\"call\", SUM( B.\"dimension_book\".\"call\" ) AS\nOrderedRowNbr\nFROM ( your_above_query_without_the_limits ) AS A\nINNER JOIN ( your_above_query_without_the_limits ) AS B\nON A.\"dimension_book\".\"call\" >= B.\"dimension_book\".\"call\"\nORDER BY A.\"dimension_book\".\"call\"\nHAVING SUM( A.\"dimension_book\".\"call\" ) % 10 = 0;\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n", "msg_date": "Tue, 23 Sep 2008 15:57:31 -0700", "msg_from": "\"Richard Broersma\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner and scanning methods" }, { "msg_contents": "On Tue, Sep 23, 2008 at 3:57 PM, Richard Broersma\n<[email protected]> wrote:\n> SELECT A.\"dimension_book\".\"call\", SUM( B.\"dimension_book\".\"call\" ) AS\n> OrderedRowNbr\n> FROM ( your_above_query_without_the_limits ) AS A\n> INNER JOIN ( your_above_query_without_the_limits ) AS B\n> ON A.\"dimension_book\".\"call\" >= B.\"dimension_book\".\"call\"\n> ORDER BY A.\"dimension_book\".\"call\"\n> HAVING SUM( A.\"dimension_book\".\"call\" ) % 10 = 0;\n\nOops I just noticed that I used sum() where count() should be used and\nthat I forgot to include the group by clause. Other than that, I hope\nthe suggestion was at least halfway helpful.\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n", "msg_date": "Wed, 24 Sep 2008 12:22:49 -0700", "msg_from": "\"Richard Broersma\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner and scanning methods" } ]
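A sketch of the "remember the last value from the previous page" approach described in this thread, against the schema from the original post; the literal 'PA0100' stands in for the tenth call value returned by the previous page and is invented here purely for illustration:

    SELECT DISTINCT ON (dimension_book.call)
           dimension_book.title
    FROM dimension_book
    INNER JOIN dimension_library_books
            ON dimension_book.id = dimension_library_books.book_id
    WHERE dimension_book.call > 'PA0100'   -- last call value shown on the previous page
      AND dimension_library_books.library_id IN (12, 15, 20)
    ORDER BY dimension_book.call ASC
    LIMIT 10;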
[ { "msg_contents": "When displaying information about information about an user in our\nsite, I noticed an unreasonable slowdown. The culprit turned out to be\na trivial select, which determines the number of comments left by an\nuser:\n\nselect count(*) from comments where created_by=80 and status=1;\n\n\nThe comments table structure is below, and contains ~2 million\nrecords. I guess postgresql is unable to figure out exactly how to\nmake use of the index condition? As query plan shows, it got the\ncorrect answer, 15888, very fast: the rest of the 13 seconds it's just\nrechecking all the comments for some weird reasons. The weird thing\nis, SOMETIMES, for other created_by values, it seems to work fine, as\nshown below as well. Is this a bug, or I'm missing something here?\n\nThanks,\nEinars Lielmanis\n\n\n\n*** worse plan example:\n\netests=> explain analyze select count(*) from comments where\ncreated_by=80 and status=1;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=50947.51..50947.52 rows=1 width=0) (actual\ntime=13134.360..13134.361 rows=1 loops=1)\n -> Bitmap Heap Scan on comments (cost=331.42..50898.41 rows=19639\nwidth=0) (actual time=40.865..13124.116 rows=15888 loops=1)\n Recheck Cond: ((created_by = 80) AND (status = 1))\n -> Bitmap Index Scan on comments_created_by\n(cost=0.00..326.51 rows=19639 width=0) (actual time=33.547..33.547\nrows=15888 loops=1)\n Index Cond: (created_by = 80)\n Total runtime: 13134.688 ms\n\n\n\n*** better plan example:\n\netests=> explain analyze select count(*) from comments where\ncreated_by=81 and status=1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=854.10..854.11 rows=1 width=0) (actual\ntime=0.083..0.083 rows=1 loops=1)\n -> Index Scan using comments_created_by on comments\n(cost=0.00..853.44 rows=262 width=0) (actual time=0.057..0.076 rows=3\nloops=1)\n Index Cond: (created_by = 81)\n Total runtime: 0.121 ms\n\n\n\n*** structure\n\netests=> \\d comments;\n Table \"public.comments\"\n Column | Type |\n Modifiers\n-----------------+-----------------------------+---------------------------------------------------------------\n comment_id | integer | not null default\nnextval('comments_comment_id_seq'::regclass)\n message_wiki | text |\n message_html | text |\n status | integer |\n post_id | integer |\n created | timestamp without time zone |\n created_by | integer |\n\nIndexes:\n \"comments_pkey\" PRIMARY KEY, btree (comment_id)\n \"comments_created_by\" btree (created_by) WHERE status = 1\n \"comments_for_post\" btree (post_id, created) WHERE status = 1\nCheck constraints:\n \"comments_status_check\" CHECK (status = ANY (ARRAY[0, 1, 2]))\nForeign-key constraints:\n \"comments_created_by_fkey\" FOREIGN KEY (created_by) REFERENCES\nmembers(member_id)\n \"comments_thread_id_fkey\" FOREIGN KEY (post_id) REFERENCES posts(post_id)\n\nPostgreSQL 8.3.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.2.3\n(Ubuntu 4.2.3-2ubuntu7)\n", "msg_date": "Wed, 24 Sep 2008 03:53:49 +0300", "msg_from": "Einars <[email protected]>", "msg_from_op": true, "msg_subject": "Chaotically weird execution plan" }, { "msg_contents": "Einars wrote:\n> As query plan shows, it got the\n> correct answer, 15888, very fast: the rest of the 13 seconds it's just\n> rechecking all the comments for some weird reasons.\n\nI'd already 
written: \"If you need the test for status = 1, consider a\npartial index\" when I noticed your schema definition:\n\n> \"comments_created_by\" btree (created_by) WHERE status = 1\n\nI find it hard to guess why it's having to recheck the WHERE clause\ngiven the use of a partial index that should cover that nicely. I don't\nsee how it could be a visibility issue (in that I thought tuples were\nread and tested for visibility as part of the initial heap scan) but I\ndon't see what else it could be.\n\nIt seems odd to me, so I'm really interested in seeing what others have\nto say on this.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 24 Sep 2008 09:13:31 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Chaotically weird execution plan" }, { "msg_contents": "Einars <[email protected]> writes:\n> When displaying information about information about an user in our\n> site, I noticed an unreasonable slowdown. The culprit turned out to be\n> a trivial select, which determines the number of comments left by an\n> user:\n\nI don't see anything even slightly wrong here. Your first example pulls\n15888 rows from the table in 13134 ms. Your second example pulls 3\nrows from the table in 0.121 ms --- which I rather imagine is because\nthose three rows are already in cache; it would take a lot longer if it\nactually had to go to disk several times.\n\nThe planner's rowcount estimates are on target and it appears to be\nchoosing appropriate plans in each case. It just takes longer to\nprocess 5000 times as many rows ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Sep 2008 22:58:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Chaotically weird execution plan " }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> I'd already written: \"If you need the test for status = 1, consider a\n> partial index\" when I noticed your schema definition:\n\n>> \"comments_created_by\" btree (created_by) WHERE status = 1\n\n> I find it hard to guess why it's having to recheck the WHERE clause\n> given the use of a partial index that should cover that nicely.\n\nNo, that's operating as designed. A bitmap scan's RECHECK condition\nis only applied when the bitmap has become lossy due to memory\npressure. In that case we have to look at each row on each of the pages\nfingered by the index as containing possible matches ... and we'd better\ncheck the partial-index qual too, since maybe not all the rows on those\npages will satisfy it. In a plain indexscan there is no lossiness\ninvolved and so the partial-index qual need never be rechecked.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Sep 2008 23:01:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Chaotically weird execution plan " }, { "msg_contents": "Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n>> I'd already written: \"If you need the test for status = 1, consider a\n>> partial index\" when I noticed your schema definition:\n> \n>>> \"comments_created_by\" btree (created_by) WHERE status = 1\n> \n>> I find it hard to guess why it's having to recheck the WHERE clause\n>> given the use of a partial index that should cover that nicely.\n> \n> No, that's operating as designed. A bitmap scan's RECHECK condition\n> is only applied when the bitmap has become lossy due to memory\n> pressure. 
In that case we have to look at each row on each of the pages\n> fingered by the index as containing possible matches ... and we'd better\n> check the partial-index qual too, since maybe not all the rows on those\n> pages will satisfy it. In a plain indexscan there is no lossiness\n> involved and so the partial-index qual need never be rechecked.\n\nAah. Thanks very much for the explanation of that, the plan now makes sense.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 24 Sep 2008 11:12:26 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Chaotically weird execution plan" }, { "msg_contents": "Einars, you may be able to force a query plan similar to the first one, on\nthe second data set, if you decrease the random page cost or disable bitmap\nscans. If a regular index scan is faster than the bitmap scan with the same\nquery returning the same results, there may be some benefit that can be\ngained with tuning further. But it isn't likely and will depend on how\nlikely the pages are to be in RAM versus disk. If this is a big problem for\nyou, it may be worth trying however.\n\n\n\nOn Tue, Sep 23, 2008 at 7:58 PM, Tom Lane <[email protected]> wrote:\n\n> Einars <[email protected]> writes:\n> > When displaying information about information about an user in our\n> > site, I noticed an unreasonable slowdown. The culprit turned out to be\n> > a trivial select, which determines the number of comments left by an\n> > user:\n>\n> I don't see anything even slightly wrong here. Your first example pulls\n> 15888 rows from the table in 13134 ms. Your second example pulls 3\n> rows from the table in 0.121 ms --- which I rather imagine is because\n> those three rows are already in cache; it would take a lot longer if it\n> actually had to go to disk several times.\n>\n> The planner's rowcount estimates are on target and it appears to be\n> choosing appropriate plans in each case. It just takes longer to\n> process 5000 times as many rows ...\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nEinars, you may be able to force a query plan similar to the first one, on the second data set, if you decrease the random page cost or disable bitmap scans.  If a regular index scan is faster than the bitmap scan with the same query returning the same results, there may be some benefit that can be gained with tuning further.  But it isn't likely and will depend on how likely the pages are to be in RAM versus disk.  If this is a big problem for you, it may be worth trying however.\nOn Tue, Sep 23, 2008 at 7:58 PM, Tom Lane <[email protected]> wrote:\nEinars <[email protected]> writes:\n> When displaying information about information about an user in our\n> site, I noticed an unreasonable slowdown. The culprit turned out to be\n> a trivial select, which determines the number of comments left by an\n> user:\n\nI don't see anything even slightly wrong here.  Your first example pulls\n15888 rows from the table in 13134 ms.  Your second example pulls 3\nrows from the table in 0.121 ms --- which I rather imagine is because\nthose three rows are already in cache; it would take a lot longer if it\nactually had to go to disk several times.\n\nThe planner's rowcount estimates are on target and it appears to be\nchoosing appropriate plans in each case.  
It just takes longer to\nprocess 5000 times as many rows ...\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 23 Sep 2008 21:07:11 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Chaotically weird execution plan" } ]
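A sketch of the session-level experiments suggested above; the cost value is only an example, and both settings revert when the session ends (or on RESET):

    -- force consideration of a plain index scan instead of the bitmap scan
    SET enable_bitmapscan = off;
    EXPLAIN ANALYZE
    SELECT count(*) FROM comments WHERE created_by = 80 AND status = 1;

    -- or, keeping bitmap scans enabled, make random page reads look cheaper
    -- on the assumption that the table is mostly cached
    RESET enable_bitmapscan;
    SET random_page_cost = 2;   -- example value; the default is 4
    EXPLAIN ANALYZE
    SELECT count(*) FROM comments WHERE created_by = 80 AND status = 1;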
[ { "msg_contents": "Hi again,\n\nShould I use gjournal on FreeBSD 7? Or just soft updates?\n\nHere is my opinion: I suspect that gjournal would be much slower than\nsoft updates. Also gjournal is relatively new code, not very well\ntested. But gjournal is better when the system crashes. Although I have\nheard that sometimes gjournal will crash the system itself. There are\nmore pros for soft updates I would pefer that. But please let me\nknow if I'm wrong.\n\nThanks,\n\n Laszlo\n\n\n\n\n\n", "msg_date": "Wed, 24 Sep 2008 12:18:23 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "UFS 2: soft updates vs. gjournal (AKA: Choosing a filesystem 2.)" }, { "msg_contents": "> Should I use gjournal on FreeBSD 7? Or just soft updates?\n>\n> Here is my opinion: I suspect that gjournal would be much slower than\n> soft updates. Also gjournal is relatively new code, not very well\n> tested. But gjournal is better when the system crashes. Although I have\n> heard that sometimes gjournal will crash the system itself. There are\n> more pros for soft updates I would pefer that. But please let me\n> know if I'm wrong.\n\nIf softupdates suites your needs why not just use that? :-) Is\nperformance adequate with softupdates? I have a 103 GB db on FreeBSD\n7.0 and softupdates and it has survived one unplanned stop when we had\na power-outage lasting some hours.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Wed, 24 Sep 2008 12:39:23 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UFS 2: soft updates vs. gjournal (AKA: Choosing a filesystem 2.)" }, { "msg_contents": "On Wed, Sep 24, 2008 at 1:18 PM, Laszlo Nagy <[email protected]> wrote:\n> Here is my opinion: I suspect that gjournal would be much slower than\n> soft updates. Also gjournal is relatively new code, not very well\n> tested.\n\nIn some cases it's much faster than SU, in other a bit slower. :)\ngjournal is quiet \"old\" code, it's already more than two years around,\nand very stable. Haven't seen any gjournal related crash.\n\n\n\n\n\n-- \nregards,\nArtis Caune\n\n<----. CCNA\n<----|====================\n<----' didii FreeBSD\n", "msg_date": "Wed, 24 Sep 2008 13:52:44 +0300", "msg_from": "\"Artis Caune\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UFS 2: soft updates vs. gjournal (AKA: Choosing a filesystem 2.)" }, { "msg_contents": "\nAm 24.09.2008 um 12:18 schrieb Laszlo Nagy:\n\n> Should I use gjournal on FreeBSD 7? Or just soft updates?\nI'm using gjournal for 5 weeks now on my production server.\nThere are 4 journaled filesystems on a raid controller with\nBBU. pg uses 23GB out of 1.6TB. I can't see any performance impact or \nother issue.\nRecovery from an unclean shutdown took less than a minute as compared \nto half an hour with ufs2/softupdates/fsck.\nHowever I'm still unsure if I should enable async mounts of rhe fs \nwith tablespaces/WAL.\nAnybody giving me advice?\n\nAxel\n--- ar3\n\n", "msg_date": "Wed, 24 Sep 2008 17:05:34 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UFS 2: soft updates vs. gjournal (AKA: Choosing a filesystem 2.)" } ]
[ { "msg_contents": "\nHi all. I'm having an interesting time performance-wise with a set of indexes. \nAny clues as to what is going on or tips to fix it would be appreciated.\n\nMy application runs lots of queries along the lines of:\n\nSELECT * from table where field IN (.., .., ..);\n\nThere is always an index on the field in the table, but the table is not \nnecessarily clustered on the index.\n\nThere are six different indexes which the application hits quite hard, that I \nhave investigated. These are:\n\ngene__key_primaryidentifier (index size 20MB) (table size 72MB)\ngene__key_secondaryidentifier (index size 20MB) (table size 72MB)\nontologyterm__key_name_ontology (index size 2.5MB) (table size 10MB)\nprotein__key_primaryacc (index size 359MB) (table size 1.2GB)\npublication__key_pubmed (index size 12MB) (table size 48MB)\nsynonym__key_synonym (index size 3GB) (table size 3.5GB)\n\nThese six indexes all perform very differently.\n\nThese are the results from a few thousand queries on each index, from our \napplication logs. Generally, the same value is not used more than once in all \nthe queries.\n\n (1) (2) (3) (4) (5)\ngene__key_primaryidentifier 22 17 417 19 24.5\ngene__key_secondaryidentifier 8.5 5.3 21 2.4 3.9\nontologyterm__key_name_ontology 6.5 6.5 9.4 1.4 1.4\nprotein__key_primaryacc 73 8.1 164 2.2 20\npublication__key_pubmed 52 31 156 3.0 5.0\nsynonym__key_synonym 335 66 245 0.7 3.7\n\n(1) - Average number of values in the IN list.\n(2) - Average number of rows returned by the queries.\n(3) - Average time taken to execute the query, in ms.\n(4) - Average time per value in the IN lists.\n(5) - Average time per row returned.\n\nAll the queries are answered with a bitmap index scan on the correct \nindex.\n\nI have also plotted all the log entries on an XY graph, with number of \nelements in the IN list against time taken, which is at \nhttp://wakeling.homeip.net/~mnw21/slow_index1.png. It is clear that the \ngene__key_primaryidentifier index runs a lot slower than some of the other \nindexes.\n\nThe thing is, the table and the index are both small. The machine has 16GB of \nRAM, and its disc subsystem is a RAID array of 16 15krpm drives with a BBU \ncaching RAID controller. The entire table and index should be in the cache. Why \nit is taking 20 milliseconds per value is beyond me. Moreover, the synonym \nindex is MUCH larger, has bigger queries, and performs better.\n\nIf we concentrate on just this index, it seems that some queries are \nanswered very quickly indeed, while others are answered a lot slower. I \nhave plotted just this one index on an XY graph, with two colours for \nvalues in the IN list and actual rows returned, which is at \nhttp://wakeling.homeip.net/~mnw21/slow_index2.png. It is clear that there \nis a gap in the graph between the slow queries and the fast queries.\n\nIs there something else going on here which is slowing the system down? The \ntable is not bloated. There is quite heavy simultaneous write traffic, but \nlittle other read traffic, and the 16 spindles and BBU cache should take care \nof that quite happily. I don't think it's slow parsing the query, as it seems \nto manage on other queries in a millisecond or less.\n\nAny ideas welcome.\n\nAlso, the mailing list server doesn't seem to be able to cope with image \nattachments.\n\nMatthew\n\n-- \nimport oz.wizards.Magic;\n if (Magic.guessRight())... 
-- Computer Science Lecturer\n", "msg_date": "Thu, 25 Sep 2008 13:07:09 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Slow index" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Hi all. I'm having an interesting time performance-wise with a set of indexes. \n> Any clues as to what is going on or tips to fix it would be appreciated.\n\nAre the indexed columns all the same datatype? (And which type is it?)\n\nIt might be helpful to REINDEX the \"slow\" index. It's possible that\nwhat you're seeing is the result of a chance imbalance in the btree,\nwhich reindexing would fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Sep 2008 09:11:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index " }, { "msg_contents": "On Thu, 25 Sep 2008, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> Hi all. I'm having an interesting time performance-wise with a set of indexes.\n>> Any clues as to what is going on or tips to fix it would be appreciated.\n>\n> Are the indexed columns all the same datatype? (And which type is it?)\n\nGene.key_primaryidentifier is a text column\nGene.key_secondaryidentifier is a text column followed by an integer\nOntologyTerm.key_name_ontology is a text column followed by an integer\nProtein.key_primaryacc is a text column\nPublication.key_pubmed is a text column\nSynonym.key_synonym is an integer, two texts, and an integer\n\nIn most cases, the first text will be enough to uniquely identify the \nrelevant row.\n\n> It might be helpful to REINDEX the \"slow\" index. It's possible that\n> what you're seeing is the result of a chance imbalance in the btree,\n> which reindexing would fix.\n\nThat's unlikely to be the problem. When the application starts, the \ndatabase has just been loaded from a dump, so the indexes are completely \nfresh. The behaviour starts out bad, and does not get progressively worse.\n\nI don't know - is there likely to be any locking getting in the way? Our \nwrite traffic is fairly large chunks of binary COPY in. Could it be \nlocking the index while it adds the write traffic to it?\n\nMatthew\n\n-- \nMost books now say our sun is a star. But it still knows how to change\nback into a sun in the daytime.\n", "msg_date": "Thu, 25 Sep 2008 14:31:58 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index " } ]
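A sketch of the REINDEX suggested in this thread, with a size check around it, using the index name from the original post; note that REINDEX blocks writes to the table while it runs, which matters given the heavy COPY traffic mentioned:

    SELECT pg_size_pretty(pg_relation_size('gene__key_primaryidentifier'));
    REINDEX INDEX gene__key_primaryidentifier;
    SELECT pg_size_pretty(pg_relation_size('gene__key_primaryidentifier'));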
[ { "msg_contents": "Hello,\n\npostmaster heavily loads processor. The database is accessed from java\naplication (with several threads), C applications and from PHP scripts.\n\nIt seems that one php script, called periodicaly, rises the load but the\nscript is very simple, something like this:\n\n$var__base = new baza($dbhost,$dbport,$dbname,$dbuser,$dbpasswd);\n$pok_baza = new upit($var__base->veza);\n$upit_datum=\"SELECT * FROM system_alarm WHERE date= '$danas' AND\ntime>=(LOCALTIME - interval '$vrijeme_razmak hours') ORDER BY date DESC,\ntime DESC\";\n\nThe statment is executed in approximately 0.6 sec.\n\nThe number of open connections is constantly 107.\n\nThe operating system is Debian GNU/Linux kernel 2.6.18-4-686.\nDatabase version is PostgreSQL 8.2.4.\n\n\nThank you very much for any help.\n\nMaja Stula\n\n\n_________________________________________________________________________\n\nThe result of the top command:\n\ntop - 20:44:58 up 5:36, 1 user, load average: 1.31, 1.39, 1.24\nTasks: 277 total, 2 running, 275 sleeping, 0 stopped, 0 zombie\nCpu(s): 11.5%us, 2.2%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.0%si, \n0.0%st\nMem: 3370808k total, 1070324k used, 2300484k free, 49484k buffers\nSwap: 1951888k total, 0k used, 1951888k free, 485396k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 4990 postgres 25 0 41160 19m 18m R 100 0.6 1:36.74 postmaster\n15278 test 24 0 1000m 40m 5668 S 9 1.2 1:42.37 java\n18892 root 15 0 2468 1284 884 R 0 0.0 0:00.05 top\n 1 root 15 0 2044 696 596 S 0 0.0 0:02.51 init\n 2 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0\n 3 root 34 19 0 0 0 S 0 0.0 0:00.12 ksoftirqd/0\n 4 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1\n 5 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/1\n 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/2\n 7 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/2\n\n__________________________________________________________________________\n\nThe result of vmstat command:\n\nkamis03:/etc# vmstat 1\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 2 0 0 2271356 49868 505252 0 0 2 32 40 83 6 1\n93 0\n 2 0 0 2271232 49868 505304 0 0 0 2348 459 1118 14 2\n84 0\n 3 0 0 2271232 49868 505304 0 0 0 16 305 1197 11 2\n87 0\n 3 0 0 2270984 49868 505432 0 0 0 8 407 1821 15 3\n82 0\n 2 0 0 2270984 49868 505432 0 0 0 0 271 1328 11 2\n87 0\n 1 0 0 2270984 49868 505440 0 0 0 24 375 1530 5 1\n94 0\n 2 0 0 2270488 49868 505440 0 0 0 1216 401 1541 12 2\n86 0\n\n__________________________________________________________________________\n\nThe cpu configuration is:\n\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 0\nsiblings : 4\ncore id : 0\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3194.46\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 0\nsiblings : 4\ncore id : 1\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags 
: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3191.94\n\nprocessor : 2\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 0\nsiblings : 4\ncore id : 2\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3192.01\n\nprocessor : 3\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 0\nsiblings : 4\ncore id : 3\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3192.01\n\nprocessor : 4\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 1\nsiblings : 4\ncore id : 0\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3191.98\n\nprocessor : 5\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 1\nsiblings : 4\ncore id : 1\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3191.98\n\nprocessor : 6\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 1\nsiblings : 4\ncore id : 2\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3191.97\n\nprocessor : 7\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU E5310 @ 1.60GHz\nstepping : 7\ncpu MHz : 1596.076\ncache size : 4096 KB\nphysical id : 1\nsiblings : 4\ncore id : 3\ncpu cores : 4\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx 
lm\nconstant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm\nbogomips : 3191.97\n\n__________________________________________________________________________\n\n\nPostgresql.conf file:\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\". Some settings, such as listen_addresses, require\n# a postmaster shutdown and restart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\nhba_file = '/etc/postgresql/8.1/main/pg_hba.conf'\t# host-based\nauthentication file\nident_file = '/etc/postgresql/8.1/main/pg_ident.conf'\t# IDENT\nconfiguration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\nexternal_pid_file = '/var/run/postgresql/8.1-main.pid'\t\t# write an extra\npid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#listen_addresses = 'localhost'\t\t# what IP address(es) to listen on;\n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\nport = 5432\n\n# Maksimalni broj konekcija je podignut na 1000\n# Maja 15.6\nmax_connections = 1000\n#max_connections = 100\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 2\nunix_socket_directory = '/var/run/postgresql'\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t\t# octal\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60\t\t# 1-600, in seconds\nssl = false\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres'\n#krb_server_hostname = ''\t\t# empty string matches any keytab entry\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\n#shared_buffers = 1000\t\t\t# min 16 or max_connections*2, 8KB each\n# broj buffera mora biti dva puta veci od max. broj konekcija\nshared_buffers = 2000\n#temp_buffers = 1000\t\t\t# min 100, 8KB each\n#max_prepared_transactions = 5\t\t# can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n#work_mem = 1024\t\t\t# min 64, size in KB\n#maintenance_work_mem = 16384\t\t# min 1024, size in KB\n#max_stack_depth = 2048\t\t\t# min 100, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000\t\t\t# min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200\t\t\t# 10-10000 milliseconds between rounds\n#bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on\t\t\t\t# turns forced synchronization on or off\n#wal_sync_method = fsync\t\t# the default is the first option\n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = on\t\t\t# recover from partial page writes\n#wal_buffers = 8\t\t\t# min 4, 8KB each\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3\t\t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t\t# range 30-3600, in seconds\n#checkpoint_warning = 30\t\t# in seconds, 0 is off\n\n# - Archiving 
-\n\n#archive_command = ''\t\t\t# command to use to archive a logfile\n\t\t\t\t\t# segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 1000\t\t# typically 8KB each\n#random_page_cost = 4\t\t\t# units are one sequential page fetch\n\t\t\t\t\t# cost\n#cpu_tuple_cost = 0.01\t\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t\t# (same)\n#cpu_operator_cost = 0.0025\t\t# (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10\t\t# range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables collapsing of explicit\n\t\t\t\t\t# JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr'\t\t# Valid values are combinations of\n\t\t\t\t\t# stderr, syslog and eventlog,\n\t\t\t\t\t# depending on platform.\n\n# This is used when logging to stderr:\n#redirect_stderr = off\t\t\t# Enable capturing of stderr into log\n\t\t\t\t\t# files\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log'\t\t# Directory where log files are written\n\t\t\t\t\t# Can be absolute or relative to PGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\n\t\t\t\t\t# Can include strftime() escapes\n#log_truncate_on_rotation = off # If on, any existing log file of the same\n\t\t\t\t\t# name as the new log file will be\n\t\t\t\t\t# truncated rather than appended to. But\n\t\t\t\t\t# such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on restarts\n\t\t\t\t\t# or size-driven rotation. Default is\n\t\t\t\t\t# off, meaning append to existing files\n\t\t\t\t\t# in all cases.\n#log_rotation_age = 1440\t\t# Automatic rotation of logfiles will\n\t\t\t\t\t# happen after so many minutes. 0 to\n\t\t\t\t\t# disable.\n#log_rotation_size = 10240\t\t# Automatic rotation of logfiles will\n\t\t\t\t\t# happen after so many kilobytes of log\n\t\t\t\t\t# output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice\t\t# Values, in order of decreasing detail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# log\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\n#log_min_messages = notice\t\t# Values, in order of decreasing detail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic\n\n#log_error_verbosity = default\t\t# terse, default, or verbose messages\n\n#log_min_error_statement = panic\t# Values in order of increasing severity:\n\t\t\t\t \t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t \t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# panic(off)\n\n#log_min_duration_statement = -1\t# -1 is disabled, 0 logs all statements\n\t\t\t\t\t# and their durations, in milliseconds.\n\n#silent_mode = off\t\t\t# DO NOT USE without syslog or\n\t\t\t\t\t# redirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\nlog_line_prefix = '%t '\t\t\t# Special values:\n\t\t\t\t\t# %u = user name\n\t\t\t\t\t# %d = database name\n\t\t\t\t\t# %r = remote host and port\n\t\t\t\t\t# %h = remote host\n\t\t\t\t\t# %p = PID\n\t\t\t\t\t# %t = timestamp (no milliseconds)\n\t\t\t\t\t# %m = timestamp with milliseconds\n\t\t\t\t\t# %i = command tag\n\t\t\t\t\t# %c = session id\n\t\t\t\t\t# %l = session line number\n\t\t\t\t\t# %s = session start timestamp\n\t\t\t\t\t# %x = transaction id\n\t\t\t\t\t# %q = stop here in non-session\n\t\t\t\t\t# processes\n\t\t\t\t\t# %% = '%'\n\t\t\t\t\t# e.g. 
'<%u%%%d> '\n#log_statement = 'none'\t\t\t# none, mod, ddl, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = on\n#stats_command_string = off\n#stats_block_level = off\nstats_row_level = on\n#stats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on\t\t\t# enable autovacuum subprocess?\n#autovacuum_naptime = 60\t\t# time between autovacuum runs, in secs\n#autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_threshold = 500\t# min # of tuple updates before\n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.4\t# fraction of rel size before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_scale_factor = 0.2\t# fraction of rel size before\n\t\t\t\t\t# analyze\n#autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for\n\t\t\t\t\t# autovac, -1 means use\n\t\t\t\t\t# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for\n\t\t\t\t\t# autovac, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses\n\t\t\t\t\t# the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0\t\t\t# 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown\t\t\t# actually, defaults to TZ\n\t\t\t\t\t# environment setting\n#australian_timezones = off\n#extra_float_digits = 0\t\t\t# min -15, max 2\n#client_encoding = sql_ascii\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'en_US.UTF-8'\t\t\t# locale for system error message\n\t\t\t\t\t# strings\nlc_monetary = 'en_US.UTF-8'\t\t\t# locale for monetary formatting\nlc_numeric = 'en_US.UTF-8'\t\t\t# locale for number formatting\nlc_time = 'en_US.UTF-8'\t\t\t\t# locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000\t\t# in milliseconds\n#max_locks_per_transaction = 64\t\t# min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#backslash_quote = 
safe_encoding\t# on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = off\n#regex_flavor = advanced\t\t# advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = ''\t\t# list of custom variable class names\n\n\n\n", "msg_date": "Thu, 25 Sep 2008 21:13:21 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "CPU load" }, { "msg_contents": "2008/9/25 <[email protected]>:\n\n> The result of the top command:\n>\n> top - 20:44:58 up 5:36, 1 user, load average: 1.31, 1.39, 1.24\n> Tasks: 277 total, 2 running, 275 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 11.5%us, 2.2%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.0%si,\n> 0.0%st\n> Mem: 3370808k total, 1070324k used, 2300484k free, 49484k buffers\n> Swap: 1951888k total, 0k used, 1951888k free, 485396k cached\n\nSNIP\n\n> The result of vmstat command:\n>\n> kamis03:/etc# vmstat 1\n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 2 0 0 2271356 49868 505252 0 0 2 32 40 83 6 1\n> 93 0\n> 2 0 0 2271232 49868 505304 0 0 0 2348 459 1118 14 2\n> 84 0\n> 3 0 0 2271232 49868 505304 0 0 0 16 305 1197 11 2\n> 87 0\n> 3 0 0 2270984 49868 505432 0 0 0 8 407 1821 15 3\n\nIf that's what it looks like your server is running just fine. Load\nof 1.31, 85+% idle, no wait time. Or is that top and vmstat output\nfrom when the server is running fine?\n", "msg_date": "Thu, 25 Sep 2008 14:00:11 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "> If that's what it looks like your server is running just fine. Load\n> of 1.31, 85+% idle, no wait time. Or is that top and vmstat output\n> from when the server is running fine?\n\nDon't forget that there are 8 CPUs, and the backend will only run on one\nof them.\n\nBut I concur that this seems ok.\nHow many rows are returned? Is 0.6 seconds an unacceptable time for that?\n\nIf there is a lot of sorting going on and the pages are residing in the\nbuffer, I would expect high CPU load.\n\nNormally, I am quite happy if my database is CPU bound. I start worrying\nif I/O wait grows too high.\n\nYours,\nLaurenz Albe\n", "msg_date": "Fri, 26 Sep 2008 08:43:25 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "Thank's for your response.\n\nThe situation is that the top result is when the server is already\nexhibiting problems.\n\nThe number of rows returned by the query varies, right now is:\n\n49 row(s)\nTotal runtime: 3,965.718 ms\nThe table currently has 971582 rows.\n\nBut the problem is that when database server is restarted everything works\nfine and fast. No heavy loads of the processor and as time passes\nsituation with the processor is worsen.\n\nI forget to mention that php scrip is executed as a web application \n(Apache web server 2.2.3, php installed as a Server API\tApache 2.0\nHandler) called periodically each 8 seconds. After the restart of the\npostgres server everything works fine for several hours, the web\napplication has a fast response when opening a web page. 
But after some\ntime postmaster process (sometimes two postmaster process both owned by\npostgres user) rises and response time of the web application becomes\nslow, i.e. opening a php page with postgres access last for 8-10 seconds\nor even more. The php configuration for the postgres is default\n\n\nPostgreSQL Support\tenabled\nPostgreSQL(libpq) Version \t8.1.8\nMultibyte character support \tenabled\nSSL support \tenabled\nActive Persistent Links \t1\nActive Links \t1\n\nDirective\tLocal Value\tMaster Value\npgsql.allow_persistent\tOn\tOn\npgsql.auto_reset_persistent\tOff\tOff\npgsql.ignore_notice\tOff\tOff\npgsql.log_notice\tOff\tOff\npgsql.max_links\tUnlimited\tUnlimited\npgsql.max_persistent\tUnlimited\tUnlimited\n\n\n\n>> If that's what it looks like your server is running just fine. Load\n>> of 1.31, 85+% idle, no wait time. Or is that top and vmstat output\n>> from when the server is running fine?\n>\n> Don't forget that there are 8 CPUs, and the backend will only run on one\n> of them.\n>\n> But I concur that this seems ok.\n> How many rows are returned? Is 0.6 seconds an unacceptable time for that?\n>\n> If there is a lot of sorting going on and the pages are residing in the\n> buffer, I would expect high CPU load.\n>\n> Normally, I am quite happy if my database is CPU bound. I start worrying\n> if I/O wait grows too high.\n>\n> Yours,\n> Laurenz Albe\n>\n\n\n", "msg_date": "Fri, 26 Sep 2008 13:28:07 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: CPU load" }, { "msg_contents": "kiki wrote:\n> The number of rows returned by the query varies, right now is:\n>\n> 49 row(s)\n> Total runtime: 3,965.718 ms\n> The table currently has 971582 rows.\n>\n> But the problem is that when database server is restarted everything works\n> fine and fast. No heavy loads of the processor and as time passes\n> situation with the processor is worsen.\n\nIt would be interesting to know the result of EXPLAIN ANALYZE for the\nquery, both when it performs well and when it doesn't.\n\nOne thing I see right away when I look at your postgresql.conf is that\nyou have set shared_buffers to an awfully small value of 2000, when you have\nenough memory on the machine (vmstat reports 2GB free memory, right?).\n\nDoes the situation improve if you set it to a higher value?\n\nYours,\nLaurenz Albe\n", "msg_date": "Fri, 26 Sep 2008 14:52:33 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "It would be useful to confirm that this is a backend process.\nWith top, hit the 'c' key to show the full path / description of the\nprocess.\nBackend postgres processes should then have more useful descriptions of what\nthey are doing and identifying themselves.\nYou can also confirm what query is causing that by lining up the process id\nfrom top with the one returned by:\n\nselect current_query, procpid from pg_stat_activity where current_query not\nlike '<IDLE%';\n\nOr by simply using the process id for the where clause (where procpid = ).\n\nHow often is the table being queried modified? 
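As a rough illustration of that lookup, using the 8.2-era column names (procpid, current_query) and the postmaster PID 4990 reported by top earlier in this thread, the query might look something like:

SELECT procpid, usename, current_query, now() - query_start AS runtime
FROM pg_stat_activity
WHERE procpid = 4990;    -- PID taken from the top output above

-- or list every backend that is not idle, longest-running first:
SELECT procpid, current_query, now() - query_start AS runtime
FROM pg_stat_activity
WHERE current_query NOT LIKE '<IDLE>%'
ORDER BY runtime DESC;

(current_query is only populated when command-string collection is enabled in postgresql.conf.)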
Between the startup when the\nquery is fast, and when it slows down, is there a lot of modification to its\nrows?\n\n\nOn Fri, Sep 26, 2008 at 5:52 AM, Albe Laurenz <[email protected]>wrote:\n\n> kiki wrote:\n> > The number of rows returned by the query varies, right now is:\n> >\n> > 49 row(s)\n> > Total runtime: 3,965.718 ms\n> > The table currently has 971582 rows.\n> >\n> > But the problem is that when database server is restarted everything\n> works\n> > fine and fast. No heavy loads of the processor and as time passes\n> > situation with the processor is worsen.\n>\n> It would be interesting to know the result of EXPLAIN ANALYZE for the\n> query, both when it performs well and when it doesn't.\n>\n> One thing I see right away when I look at your postgresql.conf is that\n> you have set shared_buffers to an awfully small value of 2000, when you\n> have\n> enough memory on the machine (vmstat reports 2GB free memory, right?).\n>\n> Does the situation improve if you set it to a higher value?\n>\n> Yours,\n> Laurenz Albe\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt would be useful to confirm that this is a backend process.With top, hit the 'c' key to show the full path / description of the process.Backend postgres processes should then have more useful descriptions of what they are doing and identifying themselves.\nYou can also confirm what query is causing that by lining up the process id from top with the one returned by:select current_query, procpid from pg_stat_activity where current_query not like '<IDLE%';\nOr by simply using the process id for the where clause (where procpid = ).How often is the table being queried modified?  Between the startup when the query is fast, and when it slows down, is there a lot of modification to its rows? \nOn Fri, Sep 26, 2008 at 5:52 AM, Albe Laurenz <[email protected]> wrote:\nkiki wrote:\n> The number of rows returned by the query varies, right now is:\n>\n> 49 row(s)\n> Total runtime: 3,965.718 ms\n> The table currently has 971582 rows.\n>\n> But the problem is that when database server is restarted everything works\n> fine and fast. No heavy loads of the processor and as time passes\n> situation with the processor is worsen.\n\nIt would be interesting to know the result of EXPLAIN ANALYZE for the\nquery, both when it performs well and when it doesn't.\n\nOne thing I see right away when I look at your postgresql.conf is that\nyou have set shared_buffers to an awfully small value of 2000, when you have\nenough memory on the machine (vmstat reports 2GB free memory, right?).\n\nDoes the situation improve if you set it to a higher value?\n\nYours,\nLaurenz Albe\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 26 Sep 2008 08:01:17 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "Thanksďż˝ for the instructions for detecting the problem.\nIt helped a lot.\n\nFirst I have increased shared_buffers from 2000 to 8000. 
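Raising shared_buffers usually also means raising the kernel's shared memory limit, since the whole buffer pool is allocated as one SysV shared memory segment. A minimal sketch of the two changes, with purely illustrative values (8000 buffers of 8 kB is roughly 64 MB):

# postgresql.conf (a bare number here means 8 kB pages)
shared_buffers = 8000        # was 2000

# kernel limit, e.g. 128 MB; check the current value with: sysctl kernel.shmmax
sysctl -w kernel.shmmax=134217728
# make it persistent by adding to /etc/sysctl.conf:
#   kernel.shmmax = 134217728

The postmaster has to be restarted for a new shared_buffers value to take effect.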
Since the\npostgresql is on Debian I had to increase SHMMAX kernel value.\nEverything is working much faster now.\nThere is still heavy load of postmaster process (up to 100%) for a simple\nquery\n\nEXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\nconfirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\nLIMIT 1;\n\n(the table is indexed by id_camera, has around 1 milion rows, and this\nquery returns around 700000 rows and is executed (EXPLAIN ANALYSE) in\naround 4800 ms, and this table is queried a lot although not so often\nqueried modified)\n\nbut I don't think that is strange behavior of the postgresql.\nAnd it is exhibited all the time; the postgresql reset does not influence\nit at all.\nOnce again thanks a lot, I learned a lot.\n\nRegards,\nMaja\n> It would be useful to confirm that this is a backend process.\n> With top, hit the 'c' key to show the full path / description of the\n> process.\n> Backend postgres processes should then have more useful descriptions of\n> what\n> they are doing and identifying themselves.\n> You can also confirm what query is causing that by lining up the process\n> id\n> from top with the one returned by:\n>\n> select current_query, procpid from pg_stat_activity where current_query\n> not\n> like '<IDLE%';\n>\n> Or by simply using the process id for the where clause (where procpid = ).\n>\n> How often is the table being queried modified? Between the startup when\n> the\n> query is fast, and when it slows down, is there a lot of modification to\n> its\n> rows?\n>\n>\n> On Fri, Sep 26, 2008 at 5:52 AM, Albe Laurenz\n> <[email protected]>wrote:\n>\n>> kiki wrote:\n>> > The number of rows returned by the query varies, right now is:\n>> >\n>> > 49 row(s)\n>> > Total runtime: 3,965.718 ms\n>> > The table currently has 971582 rows.\n>> >\n>> > But the problem is that when database server is restarted everything\n>> works\n>> > fine and fast. No heavy loads of the processor and as time passes\n>> > situation with the processor is worsen.\n>>\n>> It would be interesting to know the result of EXPLAIN ANALYZE for the\n>> query, both when it performs well and when it doesn't.\n>>\n>> One thing I see right away when I look at your postgresql.conf is that\n>> you have set shared_buffers to an awfully small value of 2000, when you\n>> have\n>> enough memory on the machine (vmstat reports 2GB free memory, right?).\n>>\n>> Does the situation improve if you set it to a higher value?\n>>\n>> Yours,\n>> Laurenz Albe\n>>\n>> --\n>> Sent via pgsql-performance mailing list\n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n\n\n", "msg_date": "Mon, 29 Sep 2008 09:17:22 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: CPU load" }, { "msg_contents": "Hello Maja,\n\n> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\n> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\n> LIMIT 1;\n>\n> (the table is indexed by id_camera, has around 1 milion rows, and this\n> query returns around 700000 rows and is executed (EXPLAIN ANALYSE) in\n> around 4800 ms, and this table is queried a lot although not so often\n> queried modified)\n\n700.000 of 1.000.000 rows is around 70% ... that are nearly all rows.\nAs much as I read you, this table is not often modified. 
What reason\nis there for quering all that data again and again instead of keeping\nit in memory (should it be really needed) ?\n\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nno fx, no carrier pigeon\n-\nEuroPython 2009 will take place in Birmingham - Stay tuned!\n", "msg_date": "Mon, 29 Sep 2008 09:26:03 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "kiki wrote:\n> First I have increased shared_buffers from 2000 to 8000. Since the\n> postgresql is on Debian I had to increase SHMMAX kernel value.\n> Everything is working much faster now.\n\nGood to hear that the problem is gone.\n\n> There is still heavy load of postmaster process (up to 100%) for a simple\n> query\n> \n> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\n> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\n> LIMIT 1;\n> \n> (the table is indexed by id_camera, has around 1 milion rows, and this\n> query returns around 700000 rows and is executed (EXPLAIN ANALYSE) in\n> around 4800 ms, and this table is queried a lot although not so often\n> queried modified)\n> \n> but I don't think that is strange behavior of the postgresql.\n> And it is exhibited all the time; the postgresql reset does not influence\n> it at all.\n\nI'd expect a sequential scan for a query that returns 70% of the table.\n\nBut I cannot believe that this query returns more than one row since\nit has a \"LIMIT 1\". Can you enlighten me?\n\nIn the above query (with LIMIT 1), maybe an index on \"date\" could help.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Mon, 29 Sep 2008 09:39:09 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "Sorry, without LIMIT returns around 700000 rows.\nTried to index date column and time column but the performance is pretty\nmuch the same.\nEverything is OK, I just donďż˝t understand way is this query burdening the\nprocessor so much.\n\nRegards,\nMaja\n\n> kiki wrote:\n>> First I have increased shared_buffers from 2000 to 8000. Since the\n>> postgresql is on Debian I had to increase SHMMAX kernel value.\n>> Everything is working much faster now.\n>\n> Good to hear that the problem is gone.\n>\n>> There is still heavy load of postmaster process (up to 100%) for a\n>> simple\n>> query\n>>\n>> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\n>> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\n>> LIMIT 1;\n>>\n>> (the table is indexed by id_camera, has around 1 milion rows, and this\n>> query returns around 700000 rows and is executed (EXPLAIN ANALYSE) in\n>> around 4800 ms, and this table is queried a lot although not so often\n>> queried modified)\n>>\n>> but I don't think that is strange behavior of the postgresql.\n>> And it is exhibited all the time; the postgresql reset does not\n>> influence\n>> it at all.\n>\n> I'd expect a sequential scan for a query that returns 70% of the table.\n>\n> But I cannot believe that this query returns more than one row since\n> it has a \"LIMIT 1\". 
Can you enlighten me?\n>\n> In the above query (with LIMIT 1), maybe an index on \"date\" could help.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Mon, 29 Sep 2008 10:29:45 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: CPU load" }, { "msg_contents": "Hello Herald,\n\nthe queried table is used for communication between server application and\nweb user interface.\nWhen application detects an event it writes it down in table.\nThe web client checks every 10 second if something new is written in the\ntable.\nUsually nothing new is written but the client has to check it.\nI don't fetch all rows, usually just the last one written.\nThe speed of the query is not a problem but the strange thing is the\nprocessor load with postmaster when the query is executed.\nI donďż˝t now how to reduce processor load.\nShould I change some other settings beside shared_buffers like work_mem?\nOr maybe such processor load is OK?\n\nRegards,\nMaja\n\n> Hello Maja,\n>\n>> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\n>> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\n>> LIMIT 1;\n>>\n>> (the table is indexed by id_camera, has around 1 milion rows, and this\n>> query returns around 700000 rows and is executed (EXPLAIN ANALYSE) in\n>> around 4800 ms, and this table is queried a lot although not so often\n>> queried modified)\n>\n> 700.000 of 1.000.000 rows is around 70% ... that are nearly all rows.\n> As much as I read you, this table is not often modified. What reason\n> is there for quering all that data again and again instead of keeping\n> it in memory (should it be really needed) ?\n>\n>\n> Harald\n>\n> --\n> GHUM Harald Massa\n> persuadere et programmare\n> Harald Armin Massa\n> Spielberger Straďż˝e 49\n> 70435 Stuttgart\n> 0173/9409607\n> no fx, no carrier pigeon\n> -\n> EuroPython 2009 will take place in Birmingham - Stay tuned!\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Mon, 29 Sep 2008 10:39:29 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: CPU load" }, { "msg_contents": "On Mon, Sep 29, 2008 at 10:29:45AM +0200, [email protected] wrote:\n> >> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\n> >> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\n> >> LIMIT 1;\n> Sorry, without LIMIT returns around 700000 rows.\n> Tried to index date column and time column but the performance is pretty\n> much the same.\n> Everything is OK, I just don’t understand way is this query burdening the\n> processor so much.\n\n1. please do not top-post.\n2. 
for this query, you can use this index:\ncreate index xxx on system_alarm (id_camera, date, time) where confirmed = 'false' and dismissed = 'false';\nor you can make it without where:\ncreate index xxx on system_alarm (id_camera, confirmed, dismissed, date, time);\nbut if you usually have the criteria \"confirmed = 'false' and dismissed\n= 'false'\" then the first index should be faster.\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Mon, 29 Sep 2008 10:46:30 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "Please try to avoid top-posting where inappropriate.\n\nkiki wrote:\n>>> There is still heavy load of postmaster process (up to 100%) for a simple\n>>> query\n>>>\n>>> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND\n>>> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC\n>>> LIMIT 1;\n>>>\n>>> (the table is indexed by id_camera, has around 1 milion rows, and this\n>>> query returns around 700000 rows and is executed (EXPLAIN ANALYSE) in\n>>> around 4800 ms, and this table is queried a lot although not so often\n>>> queried modified)\n>>>\n>>> but I don't think that is strange behavior of the postgresql.\n>>> And it is exhibited all the time; the postgresql reset does not\n>>> influence it at all.\n>>\n>> I'd expect a sequential scan for a query that returns 70% of the table.\n>>\n>> But I cannot believe that this query returns more than one row since\n>> it has a \"LIMIT 1\". Can you enlighten me?\n>>\n>> In the above query (with LIMIT 1), maybe an index on \"date\" could help.\n>\n> Sorry, without LIMIT returns around 700000 rows.\n> Tried to index date column and time column but the performance is pretty\n> much the same.\n> Everything is OK, I just don't understand way is this query burdening the\n> processor so much.\n\nYes, for the query without the LIMIT clause I wouldn't expect any gain from\nindexing.\n\nProbably the CPU load is caused by the sorting.\nDoes it look different if you omit ORDER BY?\nMaybe the sort will perform better if you increase work_mem in postgresql.conf,\nyou could experiment with that.\n\nYours,\nLaurenz Albe\n", "msg_date": "Mon, 29 Sep 2008 10:50:27 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "kiki wrote:\n> The speed of the query is not a problem but the strange thing is the\n> processor load with postmaster when the query is executed.\n> I don’t now how to reduce processor load.\n\nDid you try without the ORDER BY?\nWhere are the execution plans?\n\nYours,\nLaurenz Albe\n", "msg_date": "Mon, 29 Sep 2008 14:06:44 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" }, { "msg_contents": "> kiki wrote:\n>> The speed of the query is not a problem but the strange thing is the\n>> processor load with postmaster when the query is executed.\n>> I donďż˝t now how to reduce processor load.\n>\n> Did you try without the ORDER BY?\n> Where are the execution plans?\n>\n> Yours,\n> Laurenz Albe\n>\n\nI expanded work_mem to 256 Mb and created index on table\n\ncreate index xxx on system_alarm (id_camera, date, time) where confirmed =\n'false' and dismissed = 'false';\n\nthe processor load now executing the query is max. 
70%\n\nthe query execution with and without order is:\n\nistra_system=> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE\nid_camera='3' AND confirmed='false' AND dismissed='false' ;\n\n Seq Scan on system_alarm (cost=0.00..24468.33 rows=735284 width=47)\n(actual time=90.792..1021.967 rows=724846 loops=1)\n Filter: ((id_camera = 3) AND (NOT confirmed) AND (NOT dismissed))\n Total runtime: 1259.426 ms\n(3 rows)\n\nistra_system=> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE\nid_camera='3' AND confirmed='false' AND dismissed='false' ORDER BY date\nDESC, time ;\n\n Sort (cost=96114.18..97952.39 rows=735284 width=47) (actual\ntime=2303.547..2602.116 rows=724846 loops=1)\n Sort Key: date, \"time\"\n -> Seq Scan on system_alarm (cost=0.00..24468.33 rows=735284\nwidth=47) (actual time=100.322..1115.837 rows=724846 loops=1)\n Filter: ((id_camera = 3) AND (NOT confirmed) AND (NOT dismissed))\n Total runtime: 2916.557 ms\n(5 rows)\n\nI think this is OK.\nThanx\n\n", "msg_date": "Mon, 29 Sep 2008 15:07:08 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: CPU load" }, { "msg_contents": "kiki wrote:\n> I expanded work_mem to 256 Mb and created index on table\n> \n> create index xxx on system_alarm (id_camera, date, time) where confirmed =\n> 'false' and dismissed = 'false';\n\nThat index is not used for the query (as could be expected).\nYou better remove it.\n\n> the processor load now executing the query is max. 70%\n> \n> the query execution with and without order is:\n> \n> istra_system=> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE\n> id_camera='3' AND confirmed='false' AND dismissed='false' ;\n> \n> Seq Scan on system_alarm (cost=0.00..24468.33 rows=735284 width=47) (actual time=90.792..1021.967 rows=724846 loops=1)\n> Filter: ((id_camera = 3) AND (NOT confirmed) AND (NOT dismissed))\n> Total runtime: 1259.426 ms\n> (3 rows)\n> \n> istra_system=> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE\n> id_camera='3' AND confirmed='false' AND dismissed='false' ORDER BY date\n> DESC, time ;\n> \n> Sort (cost=96114.18..97952.39 rows=735284 width=47) (actual time=2303.547..2602.116 rows=724846 loops=1)\n> Sort Key: date, \"time\"\n> -> Seq Scan on system_alarm (cost=0.00..24468.33 rows=735284 width=47) (actual time=100.322..1115.837 rows=724846 loops=1)\n> Filter: ((id_camera = 3) AND (NOT confirmed) AND (NOT dismissed))\n> Total runtime: 2916.557 ms\n> (5 rows)\n> \n> I think this is OK.\n\nI think so too.\nI would say it is OK for the query to use much CPU during sort as long as this\ndoes not last for too long.\n\nYours,\nLaurenz Albe\n", "msg_date": "Mon, 29 Sep 2008 15:15:27 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU load" } ]
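As a rough follow-up to the advice above (verify that the extra index really is unused before dropping it), the statistics views available in 8.2/8.3 show how often each index on the table has been scanned; xxx is the index name created earlier in this thread:

-- per-index scan counts since the statistics were last reset
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'system_alarm';

-- if the partial index never records any scans, remove it as suggested:
DROP INDEX xxx;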
[ { "msg_contents": "I've just had an interesting encounter with the slow full table update \nproblem that is inherent with MVCC\n\nThe system is 64 bit linux with 2.6.25 kernel feeding scsi disks.\n\nthe table is\n\nCREATE TABLE file (\n fileid integer NOT NULL,\n fileindex integer DEFAULT 0 NOT NULL,\n jobid integer NOT NULL,\n pathid integer NOT NULL,\n filenameid integer NOT NULL,\n markid integer DEFAULT 0 NOT NULL,\n lstat text NOT NULL,\n md5 text NOT NULL,\n perms text\n);\n\nALTER TABLE ONLY file\n ADD CONSTRAINT file_pkey PRIMARY KEY (fileid);\n\nCREATE INDEX file_fp_idx ON file USING btree (filenameid, pathid);\nCREATE INDEX file_jobid_idx ON file USING btree (jobid);\n\nThere are 2.7M rows.\n\nrunning update file set perms='0664' took about 10 mins\n\nduring this period, vmstat reported Blocks Out holding in the 4000 to \n6000 range.\n\n\nWhen I dropped the indexes this query ran in 48sec.\nBlocks out peaking at 55000.\n\nSo there is a double whammy.\nMVCC requires more work to be done when indexes are defined and then \nthis work\nresults in much lower IO, compounding the problem.\n\n\nComments anyone?\n\n\n--john\n\n\n", "msg_date": "Fri, 26 Sep 2008 07:24:55 +1200", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Slow updates, poor IO" }, { "msg_contents": "On Thu, Sep 25, 2008 at 1:24 PM, John Huttley <[email protected]> wrote:\n> I've just had an interesting encounter with the slow full table update\n> problem that is inherent with MVCC\n>\n> The system is 64 bit linux with 2.6.25 kernel feeding scsi disks.\n>\n> the table is\n>\n> CREATE TABLE file (\n> fileid integer NOT NULL,\n> fileindex integer DEFAULT 0 NOT NULL,\n> jobid integer NOT NULL,\n> pathid integer NOT NULL,\n> filenameid integer NOT NULL,\n> markid integer DEFAULT 0 NOT NULL,\n> lstat text NOT NULL,\n> md5 text NOT NULL,\n> perms text\n> );\n>\n> ALTER TABLE ONLY file\n> ADD CONSTRAINT file_pkey PRIMARY KEY (fileid);\n>\n> CREATE INDEX file_fp_idx ON file USING btree (filenameid, pathid);\n> CREATE INDEX file_jobid_idx ON file USING btree (jobid);\n>\n> There are 2.7M rows.\n>\n> running update file set perms='0664' took about 10 mins\n\nSo, how many rows would already be set to 0664? Would adding a where\nclause speed it up?\n\nupdate file set perms='0664' where perms <> '0664';\n\n> during this period, vmstat reported Blocks Out holding in the 4000 to 6000\n> range.\n>\n>\n> When I dropped the indexes this query ran in 48sec.\n> Blocks out peaking at 55000.\n>\n> So there is a double whammy.\n> MVCC requires more work to be done when indexes are defined and then this\n> work\n> results in much lower IO, compounding the problem.\n\nThat's because it becomes more random and less sequential. If you had\na large enough drive array you could get that kind of performance for\nupdating indexes, since the accesses would tend to hit different\ndrives most the time.\n\nUnder heavy load on the production servers at work we can see 30 to 60\nMegs a second random access with 12 drives, meaning 2.5 to 5Megs per\nsecond per drive. 
Sequential throughput is about 5 to 10 times\nhigher.\n\nWhat you're seeing are likely the effects of running a db on\ninsufficient drive hardware.\n", "msg_date": "Thu, 25 Sep 2008 13:35:19 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Thursday 25 September 2008, John Huttley <[email protected]> wrote:\n>\n> Comments anyone?\n\nDon't do full table updates? This is not exactly a news flash.\n\n\n-- \nAlan\n", "msg_date": "Thu, 25 Sep 2008 12:39:27 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Fri, 26 Sep 2008, John Huttley wrote:\n\n> running update file set perms='0664' took about 10 mins\n\nWhat do you have checkpoint_segments and shared_buffers set to? If you \nwant something that's doing lots of updates to perform well, you need to \nlet PostgreSQL have a decent size chunk of memory to buffer the index \nwrites with, so it's more likely they'll get combined into larger and \ntherefore more easily sorted blocks rather than as more random ones. The \nrandomness of the writes is why your write rate is so slow. You also need \nto cut down on the frequency of checkpoints which are very costly on this \ntype of statement.\n\nAlso: which version of PostgreSQL? 8.3 includes an improvement aimed at \nupdates like this you might benefit from.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 25 Sep 2008 17:57:35 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Hi,\n\nOn Fri, Sep 26, 2008 at 07:24:55AM +1200, John Huttley wrote:\n> I've just had an interesting encounter with the slow full table update \n> problem that is inherent with MVCC\n\nQuite apart from the other excellent observations in this thread, what\nmakes you think this is an MVCC issue exactly?\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Fri, 26 Sep 2008 09:15:57 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Hi Andrew,\nThere are two problems.\nThe first is the that if there is a table with a index and an update is \nperformed on a non indexed field,\nthe index is still re indexed. this is part of the trade-offs of MVCC.\nApparently this is documented under 'MVCC' in the manual. It should be \ndocumented under 'performance'\n\nWe should reasonably expect that the total amount of IO will go up, over \na non-indexed table.\n\nThe second thing is that the disk IO throughput goes way down.\n\nThis is not an issue with MVCC, as such, except that it exposes the \neffect of a write to an indexed field.\n--even if you don't expect it.\n\n--john\n\nAndrew Sullivan wrote:\n> Hi,\n>\n> On Fri, Sep 26, 2008 at 07:24:55AM +1200, John Huttley wrote:\n> \n>> I've just had an interesting encounter with the slow full table update \n>> problem that is inherent with MVCC\n>> \n>\n> Quite apart from the other excellent observations in this thread, what\n> makes you think this is an MVCC issue exactly?\n>\n> A\n>\n> \n\n\n\n\n\n\n\nHi Andrew,\nThere are two problems.\nThe first is the that if there is a table with a index and an update is\nperformed on a non indexed field,\nthe index is still re indexed. 
this is part of the trade-offs of MVCC.\nApparently this is documented under 'MVCC' in the manual. It should be\ndocumented under 'performance'\n\nWe should reasonably expect that the total amount of IO will go up,\nover a non-indexed table.\n\nThe second thing is that the disk IO throughput goes way down.\n\nThis is not an issue with MVCC, as such, except that it exposes the\neffect of a write to an indexed field.\n--even if you don't expect it.\n\n--john\n\nAndrew Sullivan wrote:\n\nHi,\n\nOn Fri, Sep 26, 2008 at 07:24:55AM +1200, John Huttley wrote:\n \n\nI've just had an interesting encounter with the slow full table update \nproblem that is inherent with MVCC\n \n\n\nQuite apart from the other excellent observations in this thread, what\nmakes you think this is an MVCC issue exactly?\n\nA", "msg_date": "Sat, 27 Sep 2008 11:03:38 +1200", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Hi Greg,\n\nI've got 32M shared on a 1G machine and 16 checkpoint segments.\nI'll run some tests against 64 segments and see what happens.\n\nYour previous postings were extremely helpful wrt the MVCC issue.\nI thank you!\n\n-john\n\n\nGreg Smith wrote:\n> On Fri, 26 Sep 2008, John Huttley wrote:\n>\n>> running update file set perms='0664' took about 10 mins\n>\n> What do you have checkpoint_segments and shared_buffers set to? If \n> you want something that's doing lots of updates to perform well, you \n> need to let PostgreSQL have a decent size chunk of memory to buffer \n> the index writes with, so it's more likely they'll get combined into \n> larger and therefore more easily sorted blocks rather than as more \n> random ones. The randomness of the writes is why your write rate is \n> so slow. You also need to cut down on the frequency of checkpoints \n> which are very costly on this type of statement.\n>\n> Also: which version of PostgreSQL? 8.3 includes an improvement aimed \n> at updates like this you might benefit from.\n>\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n", "msg_date": "Sat, 27 Sep 2008 11:09:31 +1200", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Sat, 27 Sep 2008, John Huttley wrote:\n\n> I've got 32M shared on a 1G machine and 16 checkpoint segments.\n> I'll run some tests against 64 segments and see what happens.\n\nIncrease shared_buffers to 256MB as well. That combination should give \nyou much better performance with the type of update you're doing. Right \nnow the database server has to write the index blocks updated to disk all \nthe time because it has so little working room to store them in. If an \nindex block is updated but there is room to keep it memory, it doesn't \nhave to get written out, which considerably lowers the overhead here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 27 Sep 2008 03:04:08 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Fri, Sep 26, 2008 at 5:03 PM, John Huttley <[email protected]> wrote:\n> Hi Andrew,\n> There are two problems.\n> The first is the that if there is a table with a index and an update is\n> performed on a non indexed field,\n> the index is still re indexed.\n\nI assume you mean updated, not reindexed, as reindexed has a different\nmeaning as regards postgresql. 
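For reference, a sketch of the postgresql.conf changes being discussed, using the numbers mentioned in this thread (starting points for a 1 GB machine doing bulk updates, not a general recommendation):

shared_buffers = 256MB        # up from 32MB
checkpoint_segments = 64      # up from 16; fewer, larger checkpoints
# checkpoint_completion_target is not mentioned in the thread, but it is
# another 8.3 setting often raised together with checkpoint_segments:
checkpoint_completion_target = 0.9

shared_buffers needs a postmaster restart (and possibly a larger kernel SHMMAX); checkpoint_segments and checkpoint_completion_target only need a reload.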
Also, this is no longer true as of\nversion 8.3. If you're updating non-indexed fields a lot and you're\nnot running 8.3 you are doing yourself a huge disservice.\n\n>this is part of the trade-offs of MVCC.\n\nwas... was a part of the trade-offs.\n\n> We should reasonably expect that the total amount of IO will go up, over a\n> non-indexed table.\n>\n> The second thing is that the disk IO throughput goes way down.\n>\n> This is not an issue with MVCC, as such, except that it exposes the effect\n> of a write to an indexed field.\n\nIt's really an effect of parallel updates / writes / accesses, and is\nalways an issue for a database running on a poor storage subsystem. A\ndb with a two drive mirror set is always going to be at a disadvantage\nto one running on a dozen or so drives in a RAID-10\n", "msg_date": "Sat, 27 Sep 2008 09:09:09 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Scott Marlowe wrote:\n> On Fri, Sep 26, 2008 at 5:03 PM, John Huttley <[email protected]> wrote:\n> \n>> Hi Andrew,\n>> There are two problems.\n>> The first is the that if there is a table with a index and an update is\n>> performed on a non indexed field,\n>> the index is still re indexed.\n>> \n>\n> I assume you mean updated, not reindexed, as reindexed has a different\n> meaning as regards postgresql. Also, this is no longer true as of\n> version 8.3. If you're updating non-indexed fields a lot and you're\n> not running 8.3 you are doing yourself a huge disservice.\n>\n> \n\nYes sorry, I mean all indexes are updated even when the updated field is \nnot indexed.\nI'm running 8.3.3\n>> this is part of the trade-offs of MVCC.\n>> \n>\n> was... was a part of the trade-offs.\n>\n> \nYou are thinking of HOT?\nI don't think it applies in the case of full table updates??\n\n>> We should reasonably expect that the total amount of IO will go up, over a\n>> non-indexed table.\n>>\n>> The second thing is that the disk IO throughput goes way down.\n>>\n>> This is not an issue with MVCC, as such, except that it exposes the effect\n>> of a write to an indexed field.\n>> \n>\n> It's really an effect of parallel updates / writes / accesses, and is\n> always an issue for a database running on a poor storage subsystem. A\n> db with a two drive mirror set is always going to be at a disadvantage\n> to one running on a dozen or so drives in a RAID-10\n>\n> \nOh well, I'm forever going to be disadvantaged.\n\n\n\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Fri, Sep 26, 2008 at 5:03 PM, John Huttley <[email protected]> wrote:\n \n\nHi Andrew,\nThere are two problems.\nThe first is the that if there is a table with a index and an update is\nperformed on a non indexed field,\nthe index is still re indexed.\n \n\n\nI assume you mean updated, not reindexed, as reindexed has a different\nmeaning as regards postgresql. Also, this is no longer true as of\nversion 8.3. If you're updating non-indexed fields a lot and you're\nnot running 8.3 you are doing yourself a huge disservice.\n\n \n\n\nYes sorry, I mean all indexes are updated even when the updated field\nis not indexed.\nI'm running 8.3.3\n\n\n\nthis is part of the trade-offs of MVCC.\n \n\n\nwas... 
was a part of the trade-offs.\n\n \n\nYou are thinking of HOT?\nI don't think it applies in the case of full table updates??\n\n\n\n\nWe should reasonably expect that the total amount of IO will go up, over a\nnon-indexed table.\n\nThe second thing is that the disk IO throughput goes way down.\n\nThis is not an issue with MVCC, as such, except that it exposes the effect\nof a write to an indexed field.\n \n\n\nIt's really an effect of parallel updates / writes / accesses, and is\nalways an issue for a database running on a poor storage subsystem. A\ndb with a two drive mirror set is always going to be at a disadvantage\nto one running on a dozen or so drives in a RAID-10\n\n \n\nOh well, I'm forever going to be disadvantaged.", "msg_date": "Sun, 28 Sep 2008 11:33:56 +1300", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Sat, Sep 27, 2008 at 4:33 PM, John Huttley <[email protected]> wrote:\n>\n> > > this is part of the trade-offs of MVCC.\n>\n> > was... was a part of the trade-offs.\n>\n> You are thinking of HOT?\n> I don't think it applies in the case of full table updates??\n\nSure, you just need a table with plenty of empty space in it, either\nfrom vacuumed previous deletes / inserts or with a low fill factor\nlike 50%.\n\n> It's really an effect of parallel updates / writes / accesses, and is\n> always an issue for a database running on a poor storage subsystem. A\n> db with a two drive mirror set is always going to be at a disadvantage\n> to one running on a dozen or so drives in a RAID-10\n>\n> Oh well, I'm forever going to be disadvantaged.\n\nWhy? A decent caching raid controller and a set of 4 to 8 SATA drives\ncan make a world of difference and the cost is not that high for the\ngain in performance. Even going to 4 drives in a software RAID-10 can\nmake a lot of difference in these situations, and that can be done\nwith spare machines and hard drives.\n", "msg_date": "Sat, 27 Sep 2008 16:54:20 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "John Huttley <[email protected]> writes:\n> Scott Marlowe wrote:\n>> was... was a part of the trade-offs.\n\n> You are thinking of HOT?\n> I don't think it applies in the case of full table updates??\n\nSure, as long as there's enough free space on each page.\n\nIf you wanted to make a table that was optimized for this kind of thing,\nyou could try creating it with fillfactor 50.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Sep 2008 10:33:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO " }, { "msg_contents": "I have had great success using FILLFACTOR on certain tables where big\nupdates like this occur and improving performance. It is still not as fast\nas I would like, but there are significant gains. A big disk array won't\nhelp you as much as it should -- yes it will be faster, but it will still be\nchugging during one of these sorts of large updates and very inefficiently\nat that.\n\nOn some of my cases, a FILLFACTOR of 95 or 98 is enough to do the trick. On\nothers, 80 or 70 works.\nIt depends on the size of your rows versus the size of the modifications you\nmake. A fillfactor of 99 holds between ~80 bytes and one row-width worth of\nfree space in every page, and is all that is needed if you have larger rows\nand only modify small fields such as ints. 
I'm not sure why FILLFACTOR = 99\nisn't the default, to be honest. The size difference on disk is far less\nthan 1% since most tables can't fit an exact number of rows in one page, and\nthe benefit for updates is huge in certain cases.\nOn the other hand, your table has a narrow row width and will fit many rows\non one page, and if you are modifying text or varchars, you may need more\nspace for those reserved in the fillfactor void and a smaller FILLFACTOR\nsetting on the table, down to about 50 for updates where the updated rows\naccount for a big fraction of the row width.\n\nA second benefit of using a fillfactor is that you can CLUSTER on an index\nand the table will retain that ordering for longer while\ninserts/updates/deletes occur. A fillfactor setting, REINDEX, then CLUSTER\nsequence can have a big impact.\n\n\nOn Sun, Sep 28, 2008 at 7:33 AM, Tom Lane <[email protected]> wrote:\n\n> John Huttley <[email protected]> writes:\n> > Scott Marlowe wrote:\n> >> was... was a part of the trade-offs.\n>\n> > You are thinking of HOT?\n> > I don't think it applies in the case of full table updates??\n>\n>\n\nI have had great success using FILLFACTOR on certain tables where big updates like this occur and improving performance.  It is still not as fast as I would like, but there are significant gains.  A big disk array won't help you as much as it should -- yes it will be faster, but it will still be chugging during one of these sorts of large updates and very inefficiently at that. \nOn some of my cases, a FILLFACTOR of 95 or 98 is enough to do the trick.  On others, 80 or 70 works.It depends on the size of your rows versus the size of the modifications you make.  A fillfactor of 99 holds between ~80 bytes and one row-width worth of free space in every page, and is all that is needed if you have larger rows and only modify small fields such as ints.  I'm not sure why FILLFACTOR = 99 isn't the default, to be honest.  The size difference on disk is far less than 1% since most tables can't fit an exact number of rows in one page, and the benefit for updates is huge in certain cases.\nOn the other hand, your table has a narrow row width and will fit many rows on one page, and if you are modifying text or varchars, you may need more space for those reserved in the fillfactor void and a smaller FILLFACTOR setting on the table, down to about 50 for updates where the updated rows account for a big fraction of the row width.\nA second benefit of using a fillfactor is that you can CLUSTER on an index and the table will retain that ordering for longer while inserts/updates/deletes occur.  A fillfactor setting, REINDEX, then CLUSTER sequence can have a big impact.\nOn Sun, Sep 28, 2008 at 7:33 AM, Tom Lane <[email protected]> wrote:\nJohn Huttley <[email protected]> writes:\n> Scott Marlowe wrote:\n>> was...  was a part of the trade-offs.\n\n> You are thinking of HOT?\n> I don't think it applies in the case of full table updates??", "msg_date": "Sun, 28 Sep 2008 09:24:06 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Ahh! I've not dealt with that before. 
I'll look it up.\nThanks Tom.\n\n\nTom Lane wrote:\n> John Huttley <[email protected]> writes:\n> \n>\n>> You are thinking of HOT?\n>> I don't think it applies in the case of full table updates??\n>> \n>\n> Sure, as long as there's enough free space on each page.\n>\n> If you wanted to make a table that was optimized for this kind of thing,\n> you could try creating it with fillfactor 50.\n>\n> \t\t\tregards, tom lane\n>\n> \n\n\n\n\n\n\n\nAhh! I've not dealt with that before. I'll look it up.\nThanks Tom.\n\n\nTom Lane wrote:\n\nJohn Huttley <[email protected]> writes:\n \n\n\nYou are thinking of HOT?\nI don't think it applies in the case of full table updates??\n \n\n\nSure, as long as there's enough free space on each page.\n\nIf you wanted to make a table that was optimized for this kind of thing,\nyou could try creating it with fillfactor 50.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Sep 2008 10:53:31 +1300", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Thanks to everyone that responded.\nI've done some benchmarking\n\ncheckpoint _segments=16 is fine, going to 64 made no improvement.\nUsing \"update file set size=99\" as a statement, but changing 99 on each \nrun..\n\nWith 32M shared memory, time in sec and leaving the system idle long \nenough between runs for auto vacuum to complete.\n\n415\n421\n470\n\nThe I decided to drop the Db and restore from a dump\n\n1150\n1500\n1018\n1071\n1077\n1140\n\nThen I tried shared_mem=256M as suggested.\n\n593\n544\n\nSo thats made a big difference. vmstat showed a higher, more consistent, \nIO level\n\nI wondered why it slowed down after a restore. I thought it would \nimprove, less fragmentation\nand all that. So I tried a reindex on all three indexes.\n\n209\n228\n\nSo thats it! lots of ram and reindex as part of standard operation.\n\nInterestingly, the reindexing took about 16s each. The update on the \ntable with no indexes took about 48sec\nSo the aggregate time for each step would be about 230s. I take that as \nbeing an indicator that it is\nnow maximally efficient.\n\n\nThe option of having more spindles for improved IO request processing \nisn't feasible in most cases.\nWith the requirement for redundancy, we end with a lot of them, needing \nan external enclosure.\nThey would have to be expensive SCSI/SAS/FC drives too, since SATA just \ndon't have the IO processing.\n\nIt will be interesting to see what happens when good performing SSD's \nappear.\n\nMeanwhile RAM is cheaper than that drive array!\n\nIt would be nice if thing like\n* The effect of updates on indexed tables\n* Fill Factor\n* reindex after restore\n\nWere mentioned in the 'performance' section of the manual, since that's \nthe part someone will go\nto when looking for a solution.\n\n\nAgain, thanks to everyone,\n\n--John\n\n", "msg_date": "Mon, 29 Sep 2008 11:22:36 +1300", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Mon, 29 Sep 2008, John Huttley wrote:\n\n> checkpoint _segments=16 is fine, going to 64 made no improvement.\n\nYou might find that it does *after* increasing shared_buffers. If the \nbuffer cache is really small, the checkpoints can't have very much work to \ndo, so their impact on performance is smaller. 
Once you've got a couple \nof hundred MB on there, the per-checkpoint overhead can be considerable.\n\n> It would be nice if thing like\n> * The effect of updates on indexed tables\n> * Fill Factor\n> * reindex after restore\n> Were mentioned in the 'performance' section of the manual, since that's \n> the part someone will go to when looking for a solution.\n\nIf you have to reindex after restore to get good performance, that means \nwhat you should do instead is drop the indexes on the table during the \nrestore and then create them once the data is there. The REINDEX is more \naimed at when the system has been running for a while and getting \nfragmented.\n\nUnfortunately most of the people who know enough about those topics to \nreally do a good treatment of them are too busy fixing slow systems to \nhave time to write about it. There are many articles on this general \ntopic trickling out at \nhttp://wiki.postgresql.org/wiki/Performance_Optimization you might find \nvaluable in addition to the manual.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 28 Sep 2008 21:40:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Mon, 29 Sep 2008, John Huttley wrote:\n>\n>> checkpoint _segments=16 is fine, going to 64 made no improvement.\n>\n> You might find that it does *after* increasing shared_buffers. If the \n> buffer cache is really small, the checkpoints can't have very much \n> work to do, so their impact on performance is smaller. Once you've \n> got a couple of hundred MB on there, the per-checkpoint overhead can \n> be considerable.\n>\nAhh bugger, I've just trashed my test setup.\nI've settled on 64Mb shared memory since I've only got 1Gb or RAM and \nthe system impact of 256M is severe.\nAlso it uses FB-DIMMS which cost arm+leg+first born\n\n\n>> It would be nice if thing like\n>> * The effect of updates on indexed tables\n>> * Fill Factor\n>> * reindex after restore\n>> Were mentioned in the 'performance' section of the manual, since \n>> that's the part someone will go to when looking for a solution.\n>\n> If you have to reindex after restore to get good performance, that \n> means what you should do instead is drop the indexes on the table \n> during the restore and then create them once the data is there. The \n> REINDEX is more aimed at when the system has been running for a while \n> and getting fragmented.\n\nI thought that the pg_dump generated files did that, so I dismissed it \ninitially. Maybe I did a data only restore into an existing schema..\n>\n> Unfortunately most of the people who know enough about those topics to \n> really do a good treatment of them are too busy fixing slow systems to \n> have time to write about it. 
There are many articles on this general \n> topic trickling out at \n> http://wiki.postgresql.org/wiki/Performance_Optimization you might \n> find valuable in addition to the manual.\n>\nAn of course this is now in mail archive!\n\n\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n>\n", "msg_date": "Mon, 29 Sep 2008 15:01:24 +1300", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Sun, Sep 28, 2008 at 8:01 PM, John Huttley <[email protected]> wrote:\n> Ahh bugger, I've just trashed my test setup.\n> I've settled on 64Mb shared memory since I've only got 1Gb or RAM and the\n> system impact of 256M is severe.\n> Also it uses FB-DIMMS which cost arm+leg+first born\n\nhttp://www.crucial.com/search/searchresults.aspx?keywords=buffered\n\nFully buffered memory there is $56.99 for a 1 Gig stick. That's\nhardly an arm and a leg. Considering many pgsql DBAs make that in 1\nto 3 hours, it's not much at all really. A lot cheaper than pulling\nyour hair out trying to make a db server run on 1 Gig.\n", "msg_date": "Sun, 28 Sep 2008 20:15:05 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "Ah yess... actually I can get the Kingston stuff locally.\nHowever at the moment I'm happily married and want to keep it that way!\n\nEverything is in pairs too. Actually its a lot cheaper than when it \nfirst came out, but still\na lot more than your corner shop DDR-2 stuff.\n\n--John\n\n\n\n\nScott Marlowe wrote:\n> On Sun, Sep 28, 2008 at 8:01 PM, John Huttley <[email protected]> wrote:\n> \n>> Ahh bugger, I've just trashed my test setup.\n>> I've settled on 64Mb shared memory since I've only got 1Gb or RAM and the\n>> system impact of 256M is severe.\n>> Also it uses FB-DIMMS which cost arm+leg+first born\n>> \n>\n> http://www.crucial.com/search/searchresults.aspx?keywords=buffered\n>\n> Fully buffered memory there is $56.99 for a 1 Gig stick. That's\n> hardly an arm and a leg. Considering many pgsql DBAs make that in 1\n> to 3 hours, it's not much at all really. A lot cheaper than pulling\n> your hair out trying to make a db server run on 1 Gig.\n>\n>\n> \n\n\n\n\n\n\nAh yess... actually I can get the Kingston stuff locally. \nHowever at the moment I'm happily married and want to keep it that way!\n\nEverything is in pairs too. Actually its a lot cheaper than when it\nfirst came out, but still\na lot more than your corner shop DDR-2 stuff.\n\n--John\n\n\n\n\nScott Marlowe wrote:\n\nOn Sun, Sep 28, 2008 at 8:01 PM, John Huttley <[email protected]> wrote:\n \n\nAhh bugger, I've just trashed my test setup.\nI've settled on 64Mb shared memory since I've only got 1Gb or RAM and the\nsystem impact of 256M is severe.\nAlso it uses FB-DIMMS which cost arm+leg+first born\n \n\n\nhttp://www.crucial.com/search/searchresults.aspx?keywords=buffered\n\nFully buffered memory there is $56.99 for a 1 Gig stick. That's\nhardly an arm and a leg. Considering many pgsql DBAs make that in 1\nto 3 hours, it's not much at all really. 
A lot cheaper than pulling\nyour hair out trying to make a db server run on 1 Gig.", "msg_date": "Mon, 29 Sep 2008 16:08:51 +1300", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "\nOn Sep 28, 2008, at 10:01 PM, John Huttley wrote:\n\n>\n>\n> Greg Smith wrote:\n>> On Mon, 29 Sep 2008, John Huttley wrote:\n>>\n>>> checkpoint _segments=16 is fine, going to 64 made no improvement.\n>>\n>> You might find that it does *after* increasing shared_buffers. If \n>> the buffer cache is really small, the checkpoints can't have very \n>> much work to do, so their impact on performance is smaller. Once \n>> you've got a couple of hundred MB on there, the per-checkpoint \n>> overhead can be considerable.\n>>\n> Ahh bugger, I've just trashed my test setup.\n\nPardon? How did you do that?\n\n-- \nDan Langille\nhttp://langille.org/\n\n\n\n\n", "msg_date": "Sun, 28 Sep 2008 23:41:03 -0400", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "I've canned the db and got rid my of data.\nI'm in the midst of doing some other benchmarking for a possible change \nto the bacula database.\n\nLoading up 1M records into a table of 60M records complete with indexes.\nIt's still going...\n\n--john\n\n\nDan Langille wrote:\n>\n> On Sep 28, 2008, at 10:01 PM, John Huttley wrote:\n>\n>>\n>>\n>> Greg Smith wrote:\n>>> On Mon, 29 Sep 2008, John Huttley wrote:\n>>>\n>>>> checkpoint _segments=16 is fine, going to 64 made no improvement.\n>>>\n>>> You might find that it does *after* increasing shared_buffers. If \n>>> the buffer cache is really small, the checkpoints can't have very \n>>> much work to do, so their impact on performance is smaller. Once \n>>> you've got a couple of hundred MB on there, the per-checkpoint \n>>> overhead can be considerable.\n>>>\n>> Ahh bugger, I've just trashed my test setup.\n>\n> Pardon? How did you do that?\n>\n", "msg_date": "Mon, 29 Sep 2008 16:47:09 +1300", "msg_from": "John Huttley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow updates, poor IO" }, { "msg_contents": "On Sun, Sep 28, 2008 at 9:08 PM, John Huttley <[email protected]> wrote:\n> Ah yess... actually I can get the Kingston stuff locally.\n> However at the moment I'm happily married and want to keep it that way!\n>\n> Everything is in pairs too. Actually its a lot cheaper than when it first\n> came out, but still\n> a lot more than your corner shop DDR-2 stuff.\n\nI don't mean to keep arguing here, but it's not any more expensive\nthan the same speed DDR-2 667MHz memory. for ECC memory memory,\nthey're almost the same price.\n\nhttp://www.crucial.com/store/listparts.aspx?model=PowerEdge%201950\n\nThat's memory for my Dell PowerEdge web server, and it's $105.99 for 2\n1Gig sticks. $56.99 * 2 = $113.98. It's only 7.99 more. I get the\npoint about not wanting to anger the wife, but maybe if you ask for it\nnice for Christmas? :)\n", "msg_date": "Sun, 28 Sep 2008 22:12:18 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow updates, poor IO" } ]
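A minimal SQL sketch of the fillfactor-plus-HOT approach discussed in this thread, assuming the 8.3 server and the "file" table from John's benchmark; the index name and the fillfactor value are illustrative guesses, not taken from the posts.

-- Leave free space on every heap page so updated rows can stay on the same
-- page and qualify for HOT (no index maintenance needed).
ALTER TABLE file SET (fillfactor = 70);   -- tune per row width; Tom suggests 50 for heavy updates

-- The new fillfactor only applies to newly written pages, so rewrite the
-- table once; CLUSTER (or VACUUM FULL plus REINDEX) does that.
CLUSTER file_pkey ON file;                -- hypothetical index name

-- Run the bulk update, then check how many row versions were HOT updates.
UPDATE file SET size = 99;
SELECT n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables
 WHERE relname = 'file';

If n_tup_hot_upd stays near zero, either the free space per page is not enough or the updated column is indexed, and a lower fillfactor is worth trying.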
[ { "msg_contents": "Hello\n\nI'm running pgsql 8.1.11 (from debian stable) on a server with 16GB RAM\n(Linux helios 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64\nGNU/Linux).\nI have a table \"tickets\" with 1 000 000 insert by month ( ~2600 each 2hours\n) (for the moment 13000000 rows for 5GB )\nand i have to extract statistics ( number of calls, number of calls less\nthan X seconds, number of news calles, number of calls from the new\ncallers, ...)\n\n\n\n1°) The server will handle max 15 queries at a time.\nSo this is my postgresql.conf\n\nmax_connections = 15\nshared_buffers = 995600 # ~1Go\ntemp_buffers = 1000\nwork_mem = 512000 # ~512Ko\nmaintenance_work_mem = 1048576 # 1Mo\n\nmax_fsm_pages = 41522880 # ~40Mo\nmax_fsm_relations = 8000 \ncheckpoint_segments = 10\ncheckpoint_timeout = 3600\neffective_cache_size = 13958643712 # 13Go\n\nstats_start_collector = on\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on\nautovacuum = off\n\nHow can i optimize the configuration?\n\n\n\n\n2°) My queries look like\nSELECT tday AS n,\nCOUNT(DISTINCT(a.appelant)) AS new_callers,\nCOUNT(a.appelant) AS new_calls\nFROM cirpacks.tickets AS a\nWHERE LENGTH(a.appelant) > 4\nAND a.service_id IN ( 95, 224, 35, 18 )\nAND a.exploitant_id = 66\nAND a.tyear = 2008\nAND a.tmonth = 08\nAND EXISTS ( SELECT 1 FROM cirpacks.clients AS b WHERE b.appelant =\na.appelant AND b.service_id IN ( 95, 224, 35, 18 ) AND b.heberge_id = 66\nHAVING to_char(MIN(b.premier_appel), 'YYYYMMDD') = to_char(a.date,\n'YYYYMMDD') )\nGROUP BY n\nORDER BY n;\n\nor select ... SUM( CASE WHEN condition THEN value ELSE 0) ... FROM\ncirpacks.tickets WHERE tyear = ... and tmonth = ... and tday = ... AND\naudiotel IN ( '...', '...' ....);\nor select ... SUM( CASE WHEN condition THEN value ELSE 0) ... FROM\ncirpacks.tickets WHERE '2007-01-01' <= date AND date <= '2008-08-31' AND\naudiotel IN ( '...', '...' ....);\n\n\nwhich indexes are the best ?\ncase 0:\nindex_0_0 (service_id, exploitant_id, palier_id, habillage_id, tweek, tday,\nthour, tmonth, tyear, length(appelant::text))\nindex_0_1 (audiotel, cat, tweek, tday, thour, tmonth, tyear,\nlength(appelant::text))\n\nor case 1\nindex_1_0 (audiotel, cat, service_id, exploitant_id, palier_id,\nhabillage_id, tweek, tday, thour, tmonth, tyear, length(appelant::text))\n\nor case 2:\nindex_2_0 (tweek, tday, thour, tmonth, tyear, length(appelant::text))\nindex_2_1 (service_id, exploitant_id, palier_id, habillage_id)\nindex_2_2 (audiotel, cat)\n\nor even (case 3)\nindex_3_0 (service_id, exploitant_id, palier_id, habillage_id, tyear,\nlength(appelant::text))\nindex_3_1 (service_id, exploitant_id, palier_id, habillage_id, tmonth,\ntyear, length(appelant::text))\nindex_3_2 (service_id, exploitant_id, palier_id, habillage_id, tday,\ntmonth, tyear, length(appelant::text))\n[...]\n\n\n\n\n", "msg_date": "Mon, 29 Sep 2008 15:25:53 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "[email protected] wrote:\n> Hello\n> \n> I'm running pgsql 8.1.11 (from debian stable) on a server with 16GB RAM\n> (Linux helios 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64\n> GNU/Linux).\n\nUnless you're committed to this version, I'd seriously look into 8.3\nfrom backports (or compiled yourself). 
I'd expect some serious\nperformance improvements for the workload you describe.\n\n> I have a table \"tickets\" with 1 000 000 insert by month ( ~2600 each 2hours\n> ) (for the moment 13000000 rows for 5GB )\n> and i have to extract statistics ( number of calls, number of calls less\n> than X seconds, number of news calles, number of calls from the new\n> callers, ...)\n\nOK, so not a lot of updates, but big aggregation queries. You might want\nto pre-summarise older data as the system gets larger.\n\n> 1°) The server will handle max 15 queries at a time.\n> So this is my postgresql.conf\n> \n> max_connections = 15\n\nWell, I'd allow 20 - just in case.\n\n> shared_buffers = 995600 # ~1Go\n> temp_buffers = 1000\n> work_mem = 512000 # ~512Ko\n\nI'd be tempted to increase work_mem by a lot, possibly even at the\nexpense of shared_buffers. You're going to be summarising large amounts\nof data so the larger the better, particularly as your database is\ncurrently smaller than RAM. Start with 5MB then try 10MB, 20MB and see\nwhat difference it makes.\n\n> maintenance_work_mem = 1048576 # 1Mo\n> \n> max_fsm_pages = 41522880 # ~40Mo\n> max_fsm_relations = 8000 \n\nSee what a vacuum full verbose says for how much free space you need to\ntrack.\n\n> checkpoint_segments = 10\n> checkpoint_timeout = 3600\n\nWith your low rate of updates shouldn't matter.\n\n> effective_cache_size = 13958643712 # 13Go\n\nAssuming that's based on what \"top\" or \"free\" say, that's fine. Don't\nforget it will need to be reduced if you increase work_mem or\nshared_buffers.\n\n> stats_start_collector = on\n> stats_command_string = on\n> stats_block_level = on\n> stats_row_level = on\n> autovacuum = off\n\nMake sure you're vacuuming if autovacuum is off.\n\n> How can i optimize the configuration?\n\nLooks reasonable, so far as you can tell from an email. Try playing with\nwork_mem though.\n\n> 2°) My queries look like\n> SELECT tday AS n,\n> COUNT(DISTINCT(a.appelant)) AS new_callers,\n> COUNT(a.appelant) AS new_calls\n> FROM cirpacks.tickets AS a\n> WHERE LENGTH(a.appelant) > 4\n> AND a.service_id IN ( 95, 224, 35, 18 )\n> AND a.exploitant_id = 66\n> AND a.tyear = 2008\n> AND a.tmonth = 08\n\nIndex on (tyear,tmonth) might pay off, or one on exploitant_id perhaps.\n\n> AND EXISTS ( SELECT 1 FROM cirpacks.clients AS b WHERE b.appelant =\n> a.appelant AND b.service_id IN ( 95, 224, 35, 18 ) AND b.heberge_id = 66\n> HAVING to_char(MIN(b.premier_appel), 'YYYYMMDD') = to_char(a.date,\n> 'YYYYMMDD') )\n\nIt looks like you're comparing two dates by converting them to text.\nThat's probably not the most efficient way of doing it. Might not be an\nissue here.\n\n> GROUP BY n\n> ORDER BY n;\n> \n> or select ... SUM( CASE WHEN condition THEN value ELSE 0) ... FROM\n> cirpacks.tickets WHERE tyear = ... and tmonth = ... and tday = ... AND\n> audiotel IN ( '...', '...' ....);\n> or select ... SUM( CASE WHEN condition THEN value ELSE 0) ... FROM\n> cirpacks.tickets WHERE '2007-01-01' <= date AND date <= '2008-08-31' AND\n> audiotel IN ( '...', '...' ....);\n> \n> \n> which indexes are the best ?\n\nThe only way to find out is to test. You'll want to run EXPLAIN after\nadding each index to see what difference it makes. 
Then you'll want to\nsee what impact this has on overall workload.\n\nMostly though, I'd try out 8.3 and see if that buys you a free\nperformance boost.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 01 Oct 2008 12:36:48 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "Thanks,\n\nUnfornatly, i can't update pgsql to 8.3 since it's not in debian stable.\n\nSo i'm going to play with work_mem & shared_buffers.\n\nWith big shared_buffers pgsql tells me \nshmget(cle=5432001, taille=11183431680, 03600).\nso i do \"echo 13183431680 > /proc/sys/kernel/shmmax\" ( 10Go + 2Go just\nin case)\n\nbut pgsql tells me again that it there's not enought shm..\nHow can i compute the go shmmax for my server ?\n\nOn Wed, 01 Oct 2008 12:36:48 +0100, Richard Huxton <[email protected]>\nwrote:\n> [email protected] wrote:\n>> Hello\n>> \n>> I'm running pgsql 8.1.11 (from debian stable) on a server with 16GB RAM\n>> (Linux helios 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64\n>> GNU/Linux).\n> \n> Unless you're committed to this version, I'd seriously look into 8.3\n> from backports (or compiled yourself). I'd expect some serious\n> performance improvements for the workload you describe.\n> \n>> I have a table \"tickets\" with 1 000 000 insert by month ( ~2600 each\n> 2hours\n>> ) (for the moment 13000000 rows for 5GB )\n>> and i have to extract statistics ( number of calls, number of calls less\n>> than X seconds, number of news calles, number of calls from the new\n>> callers, ...)\n> \n> OK, so not a lot of updates, but big aggregation queries. You might want\n> to pre-summarise older data as the system gets larger.\n> \n>> 1°) The server will handle max 15 queries at a time.\n>> So this is my postgresql.conf\n>> \n>> max_connections = 15\n> \n> Well, I'd allow 20 - just in case.\n> \n>> shared_buffers = 995600 # ~1Go\n>> temp_buffers = 1000\n>> work_mem = 512000 # ~512Ko\n> \n> I'd be tempted to increase work_mem by a lot, possibly even at the\n> expense of shared_buffers. You're going to be summarising large amounts\n> of data so the larger the better, particularly as your database is\n> currently smaller than RAM. Start with 5MB then try 10MB, 20MB and see\n> what difference it makes.\n> \n>> maintenance_work_mem = 1048576 # 1Mo\n>> \n>> max_fsm_pages = 41522880 # ~40Mo\n>> max_fsm_relations = 8000 \n> \n> See what a vacuum full verbose says for how much free space you need to\n> track.\n> \n>> checkpoint_segments = 10\n>> checkpoint_timeout = 3600\n> \n> With your low rate of updates shouldn't matter.\n> \n>> effective_cache_size = 13958643712 # 13Go\n> \n> Assuming that's based on what \"top\" or \"free\" say, that's fine. Don't\n> forget it will need to be reduced if you increase work_mem or\n> shared_buffers.\n> \n>> stats_start_collector = on\n>> stats_command_string = on\n>> stats_block_level = on\n>> stats_row_level = on\n>> autovacuum = off\n> \n> Make sure you're vacuuming if autovacuum is off.\n> \n>> How can i optimize the configuration?\n> \n> Looks reasonable, so far as you can tell from an email. 
Try playing with\n> work_mem though.\n> \n>> 2°) My queries look like\n>> SELECT tday AS n,\n>> COUNT(DISTINCT(a.appelant)) AS new_callers,\n>> COUNT(a.appelant) AS new_calls\n>> FROM cirpacks.tickets AS a\n>> WHERE LENGTH(a.appelant) > 4\n>> AND a.service_id IN ( 95, 224, 35, 18 )\n>> AND a.exploitant_id = 66\n>> AND a.tyear = 2008\n>> AND a.tmonth = 08\n> \n> Index on (tyear,tmonth) might pay off, or one on exploitant_id perhaps.\n> \n>> AND EXISTS ( SELECT 1 FROM cirpacks.clients AS b WHERE b.appelant =\n>> a.appelant AND b.service_id IN ( 95, 224, 35, 18 ) AND b.heberge_id = 66\n>> HAVING to_char(MIN(b.premier_appel), 'YYYYMMDD') = to_char(a.date,\n>> 'YYYYMMDD') )\n> \n> It looks like you're comparing two dates by converting them to text.\n> That's probably not the most efficient way of doing it. Might not be an\n> issue here.\n> \n>> GROUP BY n\n>> ORDER BY n;\n>> \n>> or select ... SUM( CASE WHEN condition THEN value ELSE 0) ... FROM\n>> cirpacks.tickets WHERE tyear = ... and tmonth = ... and tday = ... AND\n>> audiotel IN ( '...', '...' ....);\n>> or select ... SUM( CASE WHEN condition THEN value ELSE 0) ... FROM\n>> cirpacks.tickets WHERE '2007-01-01' <= date AND date <= '2008-08-31' AND\n>> audiotel IN ( '...', '...' ....);\n>> \n>> \n>> which indexes are the best ?\n> \n> The only way to find out is to test. You'll want to run EXPLAIN after\n> adding each index to see what difference it makes. Then you'll want to\n> see what impact this has on overall workload.\n> \n> Mostly though, I'd try out 8.3 and see if that buys you a free\n> performance boost.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n>\n\n", "msg_date": "Thu, 2 Oct 2008 10:00:36 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "[email protected] wrote:\n> Thanks,\n> \n> Unfornatly, i can't update pgsql to 8.3 since it's not in debian stable.\n\nThat's why backports.org was invented :-)\nOr does can't mean \"not allowed to\"?\n\n> So i'm going to play with work_mem & shared_buffers.\n> \n> With big shared_buffers pgsql tells me \n> shmget(cle=5432001, taille=11183431680, 03600).\n> so i do \"echo 13183431680 > /proc/sys/kernel/shmmax\" ( 10Go + 2Go just\n> in case)\n> \n> but pgsql tells me again that it there's not enought shm..\n> How can i compute the go shmmax for my server ?\n\nI'm not seeing anything terribly wrong there. 
Are you hitting a limit\nwith shmall?\n\nOh - and I'm not sure there's much point in having more shared-buffers\nthan you have data.\n\nTry much larger work_mem first, I think that's the biggest gain for you.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 02 Oct 2008 09:29:19 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "Richard Huxton wrote:\n> [email protected] wrote:\n>> Thanks,\n>>\n>> Unfornatly, i can't update pgsql to 8.3 since it's not in debian stable.\n> \n> That's why backports.org was invented :-)\n> Or does can't mean \"not allowed to\"?\n\n\nWell, running production servers from backports can be a risky \nproposition too, and can land you in situations like the one discussed \nin \"Debian packages for Postgres 8.2\" from the General list.\n\n\n-- \nTommy Gildseth\n", "msg_date": "Thu, 02 Oct 2008 10:36:50 +0200", "msg_from": "Tommy Gildseth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "\nOn 2. Oct, 2008, at 10:00, <[email protected]> <[email protected]> wrote:\n> Unfornatly, i can't update pgsql to 8.3 since it's not in debian \n> stable.\n\nDid you consider using backport packages (http://www.backports.org) for\nDebian Etch? They are providing postgresql v.8.3.3 packages for Debian \nEtch.\n\nCheers.\n\nPS: We are also running backported postgresql packages using Debian Etch\non our production servers without any problems.\n", "msg_date": "Thu, 2 Oct 2008 10:37:30 +0200", "msg_from": "Thomas Spreng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "Tommy Gildseth wrote:\n> Richard Huxton wrote:\n>> [email protected] wrote:\n>>> Thanks,\n>>>\n>>> Unfornatly, i can't update pgsql to 8.3 since it's not in debian stable.\n>>\n>> That's why backports.org was invented :-)\n>> Or does can't mean \"not allowed to\"?\n> \n> Well, running production servers from backports can be a risky\n> proposition too, and can land you in situations like the one discussed\n> in \"Debian packages for Postgres 8.2\" from the General list.\n\nWell, there's a reason why \"stable\" is a popular choice for production\nservers. I must admit that I build from source for my PostgreSQL\npackages (because I care which version I run). 
I was reading one of the\nPerl fellows recommending the same.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 02 Oct 2008 10:07:43 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" }, { "msg_contents": "I played with work_mem and setting work_mem more than 256000 do not change\nthe performance.\n\nI try to upgrade to 8.3 using etch-backports but it's a new install not an\nupgrade.\nSo i have to create users, permissions, import data again, it scared me so\ni want to find another solutions first.\nBut now i'll try 8.3\n\n\nOn Thu, 02 Oct 2008 10:36:50 +0200, Tommy Gildseth\n<[email protected]> wrote:\n> Richard Huxton wrote:\n>> [email protected] wrote:\n>>> Thanks,\n>>>\n>>> Unfornatly, i can't update pgsql to 8.3 since it's not in debian\n> stable.\n>>\n>> That's why backports.org was invented :-)\n>> Or does can't mean \"not allowed to\"?\n> \n> \n> Well, running production servers from backports can be a risky\n> proposition too, and can land you in situations like the one discussed\n> in \"Debian packages for Postgres 8.2\" from the General list.\n> \n> \n> --\n> Tommy Gildseth\n\n", "msg_date": "Thu, 2 Oct 2008 17:14:34 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: dedicated server & postgresql 8.1 conf tunning" } ]
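A sketch of the kernel and postgresql.conf arithmetic behind the shmget failure discussed above, assuming Linux with 4 kB pages and 8.1's unit conventions (shared_buffers and effective_cache_size count 8 kB buffers, work_mem is in kB); the figures are illustrative, sized for roughly 5 GB of shared buffers on this 16 GB machine, not copied from the thread.

# /etc/sysctl.conf  -- apply with: sysctl -p
kernel.shmmax = 6000000000    # bytes; one segment must hold shared_buffers plus some overhead
kernel.shmall = 1500000       # 4 kB pages; total shared memory, keep it >= shmmax / 4096

# postgresql.conf (8.1)
shared_buffers = 655360          # 8 kB buffers => about 5 GB
effective_cache_size = 1700000   # 8 kB pages   => about 13 GB (not bytes)
work_mem = 65536                 # kB => 64 MB per sort/hash; 512000 would mean ~500 MB, not 512 kB

Raising shmmax alone is often not enough because shmall, which is counted in pages rather than bytes, still caps the total; that is the "not enough shm" symptom described above.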
[ { "msg_contents": "I have two identical databases that run the same query each morning. Starting this morning, something caused the first db to start using a different execution plan for the query, resulting in much worse performance. I've have tried several things this morning, but I am currently stumped on what would be causing the different execution plans.\n\nThe query and the results of the explain analyze on the two db's:\n\ndb1=> explain analyze \nselect\n t1.bn,\n t2.mu,\n t1.nm,\n t1.root,\n t1.suffix,\n t1.type\nfrom\n t1,\n t2\nwhere\n t2.eff_dt = current_date\n and t1.active = true\n and t1.bn = t2.sn;\n\nThe slower plan used on db1:\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=145.12..38799.61 rows=7876 width=47) (actual time=6.494..352.166 rows=8437 loops=1)\n -> Bitmap Heap Scan on t2 (cost=145.12..19464.74 rows=10898 width=22) (actual time=6.472..22.684 rows=12204 loops=1)\n Recheck Cond: (eff_dt = ('now'::text)::date)\n -> Bitmap Index Scan on t2_nu1 (cost=0.00..142.40 rows=10898 width=0) (actual time=4.013..4.013 rows=24482 loops=1)\n Index Cond: (eff_dt = ('now'::text)::date)\n -> Index Scan using t1_uc2 on t1 (cost=0.00..1.76 rows=1 width=32) (actual time=0.012..0.026 rows=1 loops=12204)\n Index Cond: ((t1.bn)::text = (t2.sn)::text)\n Filter: active\n Total runtime: 353.629 ms\n(9 rows)\n\nTime: 354.795 ms\n\n\nAnd the faster plan from db2:\n\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=21371.63..21720.78 rows=7270 width=47) (actual time=60.412..80.865 rows=8437 loops=1)\n Merge Cond: (\"outer\".\"?column6?\" = \"inner\".\"?column3?\")\n -> Sort (cost=8988.56..9100.55 rows=44794 width=32) (actual time=30.685..33.370 rows=8438 loops=1)\n Sort Key: (t1.bn)::text\n -> Seq Scan on t1 (cost=0.00..5528.00 rows=44794 width=32) (actual time=0.008..18.280 rows=8439 loops=1)\n Filter: active\n -> Sort (cost=12383.07..12409.32 rows=10500 width=22) (actual time=29.718..33.515 rows=12204 loops=1)\n Sort Key: (t2.sn)::text\n -> Index Scan using t2_nu1 on t2 (cost=0.00..11681.77 rows=10500 width=22) (actual time=0.052..13.295 rows=12204 loops=1)\n Index Cond: (eff_dt = ('now'::text)::date)\n Total runtime: 83.385 ms\n(11 rows)\n\nt2.eff_dt is defined as a date, t1.active is a boolean, all other fields are varchar. Table t1 has a unique index (uc2) on field bn and a second unique index (uc3) on fields (root, suffix). Table t2 has a unique index (uc1) on (sn, eff_dt), and a non-unique index (nu1) on eff_dt.\n\nTable t1 has 12204 rows. Table t2 has 7.1M rows, 12204 of which have eff_dt = current_date. \n\nBoth database have autovacuum turned on, and both have been vacuumed and analyzed in the last 24 hours.\n\nAny ideas as to what could the first db to opt for the slower subquery rather than the merge?\n\nThanks in advance.\n\n\n\n \nI have two identical databases that run the same query each morning.  Starting this morning, something caused the first db to start using a different execution plan for the query, resulting in much worse performance.  
I've have tried several things this morning, but I am currently stumped on what would be causing the different execution plans.The query and the results of the explain analyze on the two db's:db1=> explain analyze select    t1.bn,    t2.mu,    t1.nm,    t1.root,    t1.suffix,    t1.typefrom     t1,     t2where    t2.eff_dt = current_date   \n and t1.active = true    and t1.bn = t2.sn;The slower plan used on db1:                                                               QUERY PLAN                                                               \n ----------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=145.12..38799.61 rows=7876 width=47) (actual time=6.494..352.166 rows=8437 loops=1)   ->  Bitmap Heap Scan on t2  (cost=145.12..19464.74 rows=10898 width=22) (actual time=6.472..22.684 rows=12204 loops=1)         Recheck Cond: (eff_dt = ('now'::text)::date)         ->  Bitmap Index Scan on t2_nu1  (cost=0.00..142.40 rows=10898 width=0) (actual time=4.013..4.013 rows=24482 loops=1)               Index Cond: (eff_dt = ('now'::text)::date)   ->  Index Scan using t1_uc2 on t1  (cost=0.00..1.76 rows=1 width=32) (actual time=0.012..0.026 rows=1\n loops=12204)         Index Cond: ((t1.bn)::text = (t2.sn)::text)         Filter: active Total runtime: 353.629 ms(9 rows)Time: 354.795 msAnd the faster plan from db2:                                                                         QUERY\n PLAN                                                                         ------------------------------------------------------------------------------------------------------------------------------------------------------------ Merge Join  (cost=21371.63..21720.78 rows=7270 width=47) (actual time=60.412..80.865 rows=8437 loops=1)   Merge Cond: (\"outer\".\"?column6?\" = \"inner\".\"?column3?\")   ->  Sort  (cost=8988.56..9100.55 rows=44794 width=32) (actual time=30.685..33.370 rows=8438\n loops=1)         Sort Key: (t1.bn)::text         ->  Seq Scan on t1  (cost=0.00..5528.00 rows=44794 width=32) (actual time=0.008..18.280 rows=8439 loops=1)               Filter: active   ->  Sort  (cost=12383.07..12409.32 rows=10500 width=22) (actual time=29.718..33.515 rows=12204 loops=1)         Sort Key: (t2.sn)::text         ->  Index Scan using t2_nu1 on t2  (cost=0.00..11681.77 rows=10500 width=22) (actual time=0.052..13.295 rows=12204 loops=1)               Index Cond: (eff_dt = ('now'::text)::date) Total runtime: 83.385 ms(11 rows)t2.eff_dt is defined as\n a date, t1.active is a boolean, all other fields are varchar.  Table t1 has a unique index (uc2) on field bn and a second unique index (uc3) on fields (root, suffix).  Table t2 has a unique index (uc1) on (sn, eff_dt), and a non-unique index (nu1) on eff_dt.Table t1 has 12204 rows.  Table t2 has 7.1M rows, 12204 of which have eff_dt = current_date.  Both database have autovacuum turned on, and both have been vacuumed and analyzed in the last 24 hours.Any ideas as to what could the first db to opt for the slower subquery rather than the merge?Thanks in advance.", "msg_date": "Mon, 29 Sep 2008 09:00:02 -0700 (PDT)", "msg_from": "Doug Eck <[email protected]>", "msg_from_op": true, "msg_subject": "Identical DB's, different execution plans" }, { "msg_contents": "Doug Eck <[email protected]> writes:\n> Any ideas as to what could the first db to opt for the slower subquery rather than the merge?\n\nNot from the information given. 
Presumably db1 thinks that the\nmergejoin plan would be slower, but why it thinks that isn't clear yet.\nTry setting enable_nestloop = off (and enable_hashjoin = off if it then\nwants a hashjoin) and then post the EXPLAIN ANALYZE results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Sep 2008 12:42:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical DB's, different execution plans " } ]
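Spelled out, the diagnostic session Tom is asking for looks like this, using the query from the original post; the SETs only affect the current session and are for comparing plans, not something to leave enabled.

SET enable_nestloop = off;
SET enable_hashjoin = off;   -- only needed if the planner then switches to a hash join

EXPLAIN ANALYZE
SELECT t1.bn, t2.mu, t1.nm, t1.root, t1.suffix, t1.type
  FROM t1, t2
 WHERE t2.eff_dt = current_date
   AND t1.active = true
   AND t1.bn = t2.sn;

RESET enable_nestloop;
RESET enable_hashjoin;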
[ { "msg_contents": "Setting enable_nestloop = off did result in a hash join, so I also set enable_hashjoin = off.\n\nThe new plan from the slower db:\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=20195.54..46442.99 rows=7876 width=47) (actual time=136.531..478.708 rows=8437 loops=1)\n Merge Cond: ((t1.bn)::text = \"inner\".\"?column3?\")\n -> Index Scan using t1_uc2 on t1 (cost=0.00..25604.74 rows=204906 width=32) (actual time=0.061..327.285 rows=8438 loops=1)\n Filter: active\n -> Sort (cost=20195.54..20222.79 rows=10898 width=22) (actual time=136.461..138.621 rows=12204 loops=1)\n Sort Key: (t2.sn)::text\n -> Bitmap Heap Scan on t2 (cost=145.12..19464.74 rows=10898 width=22) (actual time=7.580..120.144 rows=12204 loops=1)\n Recheck Cond: (eff_dt = ('now'::text)::date)\n -> Bitmap Index Scan on t2_nu1 (cost=0.00..142.40 rows=10898 width=0) (actual time=4.964..4.964 rows=24483 loops=1)\n Index Cond: (eff_dt = ('now'::text)::date)\n Total runtime: 480.344 ms\n(11 rows)\n\nAnd the faster one:\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=21371.63..21720.78 rows=7270 width=47) (actual time=60.435..80.604 rows=8437 loops=1)\n Merge Cond: (\"outer\".\"?column6?\" = \"inner\".\"?column3?\")\n -> Sort (cost=8988.56..9100.55 rows=44794 width=32) (actual time=30.498..33.093 rows=8438 loops=1)\n Sort Key: (t1.bn)::text\n -> Seq Scan on t1 (cost=0.00..5528.00 rows=44794 width=32) (actual time=0.010..17.950 rows=8439 loops=1)\n Filter: active\n -> Sort (cost=12383.07..12409.32 rows=10500 width=22) (actual time=29.928..33.658 rows=12204 loops=1)\n Sort Key: (t2.sn)::text\n -> Index Scan using t2_nu1 on t2 (cost=0.00..11681.77 rows=10500 width=22) (actual time=0.062..13.356 rows=12204 loops=1)\n Index Cond: (eff_dt = ('now'::text)::date)\n Total runtime: 83.054 ms\n(11 rows)\n\nAnd the query again:\n\nexplain analyze \nselect\n t1.bn,\n t2.mu,\n t1.nm,\n t1.root,\n t1.suffix,\n t1.type\nfrom\n t1,\n t2\nwhere\n t2.eff_dt = current_date\n and t1.active = true\n and t1.bn = t2.sn;\n\nThanks.\n\n\n\n----- Original Message ----\nFrom: Tom Lane <[email protected]>\nTo: Doug Eck <[email protected]>\nCc: [email protected]\nSent: Monday, September 29, 2008 11:42:01 AM\nSubject: Re: [PERFORM] Identical DB's, different execution plans \n\nDoug Eck <[email protected]> writes:\n> Any ideas as to what could the first db to opt for the slower subquery rather than the merge?\n\nNot from the information given. 
regards, tom lane", "msg_date": "Mon, 29 Sep 2008 11:18:45 -0700 (PDT)", "msg_from": "Doug Eck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Identical DB's, different execution plans" }, { "msg_contents": "Doug Eck <[email protected]> writes:\n> The new plan from the slower db:\n\n> -> Index Scan using t1_uc2 on t1 (cost=0.00..25604.74 rows=204906 width=32) (actual time=0.061..327.285 rows=8438 loops=1)\n> Filter: active\n\nThis seems a bit fishy. In the first place, with such a simple filter\ncondition it shouldn't be that far off on the rowcount estimate. In\nthe second place, the cost estimate is more than twice what the other\nserver estimates to do a seqscan and sort of the same data, and the\nrowcount estimate is five times as much. So there's something really\nsignificantly different about the t1 tables in the two cases.\n\nThe first thing you ought to do is to look at the pg_class.relpages\nand reltuples entries for t1 in both databases. What I am suspecting is\nthat for some reason the \"slow\" db has suffered a lot of bloat in that\ntable, leading to a corresponding increase in the cost of a seqscan.\nIf so, a VACUUM FULL or CLUSTER should fix it, though you'll next need\nto look into why routine vacuumings weren't happening. (It looks like\nt2 may be a bit bloated as well.)\n\nIf that's not it, we'll need to probe deeper ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Sep 2008 19:20:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical DB's, different execution plans " } ]
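A sketch of the comparison Tom describes, run on both databases; the table names are the ones from the thread, and the rewrite step is only for the copy whose page count is out of proportion to its row count (CLUSTER on a suitable index is the alternative he mentions).

-- similar reltuples but a much larger relpages on one server points at bloat
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('t1', 't2');

-- on the bloated copy only
VACUUM FULL VERBOSE t1;
REINDEX TABLE t1;   -- old-style VACUUM FULL can leave the indexes bloated
ANALYZE t1;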
[ { "msg_contents": "Tom,\n\nYou nailed it. The t1 table was using 9600 relpages versus 410 after the vacuum full. The two databases are now showing similar execution plans and times.\n\nThanks for your help. It is greatly appreciated.\n\nDoug Eck\n\n\n\n----- Original Message ----\nFrom: Tom Lane <[email protected]>\nTo: Doug Eck <[email protected]>\nCc: [email protected]\nSent: Monday, September 29, 2008 6:20:20 PM\nSubject: Re: [PERFORM] Identical DB's, different execution plans \n\nDoug Eck <[email protected]> writes:\n> The new plan from the slower db:\n\n> -> Index Scan using t1_uc2 on t1 (cost=0.00..25604.74 rows=204906 width=32) (actual time=0.061..327.285 rows=8438 loops=1)\n> Filter: active\n\nThis seems a bit fishy. In the first place, with such a simple filter\ncondition it shouldn't be that far off on the rowcount estimate. In\nthe second place, the cost estimate is more than twice what the other\nserver estimates to do a seqscan and sort of the same data, and the\nrowcount estimate is five times as much. So there's something really\nsignificantly different about the t1 tables in the two cases.\n\nThe first thing you ought to do is to look at the pg_class.relpages\nand reltuples entries for t1 in both databases. What I am suspecting is\nthat for some reason the \"slow\" db has suffered a lot of bloat in that\ntable, leading to a corresponding increase in the cost of a seqscan.\nIf so, a VACUUM FULL or CLUSTER should fix it, though you'll next need\nto look into why routine vacuumings weren't happening. (It looks like\nt2 may be a bit bloated as well.)\n\nIf that's not it, we'll need to probe deeper ...\n\n regards, tom lane\n\n\n\n \nTom,You nailed it.  The t1 table was using 9600 relpages versus 410 after the vacuum full.  The two databases are now showing similar execution plans and times.Thanks for your help.  It is greatly appreciated.Doug Eck----- Original Message ----From: Tom Lane <[email protected]>To: Doug Eck <[email protected]>Cc: [email protected]: Monday, September 29, 2008 6:20:20 PMSubject: Re: [PERFORM] Identical DB's, different execution plans \nDoug Eck <[email protected]> writes:> The new plan from the slower db:>    ->  Index Scan using t1_uc2 on t1  (cost=0.00..25604.74 rows=204906 width=32) (actual time=0.061..327.285 rows=8438 loops=1)>          Filter: activeThis seems a bit fishy.  In the first place, with such a simple filtercondition it shouldn't be that far off on the rowcount estimate.  Inthe second place, the cost estimate is more than twice what the otherserver estimates to do a seqscan and sort of the same data, and therowcount estimate is five times as much.  So there's something reallysignificantly different about the t1 tables in the two cases.The first thing you ought to do is to look at the pg_class.relpagesand reltuples entries for t1 in both databases.  What I am\n suspecting isthat for some reason the \"slow\" db has suffered a lot of bloat in thattable, leading to a corresponding increase in the cost of a seqscan.If so, a VACUUM FULL or CLUSTER should fix it, though you'll next needto look into why routine vacuumings weren't happening.  (It looks liket2 may be a bit bloated as well.)If that's not it, we'll need to probe deeper ...            regards, tom lane", "msg_date": "Mon, 29 Sep 2008 17:17:44 -0700 (PDT)", "msg_from": "Doug Eck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Identical DB's, different execution plans" } ]
[ { "msg_contents": "Hi,\n\nWe have a table called \"table1\" which contains around 638725448 records.\nWe created a subset of this table and named it as \"new_table1\" which has\naround 120107519 records.\n\n\"new_table1\" is 18% of the the whole \"table1\".\n\nIf we fire the below queries we are not finding any drastic performance\ngain.\n\n\nQuery 1 :\nSELECT SUM(table1.idlv), SUM(table1.cdlv)\n FROM table1, table2 CROSS JOIN table3\n WHERE table1.dk = table2.k\n AND table2.dt BETWEEN '2008.08.01' AND '2008.08.20'\n AND table1.nk = table3.k\n AND table3.id = 999 ;\n\nTime taken :\n9967.051 ms\n9980.021 ms\n\n\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=647373.04..647373.05 rows=1 width=16) (actual\ntime=9918.010..9918.010 rows=1 loops=1)\n -> Nested Loop (cost=186.26..647160.32 rows=42543 width=16) (actual\ntime=655.832..6622.011 rows=5120582 loops=1)\n -> Nested Loop (cost=0.00..17.42 rows=30 width=8) (actual\ntime=0.024..0.164 rows=31 loops=1)\n -> Index Scan using ridx on table3 (cost=0.00..8.27 rows=1\nwidth=4) (actual time=0.012..0.014 rows=1 loops=1)\n Index Cond: (id = 999)\n -> Index Scan using rdtidx on table2 (cost=0.00..8.85\nrows=30 width=4) (actual time=0.008..0.110 rows=31 loops=1)\n Index Cond: ((table2.dt >= '2008-08-01\n00:00:00'::timestamp without time zone) AND (table2.dt <= '2008-08-20\n00:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on table1 (cost=186.26..21489.55 rows=5459\nwidth=24) (actual time=57.053..170.657 rows=165180 loops=31)\n Recheck Cond: ((table1.nk = table3.k) AND (table1.dk =\ntable2.k))\n -> Bitmap Index Scan on rndtidx (cost=0.00..184.89\nrows=5459 width=0) (actual time=47.855..47.855 rows=165180 loops=31)\n Index Cond: ((table1.nk = table3.k) AND (table1.dk =\ntable2.k))\n Total runtime: 9918.118 ms\n(12 rows)\n\nTime: 9967.051 ms\n\n\n\nQuery 2 :\n\nSELECT SUM(new_table1.idlv) , SUM(new_table1.cdlv)\n FROM new_table1, table2 CROSS JOIN table3\n WHERE new_table1.dk = table2.k\n AND table2.dt BETWEEN '2008.08.01' AND '2008.08.20'\n AND new_table1.nk = table3.k\n AND table3.id = 999 ;\n\nTime taken :\n8225.308 ms\n8500.728 ms\n\n\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=414372.59..414372.60 rows=1 width=16) (actual\ntime=8224.300..8224.300 rows=1 loops=1)\n -> Nested Loop (cost=0.00..414246.81 rows=25155 width=16) (actual\ntime=19.578..4922.680 rows=5120582 loops=1)\n -> Nested Loop (cost=0.00..17.42 rows=30 width=8) (actual\ntime=0.034..0.125 rows=31 loops=1)\n -> Index Scan using ridx on table3 (cost=0.00..8.27 rows=1\nwidth=4) (actual time=0.020..0.022 rows=1 loops=1)\n Index Cond: (id = 999)\n -> Index Scan using rdtidx on table2 (cost=0.00..8.85\nrows=30 width=4) (actual time=0.010..0.064 rows=31 loops=1)\n Index Cond: ((table2.dt >= '2008-08-01\n00:00:00'::timestamp without time zone) AND (table2.dt <= '2008-08-20\n00:00:00'::timestamp without time zone))\n -> Index Scan using rndtidx on new_table1 (cost=0.00..13685.26\nrows=8159 width=24) (actual time=0.648..117.415 rows=165180 loops=31)\n Index Cond: ((new_table1.nk = table3.k) AND (new_table1.dk =\ntable2.k))\n Total runtime: 8224.386 ms\n(10 rows)\n\nTime: 8225.308 ms\n\nWe have set join_collapse_limit = 8, from_collapse_limit = 
1.\n\n-- \nRegards\nGauri\n\nHi,We have a table called \"table1\" which contains around 638725448 records.We created a subset of this table and named it as \"new_table1\" which has around 120107519 records.\n\"new_table1\" is 18% of the the whole \"table1\".If we fire the below queries we are not finding any drastic performance gain.Query 1 :SELECT SUM(table1.idlv), SUM(table1.cdlv)\n       FROM table1, table2 CROSS JOIN table3        WHERE table1.dk = table2.k               AND table2.dt BETWEEN '2008.08.01' AND '2008.08.20'              AND table1.nk = table3.k \n             AND table3.id = 999 ;Time taken : 9967.051 ms 9980.021 ms                                                                                       QUERY PLAN                            \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=647373.04..647373.05 rows=1 width=16) (actual time=9918.010..9918.010 rows=1 loops=1)\n   ->  Nested Loop  (cost=186.26..647160.32 rows=42543 width=16) (actual time=655.832..6622.011 rows=5120582 loops=1)         ->  Nested Loop  (cost=0.00..17.42 rows=30 width=8) (actual time=0.024..0.164 rows=31 loops=1)\n               ->  Index Scan using ridx on table3  (cost=0.00..8.27 rows=1 width=4) (actual time=0.012..0.014 rows=1 loops=1)                     Index Cond: (id = 999)               ->  Index Scan using rdtidx on table2  (cost=0.00..8.85 rows=30 width=4) (actual time=0.008..0.110 rows=31 loops=1)\n                     Index Cond: ((table2.dt >= '2008-08-01 00:00:00'::timestamp without time zone) AND (table2.dt <= '2008-08-20 00:00:00'::timestamp without time zone))         ->  Bitmap Heap Scan on table1  (cost=186.26..21489.55 rows=5459 width=24) (actual time=57.053..170.657 rows=165180 loops=31)\n               Recheck Cond: ((table1.nk = table3.k) AND (table1.dk = table2.k))               ->  Bitmap Index Scan on rndtidx  (cost=0.00..184.89 rows=5459 width=0) (actual time=47.855..47.855 rows=165180 loops=31)\n                     Index Cond: ((table1.nk = table3.k) AND (table1.dk = table2.k)) Total runtime: 9918.118 ms(12 rows)Time: 9967.051 msQuery 2 :SELECT SUM(new_table1.idlv) , SUM(new_table1.cdlv)\n       FROM new_table1, table2 CROSS JOIN table3        WHERE new_table1.dk = table2.k               AND table2.dt BETWEEN '2008.08.01' AND '2008.08.20'              AND new_table1.nk = table3.k \n             AND table3.id = 999 ;Time taken :8225.308 ms8500.728 ms                                                                                       QUERY PLAN                            \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=414372.59..414372.60 rows=1 width=16) (actual time=8224.300..8224.300 rows=1 loops=1)\n   ->  Nested Loop  (cost=0.00..414246.81 rows=25155 width=16) (actual time=19.578..4922.680 rows=5120582 loops=1)         ->  Nested Loop  (cost=0.00..17.42 rows=30 width=8) (actual time=0.034..0.125 rows=31 loops=1)\n               ->  Index Scan using ridx on table3  (cost=0.00..8.27 rows=1 width=4) (actual time=0.020..0.022 rows=1 loops=1)                     Index Cond: (id = 999)               ->  Index Scan using rdtidx on table2  (cost=0.00..8.85 rows=30 width=4) (actual time=0.010..0.064 rows=31 loops=1)\n                     Index Cond: ((table2.dt >= 
'2008-08-01 00:00:00'::timestamp without time zone) AND (table2.dt <= '2008-08-20 00:00:00'::timestamp without time zone))         ->  Index Scan using rndtidx on new_table1  (cost=0.00..13685.26 rows=8159 width=24) (actual time=0.648..117.415 rows=165180 loops=31)\n               Index Cond: ((new_table1.nk = table3.k) AND (new_table1.dk = table2.k)) Total runtime: 8224.386 ms(10 rows)Time: 8225.308 msWe have set join_collapse_limit = 8, from_collapse_limit = 1.\n-- RegardsGauri", "msg_date": "Wed, 1 Oct 2008 16:04:28 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Confusing Query Performance" }, { "msg_contents": "On Wed, 1 Oct 2008, Gauri Kanekar wrote:\n> \"new_table1\" is 18% of the the whole \"table1\".\n\n>    ->  Nested Loop  (cost=186.26..647160.32 rows=42543 width=16) (actual time=655.832..6622.011 rows=5120582 loops=1)\n\n>    ->  Nested Loop  (cost=0.00..414246.81 rows=25155 width=16) (actual time=19.578..4922.680 rows=5120582 loops=1)\n\nThe new table may be that much smaller than the old table, but you're \nselecting exactly the same amount of data from it. The data is fetched by \nindexes, which means random access, so the overall size of the data that \nyou don't fetch doesn't make any difference.\n\nMatthew\n\n-- \nExistence is a convenient concept to designate all of the files that an\nexecutable program can potentially process. -- Fortran77 standard", "msg_date": "Wed, 1 Oct 2008 11:49:54 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusing Query Performance" }, { "msg_contents": "On Wednesday 01 October 2008 03:34, Gauri Kanekar wrote:\n>    ->  Nested Loop  (cost=186.26..647160.32 rows=42543 width=16) (actual\n> time=655.832..6622.011 rows=5120582 loops=1)\n\nThat nested loop estimate is off by 100x, which is why the DB is using a \nslow nested loop for a large amount of data. I'd try increasing your \nstatistics collection, analyze, and re-run the query.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL\nSan Francisco\n", "msg_date": "Wed, 1 Oct 2008 13:12:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusing Query Performance" } ]
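The practical upshot of this thread is that the subset table cannot help while the planner's nested-loop row estimate is off by roughly 100x, so the knob to turn is the per-column statistics, as Josh Berkus suggests. A minimal, hedged sketch of that suggestion, assuming the join columns are table1.nk and table1.dk as in the query above; the statistics target of 1000 is only an illustrative value, not one taken from the thread:

ALTER TABLE table1 ALTER COLUMN nk SET STATISTICS 1000;
ALTER TABLE table1 ALTER COLUMN dk SET STATISTICS 1000;
ANALYZE table1;

-- re-check whether the estimated rows now track the ~5.1 million actually returned
EXPLAIN ANALYZE
SELECT SUM(table1.idlv), SUM(table1.cdlv)
  FROM table1, table2 CROSS JOIN table3
 WHERE table1.dk = table2.k
   AND table2.dt BETWEEN '2008.08.01' AND '2008.08.20'
   AND table1.nk = table3.k
   AND table3.id = 999;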
[ { "msg_contents": "Hi!\n\n(The question is why this simple select takes me 20 minutes to run...)\n\nI have two tables with address data and result data from two different runs\nof two different geocoding engines. I want the count of result data\ndifferences when the output address data matches.\n\nIn essence, I want:\nEngine 1 engine 2\nId = id\nAddr = Addr\nCity = City\nZip = Zip\n\nAnd then\nResult != Result\n\nWhen I do g.id=m.id, performance is fine. When I add\ng.outaddress=m.out_address, performance is fine. But when I add\ng.city=m.out_city, the query goes from ~10s to ~20 MINUTES for some reason.\nAny ideas why? If I do a subselect rather than just g.outcity=m.out_city,\nit's fine, but if I change it back to the former, it goes back to 20\nminutes. The queries and explains are below.\n\nresultcodes=# explain analyze select g.outresultcode,m.resultcode,count(*)\nfrom geocoderoutput g,mmoutput m where g.id=m.id and\ng.outresultcode<>m.resultcode and g.outaddress=m.out_address and\ng.outcity=m.out_city group by g.outresultcode,m.resultcode order by count(*)\ndesc;\n QUERY\nPLAN\n\n----------------------------------------------------------------------------\n--------------------------------------------------------\n----------------\n Sort (cost=64772.08..64772.09 rows=1 width=22) (actual\ntime=1194603.363..1194604.099 rows=515 loops=1)\n Sort Key: (count(*))\n Sort Method: quicksort Memory: 53kB\n -> HashAggregate (cost=64772.06..64772.07 rows=1 width=22) (actual\ntime=1194601.374..1194602.316 rows=515 loops=1)\n -> Hash Join (cost=24865.31..64772.05 rows=1 width=22) (actual\ntime=373146.994..1194475.482 rows=52179 loops=1)\n Hash Cond: ((m.id = g.id) AND ((m.out_address)::text =\n(g.outaddress)::text) AND ((m.out_city)::text = (g.outcity)::text))\n Join Filter: (g.outresultcode <> m.resultcode)\n -> Seq Scan on mmoutput m (cost=0.00..15331.84 rows=502884\nwidth=42) (actual time=0.010..1324.974 rows=502884 loops=1)\n -> Hash (cost=11644.84..11644.84 rows=502884 width=41)\n(actual time=370411.043..370411.043 rows=502704 loops=1)\n -> Seq Scan on geocoderoutput g (cost=0.00..11644.84\nrows=502884 width=41) (actual time=0.010..1166.141 rows=502884 loops=1)\n Total runtime: 1194605.011 ms\n(11 rows)\n\nresultcodes=# explain analyze select g.outresultcode,m.resultcode,count(*)\nfrom geocoderoutput g,mmoutput m where g.id=m.id and\ng.outresultcode<>m.resultcode and g.outaddress=m.out_address and\ng.outcity=(select out_city from mmoutput where id=g.id and\nout_city=g.outcity) group by g.outresultcode,m.resultcode order by count(*)\ndesc;\n QUERY\nPLAN\n\n----------------------------------------------------------------------------\n--------------------------------------------------------\n--------------------\n Sort (cost=4218017.33..4218017.34 rows=1 width=22) (actual\ntime=23095.890..23096.632 rows=515 loops=1)\n Sort Key: (count(*))\n Sort Method: quicksort Memory: 53kB\n -> HashAggregate (cost=4218017.31..4218017.32 rows=1 width=22) (actual\ntime=23093.901..23094.839 rows=515 loops=1)\n -> Nested Loop (cost=0.00..4218017.30 rows=1 width=22) (actual\ntime=102.356..22930.141 rows=52179 loops=1)\n Join Filter: ((g.outresultcode <> m.resultcode) AND\n((g.outaddress)::text = (m.out_address)::text))\n -> Seq Scan on geocoderoutput g (cost=0.00..4202690.15\nrows=2514 width=41) (actual time=98.045..15040.142 rows=468172 loops=1)\n Filter: ((outcity)::text = ((subplan))::text)\n SubPlan\n -> Index Scan using mmoutput_pkey on mmoutput\n(cost=0.00..8.33 rows=1 width=10) (actual time=0.018..0.021 
rows=1\nloops=502884)\n Index Cond: (id = $0)\n Filter: ((out_city)::text = ($1)::text)\n -> Index Scan using mmoutput_pkey on mmoutput m\n(cost=0.00..6.08 rows=1 width=32) (actual time=0.006..0.008 rows=1\nloops=468172)\n Index Cond: (m.id = g.id)\n Total runtime: 23097.548 ms\n(15 rows)\n\n\n", "msg_date": "Wed, 1 Oct 2008 06:11:33 -0600", "msg_from": "\"David logan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Mystefied at poor performance of a standard query" }, { "msg_contents": "\"David logan\" <[email protected]> writes:\n> (The question is why this simple select takes me 20 minutes to run...)\n\nWhat have you got work_mem set to? The hash join is not going to be\nreal fast if it has to split the join into multiple batches, so you\nwant work_mem large enough to hold the whole inner relation. That would\nbe at least 20MB in this example, probably quite a bit more after\nallowing for per-row overhead in the table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Oct 2008 09:30:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mystefied at poor performance of a standard query " }, { "msg_contents": "Looks like that worked. I set work_mem to 256MB, and it looks like my\nstandard sql came back in just a couple of seconds.\n\nThanks!\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: Wednesday, October 01, 2008 07:30\nTo: David logan\nCc: [email protected]\nSubject: Re: [PERFORM] Mystefied at poor performance of a standard query \n\n\"David logan\" <[email protected]> writes:\n> (The question is why this simple select takes me 20 minutes to run...)\n\nWhat have you got work_mem set to? The hash join is not going to be\nreal fast if it has to split the join into multiple batches, so you\nwant work_mem large enough to hold the whole inner relation. That would\nbe at least 20MB in this example, probably quite a bit more after\nallowing for per-row overhead in the table.\n\n\t\t\tregards, tom lane\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 1 Oct 2008 10:38:21 -0600", "msg_from": "\"David logan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mystefied at poor performance of a standard query " } ]
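The fix that worked in this thread is small enough to show in full. A sketch using the 256MB value the poster settled on (any setting large enough to hold the hashed inner relation in a single batch should behave similarly), applied per session so postgresql.conf is left alone:

SET work_mem = '256MB';

EXPLAIN ANALYZE
SELECT g.outresultcode, m.resultcode, count(*)
  FROM geocoderoutput g, mmoutput m
 WHERE g.id = m.id
   AND g.outresultcode <> m.resultcode
   AND g.outaddress = m.out_address
   AND g.outcity = m.out_city
 GROUP BY g.outresultcode, m.resultcode
 ORDER BY count(*) DESC;

RESET work_mem;  -- back to the server default for the rest of the session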
[ { "msg_contents": "I have two fairly simple tables as described below. The relationship \nbetween them is through assignment_id. The problem is when I try to \njoin these two tables the planner does a sequential scan on \nfa_assignment_detail and the query takes forever to resolve. I've run \nthe usual vacuum and analyze commands with no changes. I'm not sure how \nlong the query actually takes to resolve as its been running for over 30 \nminutes now (FYI this is on a 8 core IBM Power5 550 with 8 GB of RAM) \nrunning RedHat Enterprise 9 and postgresql 8.3.3. Any thoughts?\n\n\\d fa_assignment\n Table \"public.fa_assignment\"\n Column | Type | Modifiers\n-----------------+-----------------------------+------------------------\n scenario_id | integer | not null\n prospect_id | integer | not null\n assignment_id | integer | not null\n valid | boolean | not null default false\n modified | boolean | not null default true\n modify_ts | timestamp without time zone |\n modify_username | character varying(32) |\nIndexes:\n \"pk_fa_assignment\" PRIMARY KEY, btree (scenario_id, prospect_id)\n \"fa_assignment_idx1\" btree (assignment_id) CLUSTER\n \"fa_assignment_idx2\" btree (scenario_id, assignment_id)\n \"fa_assignment_idx3\" btree (prospect_id)\nForeign-key constraints:\n \"fk_fa_prospect\" FOREIGN KEY (prospect_id) REFERENCES \nfa_prospect(prospect_id) DEFERRABLE\n \"fk_fa_scenario\" FOREIGN KEY (scenario_id) REFERENCES \nfa_scenario(scenario_id) DEFERRABLE\n\n\n\n\\d fa_assignment_detail\n Table \"public.fa_assignment_detail\"\n Column | Type | Modifiers\n-----------------+-----------------------------+------------------------\n assignment_id | integer | not null\n type | character varying(8) | not null\n resource_id | integer |\n create_ts | timestamp without time zone | not null\n create_username | character varying(32) | not null\n modify_ts | timestamp without time zone |\n modify_username | character varying(32) |\n locked | boolean | not null default false\n locked_ts | timestamp without time zone |\n locked_username | character varying(32) |\nIndexes:\n \"pk_fa_assignment_detail\" PRIMARY KEY, btree (assignment_id, type)\n \"fa_assignment_detail_idx1\" btree (resource_id)\n \"fa_assignment_detail_idx2\" btree (assignment_id)\nForeign-key constraints:\n \"fk_fa_resource1\" FOREIGN KEY (resource_id) REFERENCES \nfa_resource(resource_id) DEFERRABLE\n\n\n\nfa_assignment has 44184945 records\nfa_assignment_detail has 82196027 records\n\n\n\nexplain select * from fa_assignment fa JOIN fa_assignment_detail fad ON \n(fad.assignment_id = fa.assignment_id) where fa.scenario_id = 0;\n\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------\n Hash Join (cost=581289.72..4940729.76 rows=9283104 width=91)\n Hash Cond: (fad.assignment_id = fa.assignment_id)\n -> Seq Scan on fa_assignment_detail fad (cost=0.00..1748663.60 \nrows=82151360 width=61)\n -> Hash (cost=484697.74..484697.74 rows=4995439 width=30)\n -> Bitmap Heap Scan on fa_assignment fa \n(cost=93483.75..484697.74 rows=4995439 width=30)\n Recheck Cond: (scenario_id = 0)\n -> Bitmap Index Scan on fa_assignment_idx2 \n(cost=0.00..92234.89 rows=4995439 width=0)\n Index Cond: (scenario_id = 0)\n(8 rows)\n\n", "msg_date": "Wed, 01 Oct 2008 16:34:49 -0400", "msg_from": "\"H. William Connors II\" <[email protected]>", "msg_from_op": true, "msg_subject": "bizarre query performance question" }, { "msg_contents": "Lennin Caro wrote:\n>\n> --- On Wed, 10/1/08, H. 
William Connors II <[email protected]> wrote:\n>\n> \n>> From: H. William Connors II <[email protected]>\n>> Subject: [PERFORM] bizarre query performance question\n>> To: [email protected]\n>> Date: Wednesday, October 1, 2008, 8:34 PM\n>> I have two fairly simple tables as described below. The\n>> relationship \n>> between them is through assignment_id. The problem is when\n>> I try to \n>> join these two tables the planner does a sequential scan on\n>>\n>> fa_assignment_detail and the query takes forever to\n>> resolve. I've run \n>> the usual vacuum and analyze commands with no changes. \n>> I'm not sure how \n>> long the query actually takes to resolve as its been\n>> running for over 30 \n>> minutes now (FYI this is on a 8 core IBM Power5 550 with 8\n>> GB of RAM) \n>> running RedHat Enterprise 9 and postgresql 8.3.3. Any\n>> thoughts?\n>>\n>> \\d fa_assignment\n>> Table\n>> \"public.fa_assignment\"\n>> Column | Type | \n>> Modifiers\n>> -----------------+-----------------------------+------------------------\n>> scenario_id | integer | not null\n>> prospect_id | integer | not null\n>> assignment_id | integer | not null\n>> valid | boolean | not null\n>> default false\n>> modified | boolean | not null\n>> default true\n>> modify_ts | timestamp without time zone |\n>> modify_username | character varying(32) |\n>> Indexes:\n>> \"pk_fa_assignment\" PRIMARY KEY, btree\n>> (scenario_id, prospect_id)\n>> \"fa_assignment_idx1\" btree (assignment_id)\n>> CLUSTER\n>> \"fa_assignment_idx2\" btree (scenario_id,\n>> assignment_id)\n>> \"fa_assignment_idx3\" btree (prospect_id)\n>> Foreign-key constraints:\n>> \"fk_fa_prospect\" FOREIGN KEY (prospect_id)\n>> REFERENCES \n>> fa_prospect(prospect_id) DEFERRABLE\n>> \"fk_fa_scenario\" FOREIGN KEY (scenario_id)\n>> REFERENCES \n>> fa_scenario(scenario_id) DEFERRABLE\n>>\n>>\n>>\n>> \\d fa_assignment_detail\n>> Table\n>> \"public.fa_assignment_detail\"\n>> Column | Type | \n>> Modifiers\n>> -----------------+-----------------------------+------------------------\n>> assignment_id | integer | not null\n>> type | character varying(8) | not null\n>> resource_id | integer |\n>> create_ts | timestamp without time zone | not null\n>> create_username | character varying(32) | not null\n>> modify_ts | timestamp without time zone |\n>> modify_username | character varying(32) |\n>> locked | boolean | not null\n>> default false\n>> locked_ts | timestamp without time zone |\n>> locked_username | character varying(32) |\n>> Indexes:\n>> \"pk_fa_assignment_detail\" PRIMARY KEY, btree\n>> (assignment_id, type)\n>> \"fa_assignment_detail_idx1\" btree\n>> (resource_id)\n>> \"fa_assignment_detail_idx2\" btree\n>> (assignment_id)\n>> Foreign-key constraints:\n>> \"fk_fa_resource1\" FOREIGN KEY (resource_id)\n>> REFERENCES \n>> fa_resource(resource_id) DEFERRABLE\n>>\n>>\n>>\n>> fa_assignment has 44184945 records\n>> fa_assignment_detail has 82196027 records\n>>\n>>\n>>\n>> explain select * from fa_assignment fa JOIN\n>> fa_assignment_detail fad ON \n>> (fad.assignment_id = fa.assignment_id) where fa.scenario_id\n>> = 0;\n>>\n>> QUERY \n>> PLAN \n>> -------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=581289.72..4940729.76 rows=9283104\n>> width=91)\n>> Hash Cond: (fad.assignment_id = fa.assignment_id)\n>> -> Seq Scan on fa_assignment_detail fad \n>> (cost=0.00..1748663.60 \n>> rows=82151360 width=61)\n>> -> Hash (cost=484697.74..484697.74 rows=4995439\n>> width=30)\n>> -> Bitmap Heap Scan on 
fa_assignment fa \n>> (cost=93483.75..484697.74 rows=4995439 width=30)\n>> Recheck Cond: (scenario_id = 0)\n>> -> Bitmap Index Scan on\n>> fa_assignment_idx2 \n>> (cost=0.00..92234.89 rows=4995439 width=0)\n>> Index Cond: (scenario_id = 0)\n>> (8 rows)\n>>\n>>\n>> \n>\n> The Fk for the table fa_assignment_detail to fa_assignment is nor relationate whit the column assignment_id\n>\n>\n> \n>\n>\n> \nThat is because assignment_id is because there can be many records in \nfa_assignment that use the same assignment_id and thus it isn't unique \nthere. I can join other tables not related through a foreign key using \nan index so I'm unclear why this situation is different.\n\n", "msg_date": "Wed, 01 Oct 2008 17:17:19 -0400", "msg_from": "\"H. William Connors II\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bizarre query performance question" }, { "msg_contents": "\n\n\n--- On Wed, 10/1/08, H. William Connors II <[email protected]> wrote:\n\n> From: H. William Connors II <[email protected]>\n> Subject: [PERFORM] bizarre query performance question\n> To: [email protected]\n> Date: Wednesday, October 1, 2008, 8:34 PM\n> I have two fairly simple tables as described below. The\n> relationship \n> between them is through assignment_id. The problem is when\n> I try to \n> join these two tables the planner does a sequential scan on\n> \n> fa_assignment_detail and the query takes forever to\n> resolve. I've run \n> the usual vacuum and analyze commands with no changes. \n> I'm not sure how \n> long the query actually takes to resolve as its been\n> running for over 30 \n> minutes now (FYI this is on a 8 core IBM Power5 550 with 8\n> GB of RAM) \n> running RedHat Enterprise 9 and postgresql 8.3.3. Any\n> thoughts?\n> \n> \\d fa_assignment\n> Table\n> \"public.fa_assignment\"\n> Column | Type | \n> Modifiers\n> -----------------+-----------------------------+------------------------\n> scenario_id | integer | not null\n> prospect_id | integer | not null\n> assignment_id | integer | not null\n> valid | boolean | not null\n> default false\n> modified | boolean | not null\n> default true\n> modify_ts | timestamp without time zone |\n> modify_username | character varying(32) |\n> Indexes:\n> \"pk_fa_assignment\" PRIMARY KEY, btree\n> (scenario_id, prospect_id)\n> \"fa_assignment_idx1\" btree (assignment_id)\n> CLUSTER\n> \"fa_assignment_idx2\" btree (scenario_id,\n> assignment_id)\n> \"fa_assignment_idx3\" btree (prospect_id)\n> Foreign-key constraints:\n> \"fk_fa_prospect\" FOREIGN KEY (prospect_id)\n> REFERENCES \n> fa_prospect(prospect_id) DEFERRABLE\n> \"fk_fa_scenario\" FOREIGN KEY (scenario_id)\n> REFERENCES \n> fa_scenario(scenario_id) DEFERRABLE\n> \n> \n> \n> \\d fa_assignment_detail\n> Table\n> \"public.fa_assignment_detail\"\n> Column | Type | \n> Modifiers\n> -----------------+-----------------------------+------------------------\n> assignment_id | integer | not null\n> type | character varying(8) | not null\n> resource_id | integer |\n> create_ts | timestamp without time zone | not null\n> create_username | character varying(32) | not null\n> modify_ts | timestamp without time zone |\n> modify_username | character varying(32) |\n> locked | boolean | not null\n> default false\n> locked_ts | timestamp without time zone |\n> locked_username | character varying(32) |\n> Indexes:\n> \"pk_fa_assignment_detail\" PRIMARY KEY, btree\n> (assignment_id, type)\n> \"fa_assignment_detail_idx1\" btree\n> (resource_id)\n> \"fa_assignment_detail_idx2\" btree\n> (assignment_id)\n> Foreign-key 
constraints:\n> \"fk_fa_resource1\" FOREIGN KEY (resource_id)\n> REFERENCES \n> fa_resource(resource_id) DEFERRABLE\n> \n> \n> \n> fa_assignment has 44184945 records\n> fa_assignment_detail has 82196027 records\n> \n> \n> \n> explain select * from fa_assignment fa JOIN\n> fa_assignment_detail fad ON \n> (fad.assignment_id = fa.assignment_id) where fa.scenario_id\n> = 0;\n> \n> QUERY \n> PLAN \n> -------------------------------------------------------------------------------------------------------\n> Hash Join (cost=581289.72..4940729.76 rows=9283104\n> width=91)\n> Hash Cond: (fad.assignment_id = fa.assignment_id)\n> -> Seq Scan on fa_assignment_detail fad \n> (cost=0.00..1748663.60 \n> rows=82151360 width=61)\n> -> Hash (cost=484697.74..484697.74 rows=4995439\n> width=30)\n> -> Bitmap Heap Scan on fa_assignment fa \n> (cost=93483.75..484697.74 rows=4995439 width=30)\n> Recheck Cond: (scenario_id = 0)\n> -> Bitmap Index Scan on\n> fa_assignment_idx2 \n> (cost=0.00..92234.89 rows=4995439 width=0)\n> Index Cond: (scenario_id = 0)\n> (8 rows)\n> \n> \n\nThe Fk for the table fa_assignment_detail to fa_assignment is nor relationate whit the column assignment_id\n\n\n \n\n", "msg_date": "Wed, 1 Oct 2008 14:18:08 -0700 (PDT)", "msg_from": "Lennin Caro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bizarre query performance question" }, { "msg_contents": "H. William Connors II wrote:\n> fa_assignment has 44184945 records\n> fa_assignment_detail has 82196027 records\n> \n> explain select * from fa_assignment fa JOIN fa_assignment_detail fad ON\n> (fad.assignment_id = fa.assignment_id) where fa.scenario_id = 0;\n> \n> QUERY\n> PLAN \n> -------------------------------------------------------------------------------------------------------\n> \n> Hash Join (cost=581289.72..4940729.76 rows=9283104 width=91)\n\nAre you really expecting 9 million rows in the result? If so, this is\nprobably a reasonable plan.\n\n> Hash Cond: (fad.assignment_id = fa.assignment_id)\n> -> Seq Scan on fa_assignment_detail fad (cost=0.00..1748663.60\n> rows=82151360 width=61)\n> -> Hash (cost=484697.74..484697.74 rows=4995439 width=30)\n> -> Bitmap Heap Scan on fa_assignment fa \n> (cost=93483.75..484697.74 rows=4995439 width=30)\n> Recheck Cond: (scenario_id = 0)\n> -> Bitmap Index Scan on fa_assignment_idx2 \n> (cost=0.00..92234.89 rows=4995439 width=0)\n> Index Cond: (scenario_id = 0)\n\nIt's restricting on scenario_id, building a bitmap to identify which\ndisk-blocks will contain one or more matching rows and then scanning\nthose. If those 5 million scenario_id=0 rows are spread over 10% of the\nblocks then that's a good idea.\n\nIf it was expecting only a handful of rows with scenario_id=0 then I'd\nexpect it to switch to a \"standard\" index scan.\n\nIf your work_mem is small try something like:\n set work_mem = '50MB';\nbefore running the query - maybe even larger.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 02 Oct 2008 08:26:46 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bizarre query performance question" } ]
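Since the join in this thread really does return around nine million rows, a sequential scan feeding a hash join is a reasonable plan; the cheapest experiment is the one Richard Huxton suggests, namely giving the hash more memory and comparing plans. A short sketch, with the 50MB figure taken from his reply:

SET work_mem = '50MB';   -- try larger values too if the hash still splits into batches

EXPLAIN ANALYZE
SELECT *
  FROM fa_assignment fa
  JOIN fa_assignment_detail fad ON fad.assignment_id = fa.assignment_id
 WHERE fa.scenario_id = 0;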
[ { "msg_contents": "Hello.\n\nI have a database with company table that have a bunch of related\n(delete=cascade) tables.\nAlso it has 1<->M relation to company_descr table.\nOnce we've found that ~half of our companies do not have any description and\nwe would like to remove them.\nFirst this I've tried was\ndelete from company where id not in (select company_id from company_descr);\nI've tried to analyze command, but unlike to other RDBM I've used it did not\ninclude cascade deletes/checks into query plan. That is first problem.\nIt was SLOW. To make it faster I've done next thing:\n\ncreate temporary table comprm(id) as select id from company;\ndelete from comprm where id in (select company_id from company_descr);\ndelete from company where id in (select id from comprm);\n\nThat was much better. So the question is why postgresql can't do such a\nthing.\nBut it was better only until \"removing\" dataset was small (~5% of all\ntable).\nAs soon as I've tried to remove 50% I've got speed problems. I've ensured I\nhave all indexes for both ends of foreign key.\nI've tried to remove all cascaded entries by myself, e.g.:\n\ncreate temporary table comprm(id) as select id from company;\ndelete from comprm where id in (select company_id from company_descr);\ndelete from company_alias where company_id in (select id from comprm);\n...\ndelete from company where id in (select id from comprm);\n\nIt did not help until I drop all constraints before and recreate all\nconstraints after.\nNow I have it work for 15minutes, while previously it could not do in a day.\n\nIs it OK? I'd say, some (if not all) of the optimizations could be done by\npostgresql optimizer.\n\nHello.I have a database with company table that have a bunch of related (delete=cascade) tables.Also it has 1<->M relation to company_descr table.Once we've found that ~half of our companies do not have any description and we would like to remove them.\nFirst this I've tried wasdelete from company where id not in (select company_id from company_descr);I've tried to analyze command, but unlike to other RDBM I've used it did not include cascade deletes/checks into query plan. That is first problem.\nIt was SLOW. To make it faster I've done next thing:create temporary table comprm(id) as select id from company;delete from comprm where id in (select company_id from company_descr);delete from company where id in (select id from comprm);\nThat was much better. So the question is why postgresql can't do such a thing.But it was better only until \"removing\" dataset was small (~5% of all table). As soon as I've tried to remove 50% I've got speed problems. I've ensured I have all indexes for both ends of foreign key.\nI've tried to remove all cascaded entries by myself, e.g.:create temporary table comprm(id) as select id from company;\ndelete from comprm where id in (select company_id from company_descr);delete from company_alias where company_id in (select id from comprm);...delete from company where id in (select id from comprm);\nIt did not help until I drop all constraints before and recreate all constraints after.Now I have it work for 15minutes, while previously it could not do in a day.Is it OK? 
I'd say, some (if not all) of the optimizations could be done by postgresql optimizer.", "msg_date": "Thu, 2 Oct 2008 12:42:15 +0300", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Delete performance again" }, { "msg_contents": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> delete from company where id not in (select company_id from company_descr);\n> I've tried to analyze command, but unlike to other RDBM I've used it did not\n> include cascade deletes/checks into query plan. That is first problem.\n> It was SLOW.\n\nUsually the reason for that is having forgotten to make an index on the\nreferencing column(s) ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Oct 2008 08:14:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete performance again " }, { "msg_contents": "2008/10/2 Tom Lane <[email protected]>\n\n> \"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> > delete from company where id not in (select company_id from\n> company_descr);\n> > I've tried to analyze command, but unlike to other RDBM I've used it did\n> not\n> > include cascade deletes/checks into query plan. That is first problem.\n> > It was SLOW.\n>\n> Usually the reason for that is having forgotten to make an index on the\n> referencing column(s) ?\n>\n\nNot at all. As you can see below in original message, simply \"extending\" the\nquery to what should have been done by optimizer helps. I'd say optimizer\nalways uses fixed plan not taking into account that this is massive update\nand id doing index lookup of children records for each parent record, while\nit would be much more effective to perform removal of all children records\nin single table scan.\n\nIt's like trigger \"for each record\" instead of \"for each statement\".\n\n2008/10/2 Tom Lane <[email protected]>\n\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> delete from company where id not in (select company_id from company_descr);\n> I've tried to analyze command, but unlike to other RDBM I've used it did not\n> include cascade deletes/checks into query plan. That is first problem.\n> It was SLOW.\n\nUsually the reason for that is having forgotten to make an index on the\nreferencing column(s) ?\nNot at all. As you can see below in original message, simply \"extending\" the query to what should have been done by optimizer helps. I'd say optimizer always uses fixed plan not taking into account that this is massive update and id doing index lookup of children records for each parent record, while it would be much more effective to perform removal of all children records in single table scan.\nIt's like trigger \"for each record\" instead of \"for each statement\".", "msg_date": "Thu, 2 Oct 2008 18:21:59 +0300", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Delete performance again" }, { "msg_contents": "Hi,\n \nMaybe you can try this syntax. 
I'm not sure, but it eventually perform better:\n \n \ndelete from company_alias USING comprm\nwhere company_alias.company_id =comprm.id\n\n\nCheers,\n\nMarc\n", "msg_date": "Fri, 3 Oct 2008 22:55:00 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete performance again" }, { "msg_contents": "OK, I did try you proposal and correlated subselect.\nI have a database ~900000 companies.\nFirst try was to remove randomly selected 1000 companies\nUncorrelated subselect: 65899ms\nCorrelated subselect: 97467ms\nusing: 9605ms\nmy way: 104979ms. (without constraints recreate)\nMy is the worst because it is oriented on massive delete.\nSo I thought USING would perform better, so I did try 10000 companies\nmy way: 190527ms. (without constraints recreate)\nusing: 694144ms\nI was a little shocked, but I did check plans and found out that it did\nswitch from Nested Loop to Hash Join.\nI did disable Hash Join, it not show Merge Join. This was also disabled....\nand I've got 747253ms.\nThen I've tried combinations: Without hash join it was the best result of\n402629ms, without merge join it was 1096116ms.\n\nMy conclusion: Until optimizer would take into account additional actions\nneeded (like constraints check/cascade deletes/triggers), it can not make\ngood plan.\n\nOK, I did try you proposal and correlated subselect.I have a database ~900000 companies.First try was to remove randomly selected 1000 companiesUncorrelated subselect: 65899msCorrelated subselect: 97467ms\nusing: 9605msmy way: 104979ms. (without constraints recreate)My is the worst because it is oriented on massive delete. So I thought USING would perform better, so I did try 10000 companiesmy way: 190527ms. (without constraints recreate)\nusing: 694144msI was a little shocked, but I did check plans and found out that it did switch from Nested Loop to Hash Join.I did disable Hash Join, it not show Merge Join. This was also disabled.... and I've got 747253ms. \nThen I've tried combinations: Without hash join it was the best result of 402629ms, without merge join it was 1096116ms.My conclusion: Until optimizer would take into account additional actions needed (like constraints check/cascade deletes/triggers), it can not make good plan.", "msg_date": "Thu, 9 Oct 2008 15:54:38 +0300", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Delete performance again" }, { "msg_contents": "BTW: Have just tried \"clean\" (without any foreign keys constraints)\npeformance of\n\"delete from tbl where field not in (select)\"\nvs\n\"create temporary table tmp(id) as select distinct field from tbl; delete\nfrom tmp where id in (select); delete from tbl where field in (select id\nfrom tmp)\".\nboth tbl and select are huge.\ntbl cardinality is ~5 million, select is ~1 milliion. Number of records to\ndelete is small.\nselect is simply \"select id from table2\".\n\nFirst (simple) one could not do in a night, second did in few seconds.\n\nBTW: Have just tried \"clean\" (without any foreign keys constraints) peformance of \"delete from tbl where field not in (select)\" vs \"create temporary table tmp(id)  as select distinct field from tbl; delete from tmp where id in (select); delete from tbl where field in (select id from tmp)\".\nboth tbl and select are huge. tbl cardinality is ~5 million, select is ~1 milliion. 
Number of records to delete is small.select is simply \"select id from table2\".First (simple) one could not do in a night, second did in few seconds.", "msg_date": "Fri, 10 Oct 2008 11:24:08 +0300", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Delete performance again" } ]
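One point left implicit in this thread: plain EXPLAIN never shows the cascaded foreign-key work, but EXPLAIN ANALYZE reports per-trigger timings after the plan, which is usually where a slow cascading DELETE is spending its time. A hedged sketch of that diagnostic, wrapped in a transaction that is rolled back so no rows are actually removed; the constraint name in the comment is only a placeholder:

BEGIN;
EXPLAIN ANALYZE
DELETE FROM company
 WHERE id NOT IN (SELECT company_id FROM company_descr);
-- the output ends with lines such as
--   Trigger for constraint <fk_name>: time=... calls=...
-- one per referencing table, which is where the cascades and FK checks show up
ROLLBACK;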
[ { "msg_contents": "I have a problem where by an insert on a \"large\" table will sometimes\ntake longer than usual.\n\nUsually the inserts are quick then from time to time they will take a\nlong time sometimes as much as 10seconds or longer. (But usually under\n500ms which is when I start logging them)\n\nThe queries are slow drip fed so bulk loading really is not an option,\nIts logging data. Used in analysis and for historical purposes mostly.\n\nI think the problem might have something to do with checkpoints, I'm\nrelatively sure its not when the table expands as I've run a vacuum\nverbose straight away after a longer insert and not found loads of\nspace in the fsm.\n\nI'm using 8.3.1 (I thought I'd upgraded to 8.3.3 but it does not look\nlike the upgrade worked) I'm more than happy to upgrade just have to\nfind the down time (even a few seconds can be difficult)\n\nAny help would be appreciated.\n\nRegards\n\nPeter Childs\n", "msg_date": "Fri, 3 Oct 2008 08:25:42 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Inserts on large tables" }, { "msg_contents": "Peter Childs wrote:\n> I have a problem where by an insert on a \"large\" table will sometimes\n> take longer than usual.\n\n> I think the problem might have something to do with checkpoints,\n\nThen show us your checkpointing-related parameters. Or try to set them \nto a lot higher values so checkpoints happen more rarely and see if that \nmakes a difference.\n\n", "msg_date": "Fri, 03 Oct 2008 11:22:05 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Inserts on large tables" }, { "msg_contents": "2008/10/3 Peter Eisentraut <[email protected]>:\n> Peter Childs wrote:\n>>\n>> I have a problem where by an insert on a \"large\" table will sometimes\n>> take longer than usual.\n>\n>> I think the problem might have something to do with checkpoints,\n>\n> Then show us your checkpointing-related parameters. Or try to set them to a\n> lot higher values so checkpoints happen more rarely and see if that makes a\n> difference.\n>\n>\n\nMore often or less often?\n\nI've currently got them set to\n\ncheckpoint_segments = 3\ncheckpoint_timeout = 180s\ncheckpoint_completion_target = 0.5\n\nafter reading that doing more smaller checkpoints might make each\ncheckpoint work quicker and hence less of a performance hit when they\nactually happen.\n\nRegards\n\nPeter\n", "msg_date": "Fri, 3 Oct 2008 09:47:38 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Inserts on large tables" }, { "msg_contents": "Peter,\n\n(please take this with a pinch of salt as I am no expert)\n\nHere is  a possible scenario:\nEach of your checkpoints takes 90 seconds or more (you told it  so with the checkpoint_completion_target). \nIf your insert fills 3 checkpoint segments (48 megs ) in less than 90 seconds then a new checkpoint request is issued. And maybe a third one, and so on. I imagine that this can flood the disk cache with write requests at some point although I can't explain how.\nHave a look at the log, see the interval between the checkpoint requests and try to make this (a lot) larger than the checkpoint duration.\nStart by increasing your checkpoint_segments (to, say, 16). 
If this doesn't work, maybe the timeout is too short, or the 90 seconds target to generous.\n\nRegards,\n\nIulian\n\n--- On Fri, 10/3/08, Peter Childs <[email protected]> wrote:\nFrom: Peter Childs <[email protected]>\nSubject: Re: [PERFORM] Slow Inserts on large tables\nTo: \nCc: \"Postgresql Performance\" <[email protected]>\nDate: Friday, October 3, 2008, 9:47 AM\n\n2008/10/3 Peter Eisentraut <[email protected]>:\n> Peter Childs wrote:\n>>\n>> I have a problem where by an insert on a \"large\" table will\nsometimes\n>> take longer than usual.\n>\n>> I think the problem might have something to do with checkpoints,\n>\n> Then show us your checkpointing-related parameters. Or try to set them to\na\n> lot higher values so checkpoints happen more rarely and see if that makes\na\n> difference.\n>\n>\n\nMore often or less often?\n\nI've currently got them set to\n\ncheckpoint_segments = 3\ncheckpoint_timeout = 180s\ncheckpoint_completion_target = 0.5\n\nafter reading that doing more smaller checkpoints might make each\ncheckpoint work quicker and hence less of a performance hit when they\nactually happen.\n\nRegards\n\nPeter\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nPeter,(please take this with a pinch of salt as I am no expert)Here is  a possible scenario:Each of your checkpoints takes 90 seconds or more (you told it  so with the checkpoint_completion_target). If your insert fills 3 checkpoint segments (48 megs ) in less than 90 seconds then a new checkpoint request is issued. And maybe a third one, and so on. I imagine that this can flood the disk cache with write requests at some point although I can't explain how.Have a look at the log, see the interval between the checkpoint requests and try to make this (a lot) larger than the checkpoint duration.Start by increasing your checkpoint_segments (to, say, 16). If this doesn't work, maybe the timeout is too short, or the 90 seconds target to generous.Regards,Iulian--- On Fri, 10/3/08, Peter Childs\n <[email protected]> wrote:From: Peter Childs <[email protected]>Subject: Re: [PERFORM] Slow Inserts on large tablesTo: Cc: \"Postgresql Performance\" <[email protected]>Date: Friday, October 3, 2008, 9:47 AM2008/10/3 Peter Eisentraut <[email protected]>:> Peter Childs wrote:>>>> I have a problem where by an insert on a \"large\" table willsometimes>> take longer than usual.>>> I think the problem might have something to do with checkpoints,>> Then show us your checkpointing-related parameters. 
Or try to set them toa> lot higher values so checkpoints happen more rarely and see if that makesa> difference.>>More often or less often?I've currently got\n them set tocheckpoint_segments = 3checkpoint_timeout = 180scheckpoint_completion_target = 0.5after reading that doing more smaller checkpoints might make eachcheckpoint work quicker and hence less of a performance hit when theyactually happen.RegardsPeter-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 3 Oct 2008 03:09:37 -0700 (PDT)", "msg_from": "Iulian Dragan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Inserts on large tables" }, { "msg_contents": "\"Peter Childs\" <[email protected]> writes:\n> 2008/10/3 Peter Eisentraut <[email protected]>:\n>> Then show us your checkpointing-related parameters.\n\n> I've currently got them set to\n\n> checkpoint_segments = 3\n> checkpoint_timeout = 180s\n> checkpoint_completion_target = 0.5\n\n> after reading that doing more smaller checkpoints might make each\n> checkpoint work quicker and hence less of a performance hit when they\n> actually happen.\n\nThat concept is actually pretty obsolete in 8.3: with spread-out\ncheckpoints it basically shouldn't hurt to increase the checkpoint\ninterval, and could actually help because the bgwriter doesn't have\nsuch a tight deadline to finish the checkpoint. In any case you\n*definitely* need to increase checkpoint_segments --- the value\nyou've got could be forcing a checkpoint every few seconds not\nevery few minutes.\n\nWhat I would suggest is turning on log_checkpoints and then seeing\nif there's any correlation between your slow insert commands and the\ncheckpoints. I'm suspicious that the problem is somewhere else.\n(For instance, have you got anything that might take a lock on the\ntable? Maybe enabling log_lock_waits would be a good idea too.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Oct 2008 08:19:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Inserts on large tables " } ]
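Pulling the advice in this thread together into a postgresql.conf sketch for 8.3: the segment count of 16 is Iulian's illustrative figure rather than a tuned value, the longer timeout follows Tom Lane's point that spread-out checkpoints make a larger interval safe, the two logging switches are his suggested diagnostics, and the statement-duration threshold simply mirrors the 500ms the poster already logs at.

checkpoint_segments          = 16      # was 3; each segment is roughly 16MB of WAL
checkpoint_timeout           = 5min    # was 180s
checkpoint_completion_target = 0.5
log_checkpoints              = on      # correlate slow inserts with checkpoints
log_lock_waits               = on      # ... and with lock waits on the table
log_min_duration_statement   = 500     # ms; matches the poster's existing logging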
[ { "msg_contents": "Hi,\n\nwe have a log table on one server with 1.9 million records.\n\nOne column \"event\" (type text) in that table is a string that (currently) \ntakes a small number of distinct values (~43) (hmm that could have been \nnormalised better).\n\nWe noted on querying for events of a specific type, that the queries were \nslower than expected. It simply wasn't using the index (btree, default \nsettings) on this column on this server (the test server, with less records, \nwas fine).\n\nUsing \"ALTER TABLE SET STATISTICS\" to increase the number of buckets to 50 \nresolved the issue, we went pretty much straight there on discovering there \nare no \"HINTS\".\n\nHowever we aren't quite sure why this case was pathological, and my brain \ndoesn't grok the documentation quite.\n\nI assume that the histogram_bounds for strings are alphabetical in order, so \nthat \"DEMOSTART\" falls between \"DELETE\" and \"IDEMAIL\". Even on a worst case \nof including both these common values, the planner ought to have assumed that \nless than <10% of records were likely covered by the value selected, so it \nseems unlikely to me that not using the index would be a good idea.\n\nWhat am I missing? (and yes there is a plan to upgrade!).\n\n\n=> SELECT COUNT(*) FROM log WHERE event='DEMOSTART';\n(...lots of time passes...)\n count\n-------\n 1432\n(1 row)\n\n\n=> SELECT COUNT(*), event FROM log GROUP BY event ORDER BY count;\n\n count | event\n--------+-----------\n 6 | DNRFAIL\n 14 | ADMDNR\n 14 | UPGRADE\n 18 | FOCRENEW\n 21 | AUTOCN\n 25 | ADMCC\n 27 | TEMPIN\n 31 | DNRCANCEL\n 43 | EXPIRED\n 128 | DIRECTBUY\n 130 | CANCEL\n 130 | CANCELQ\n 154 | FOCBUY\n 173 | EXPCCWARN\n 179 | OFFER\n 209 | DNROK\n 214 | TEMPRE\n 356 | CCWARN\n 429 | ADMLOGIN\n 719 | SUBSCRIBE\n 787 | CCSUCCESS\n 988 | CCFAILURE\n 1217 | TEMPNEW\n 1298 | PAYPAL\n 1431 | DEMOSTART\n 1776 | CCREQUEST\n 2474 | ACCTUPD\n 15169 | SYSMAINT\n 42251 | IDEMAIL\n 46964 | DELETE\n 50764 | RELOGIN\n 57022 | NEWUSR\n 64907 | PUBREC0\n 65449 | UNPUBLISH\n 92843 | LOGOUT\n 99018 | KILLSESS\n 128900 | UPLOAD\n 134994 | LOGIN\n 137608 | NEWPAGE\n 447556 | PUBREC1\n 489572 | PUBLISH\n\n\n=> EXPLAIN SELECT * FROM log WHERE event='DEMOSTART';\n QUERY PLAN\n------------------------------------------------------------\n Seq Scan on log (cost=0.00..54317.14 rows=20436 width=93)\n Filter: (event = 'DEMOSTART'::text)\n(2 rows)\n\n\n=> ALTER TABLE log ALTER COLUMN events SET STATISTICS 50; ANALYSE\nLOG(event);\nALTER TABLE\nANALYZE\n\n\n=> EXPLAIN SELECT COUNT(*) FROM log WHERE event='DEMOSTART';\n QUERY PLAN\n----------------------------------------------------------------------------\n-------\n Aggregate (cost=5101.43..5101.43 rows=1 width=0)\n -> Index Scan using log_event on log (cost=0.00..5098.15 rows=1310\nwidth=0)\n Index Cond: (event = 'DEMOSTART'::text)\n(3 rows)\n\n\n=> SELECT COUNT(*) FROM log WHERE event='DEMOSTART';\n(...almost no time passes...)\n count\n-------\n 1432\n(1 row)\n\n\nBEFORE\npajax=> select * from pg_stats where tablename = 'log' and attname='event';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | \nmost_common_vals | \nmost_common_freqs | \nhistogram_bounds | correlation\n------------+-----------+---------+-----------+-----------+------------+--------------------------------------------------------+-------------------------------------------------------------------+---------------------------------------------------------------------------------------------+-------------\n public | log | event 
| 0 | 10 | 25 | \n{PUBLISH,PUBREC1,NEWPAGE,UPLOAD,LOGIN,KILLSESS,LOGOUT} | \n{0.257333,0.248333,0.072,0.0696667,0.0613333,0.0543333,0.0506667} | \n{ACCTUPD,DELETE,IDEMAIL,NEWUSR,NEWUSR,PUBREC0,PUBREC0,RELOGIN,SYSMAINT,UNPUBLISH,UNPUBLISH} | \n0.120881\n(1 row)\n\nAFTER\npajax=> select * from pg_stats where tablename='log' and attname='event';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | \nmost_common_vals | \nmost_common_freqs | \nhistogram_bounds | \ncorrelation\n------------+-----------+---------+-----------+-----------+------------+--------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | log | event | 0 | 10 | 32 | \n{PUBLISH,PUBREC1,NEWPAGE,LOGIN,UPLOAD,KILLSESS,LOGOUT,PUBREC0,UNPUBLISH,NEWUSR,RELOGIN,DELETE,IDEMAIL} | \n{0.249067,0.248533,0.0761333,0.0719333,0.0685333,0.0526,0.045,0.0368,0.0348667,0.029,0.0255333,0.0254667,0.0238667} | \n{ACCTUPD,ACCTUPD,ACCTUPD,ADMLOGIN,CCREQUEST,CCSUCCESS,DEMOSTART,FOCBUY,PAYPAL,SYSMAINT,SYSMAINT,SYSMAINT,SYSMAINT,SYSMAINT,SYSMAINT,SYSMAINT,SYSMAINT,TEMPNEW,TEMPRE} | \n0.106671\n(1 row)\n\n", "msg_date": "Fri, 3 Oct 2008 14:37:35 +0100", "msg_from": "Simon Waters <[email protected]>", "msg_from_op": true, "msg_subject": "7.4 - basic tuning question" }, { "msg_contents": "Simon Waters wrote:\n\nThe best advice is to \"upgrade at your earliest convenience\" with\nperformance questions and 7.4 - you're missing a *lot* of improvements.\nYou say you're planning to anyway, and I'd recommend putting effort into\nthe upgrade rather than waste effort on tuning a system you're leaving.\n\n> I assume that the histogram_bounds for strings are alphabetical in order, so \n> that \"DEMOSTART\" falls between \"DELETE\" and \"IDEMAIL\". Even on a worst case \n> of including both these common values, the planner ought to have assumed that \n> less than <10% of records were likely covered by the value selected, so it \n> seems unlikely to me that not using the index would be a good idea.\n\nWell, the real question is how many blocks need to be read to find those\nDEMOSTART rows. At some point around 5-10% of the table it's easier just\nto read the whole table than go back and fore between index and table.\nThe precise point will depend on how much RAM you have, disk speeds etc.\n\n> => SELECT COUNT(*) FROM log WHERE event='DEMOSTART';\n> (...lots of time passes...)\n> count\n> -------\n> 1432\n> (1 row)\n\nOK, not many. The crucial bit is below though. These are the 10 values\nit will hold stats on, and all it knows is that DEMOSTART has less than\n57000 entries. OK, it's more complicated than that, but basically there\nare values it tracks and everything else. So - it assumes that all other\n values have the same chance of occuring.\n\n> => SELECT COUNT(*), event FROM log GROUP BY event ORDER BY count;\n> \n> count | event\n> --------+-----------\n[snip]\n> 57022 | NEWUSR\n> 64907 | PUBREC0\n> 65449 | UNPUBLISH\n> 92843 | LOGOUT\n> 99018 | KILLSESS\n> 128900 | UPLOAD\n> 134994 | LOGIN\n> 137608 | NEWPAGE\n> 447556 | PUBREC1\n> 489572 | PUBLISH\n\nWhich is why it guesses 20436 rows below. 
If you'd done \"SET\nenable_seqscan = off\" then run the explain again it should have\nestimated a cost for the index that was more than 54317.14\n\n> => EXPLAIN SELECT * FROM log WHERE event='DEMOSTART';\n> QUERY PLAN\n> ------------------------------------------------------------\n> Seq Scan on log (cost=0.00..54317.14 rows=20436 width=93)\n> Filter: (event = 'DEMOSTART'::text)\n> (2 rows)\n> \n> \n> => ALTER TABLE log ALTER COLUMN events SET STATISTICS 50; ANALYSE\n> LOG(event);\n> ALTER TABLE\n> ANALYZE\n> \n> \n> => EXPLAIN SELECT COUNT(*) FROM log WHERE event='DEMOSTART';\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -------\n> Aggregate (cost=5101.43..5101.43 rows=1 width=0)\n> -> Index Scan using log_event on log (cost=0.00..5098.15 rows=1310\n> width=0)\n> Index Cond: (event = 'DEMOSTART'::text)\n> (3 rows)\n\nNot bad - now it knows how many rows it will find, and it sees that the\nindex is cheaper. It's not completely accurate - it uses a statistical\nsampling (and of course it's out of date as soon as you update the table).\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 03 Oct 2008 17:08:22 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 - basic tuning question" } ]
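The diagnostic Richard describes in prose can be run as-is, and together with the per-column statistics change it makes a compact recipe. A sketch assuming the column is named event, as in the table shown above; the session-level override is restored afterwards:

-- see what cost the planner assigns to the index path when the seqscan is forbidden
SET enable_seqscan = off;
EXPLAIN SELECT COUNT(*) FROM log WHERE event = 'DEMOSTART';
RESET enable_seqscan;

-- the fix the poster applied: more histogram buckets for this one column
ALTER TABLE log ALTER COLUMN event SET STATISTICS 50;
ANALYZE log (event);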
[ { "msg_contents": "Hi,\n\nI have a table sct_descriptions which I have vacuumed, analyzed and\nreindexed. The index is on term_index\n\nINFO: analyzing \"public.sct_descriptions\"\nINFO: \"sct_descriptions\": scanned 3000 of 22861 pages, containing\n91877 live rows and 0 dead rows; 3000 rows in sample, 700133 estimated\ntotal rows\n\nI get an index scan if I do where term_index =\n\nselect * from sct_descriptions where term_index = 'CHILLS AND FEVER (FINDING)'\n\nbut I get a sequential scan when I do where term_index like\n\nselect * from sct_descriptions where term_index like 'CHILLS AND FEVER\n(FINDING)'\n\nThis is despite my not using a wildcard, and there being on one row\nreturned in either case.\n\nThe sequential scan costs 400 ms compared to the index scans 15 ms\n\nI changed enable_seqscan = off and put random_page_cost = 0.1 BUT\nSTILL NO USE - IT DOES A SEQUENTIAL SCAN !\n\nIs there anything else I can do? Settings below, this is PostgreSQL 8.3\n\nthanks!\n\nGreg\n\n\"add_missing_from\";\"off\"\n\"allow_system_table_mods\";\"off\"\n\"archive_command\";\"(disabled)\"\n\"archive_mode\";\"off\"\n\"archive_timeout\";\"0\"\n\"array_nulls\";\"on\"\n\"authentication_timeout\";\"1min\"\n\"autovacuum\";\"on\"\n\"autovacuum_analyze_scale_factor\";\"0.1\"\n\"autovacuum_analyze_threshold\";\"50\"\n\"autovacuum_freeze_max_age\";\"200000000\"\n\"autovacuum_max_workers\";\"3\"\n\"autovacuum_naptime\";\"1min\"\n\"autovacuum_vacuum_cost_delay\";\"20ms\"\n\"autovacuum_vacuum_cost_limit\";\"-1\"\n\"autovacuum_vacuum_scale_factor\";\"0.2\"\n\"autovacuum_vacuum_threshold\";\"50\"\n\"backslash_quote\";\"safe_encoding\"\n\"bgwriter_delay\";\"200ms\"\n\"bgwriter_lru_maxpages\";\"100\"\n\"bgwriter_lru_multiplier\";\"2\"\n\"block_size\";\"8192\"\n\"bonjour_name\";\"\"\n\"check_function_bodies\";\"on\"\n\"checkpoint_completion_target\";\"0.5\"\n\"checkpoint_segments\";\"3\"\n\"checkpoint_timeout\";\"5min\"\n\"checkpoint_warning\";\"30s\"\n\"client_encoding\";\"UNICODE\"\n\"client_min_messages\";\"notice\"\n\"commit_delay\";\"0\"\n\"commit_siblings\";\"5\"\n\"config_file\";\"C:/Program Files/PostgreSQL/8.3/data/postgresql.conf\"\n\"constraint_exclusion\";\"off\"\n\"cpu_index_tuple_cost\";\"0.005\"\n\"cpu_operator_cost\";\"0.0025\"\n\"cpu_tuple_cost\";\"0.01\"\n\"custom_variable_classes\";\"\"\n\"data_directory\";\"C:/Program Files/PostgreSQL/8.3/data\"\n\"DateStyle\";\"ISO, MDY\"\n\"db_user_namespace\";\"off\"\n\"deadlock_timeout\";\"1s\"\n\"debug_assertions\";\"off\"\n\"debug_pretty_print\";\"off\"\n\"debug_print_parse\";\"off\"\n\"debug_print_plan\";\"off\"\n\"debug_print_rewritten\";\"off\"\n\"default_statistics_target\";\"10\"\n\"default_tablespace\";\"\"\n\"default_text_search_config\";\"pg_catalog.english\"\n\"default_transaction_isolation\";\"read 
committed\"\n\"default_transaction_read_only\";\"off\"\n\"default_with_oids\";\"off\"\n\"dynamic_library_path\";\"$libdir\"\n\"effective_cache_size\";\"128MB\"\n\"enable_bitmapscan\";\"on\"\n\"enable_hashagg\";\"on\"\n\"enable_hashjoin\";\"on\"\n\"enable_indexscan\";\"on\"\n\"enable_mergejoin\";\"on\"\n\"enable_nestloop\";\"on\"\n\"enable_seqscan\";\"off\"\n\"enable_sort\";\"on\"\n\"enable_tidscan\";\"on\"\n\"escape_string_warning\";\"on\"\n\"explain_pretty_print\";\"on\"\n\"external_pid_file\";\"\"\n\"extra_float_digits\";\"0\"\n\"from_collapse_limit\";\"8\"\n\"fsync\";\"on\"\n\"full_page_writes\";\"on\"\n\"geqo\";\"on\"\n\"geqo_effort\";\"5\"\n\"geqo_generations\";\"0\"\n\"geqo_pool_size\";\"0\"\n\"geqo_selection_bias\";\"2\"\n\"geqo_threshold\";\"12\"\n\"gin_fuzzy_search_limit\";\"0\"\n\"hba_file\";\"C:/Program Files/PostgreSQL/8.3/data/pg_hba.conf\"\n\"ident_file\";\"C:/Program Files/PostgreSQL/8.3/data/pg_ident.conf\"\n\"ignore_system_indexes\";\"off\"\n\"integer_datetimes\";\"off\"\n\"join_collapse_limit\";\"8\"\n\"krb_caseins_users\";\"off\"\n\"krb_realm\";\"\"\n\"krb_server_hostname\";\"\"\n\"krb_server_keyfile\";\"\"\n\"krb_srvname\";\"postgres\"\n\"lc_collate\";\"English_United States.1252\"\n\"lc_ctype\";\"English_United States.1252\"\n\"lc_messages\";\"English_United States\"\n\"lc_monetary\";\"English_United States\"\n\"lc_numeric\";\"English_United States\"\n\"lc_time\";\"English_United States\"\n\"listen_addresses\";\"localhost\"\n\"local_preload_libraries\";\"\"\n\"log_autovacuum_min_duration\";\"-1\"\n\"log_checkpoints\";\"off\"\n\"log_connections\";\"off\"\n\"log_destination\";\"stderr\"\n\"log_directory\";\"pg_log\"\n\"log_disconnections\";\"off\"\n\"log_duration\";\"off\"\n\"log_error_verbosity\";\"default\"\n\"log_executor_stats\";\"off\"\n\"log_filename\";\"postgresql-%Y-%m-%d_%H%M%S.log\"\n\"log_hostname\";\"off\"\n\"log_line_prefix\";\"%t 
\"\n\"log_lock_waits\";\"off\"\n\"log_min_duration_statement\";\"-1\"\n\"log_min_error_statement\";\"error\"\n\"log_min_messages\";\"notice\"\n\"log_parser_stats\";\"off\"\n\"log_planner_stats\";\"off\"\n\"log_rotation_age\";\"1d\"\n\"log_rotation_size\";\"10MB\"\n\"log_statement\";\"none\"\n\"log_statement_stats\";\"off\"\n\"log_temp_files\";\"-1\"\n\"log_timezone\";\"US/Eastern\"\n\"log_truncate_on_rotation\";\"off\"\n\"logging_collector\";\"on\"\n\"maintenance_work_mem\";\"16MB\"\n\"max_connections\";\"100\"\n\"max_files_per_process\";\"1000\"\n\"max_fsm_pages\";\"204800\"\n\"max_fsm_relations\";\"1000\"\n\"max_function_args\";\"100\"\n\"max_identifier_length\";\"63\"\n\"max_index_keys\";\"32\"\n\"max_locks_per_transaction\";\"64\"\n\"max_prepared_transactions\";\"5\"\n\"max_stack_depth\";\"2MB\"\n\"password_encryption\";\"on\"\n\"port\";\"5432\"\n\"post_auth_delay\";\"0\"\n\"pre_auth_delay\";\"0\"\n\"random_page_cost\";\"0.1\"\n\"regex_flavor\";\"advanced\"\n\"search_path\";\"\"$user\",public\"\n\"seq_page_cost\";\"1\"\n\"server_encoding\";\"UTF8\"\n\"server_version\";\"8.3.1\"\n\"server_version_num\";\"80301\"\n\"session_replication_role\";\"origin\"\n\"shared_buffers\";\"32MB\"\n\"shared_preload_libraries\";\"$libdir/plugins/plugin_debugger.dll\"\n\"silent_mode\";\"off\"\n\"sql_inheritance\";\"on\"\n\"ssl\";\"off\"\n\"ssl_ciphers\";\"ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH\"\n\"standard_conforming_strings\";\"off\"\n\"statement_timeout\";\"0\"\n\"superuser_reserved_connections\";\"3\"\n\"synchronize_seqscans\";\"on\"\n\"synchronous_commit\";\"on\"\n\"tcp_keepalives_count\";\"0\"\n\"tcp_keepalives_idle\";\"0\"\n\"tcp_keepalives_interval\";\"0\"\n\"temp_buffers\";\"1024\"\n\"temp_tablespaces\";\"\"\n\"TimeZone\";\"US/Eastern\"\n\"timezone_abbreviations\";\"Default\"\n\"trace_notify\";\"off\"\n\"trace_sort\";\"off\"\n\"track_activities\";\"on\"\n\"track_counts\";\"on\"\n\"transaction_isolation\";\"read committed\"\n\"transaction_read_only\";\"off\"\n\"transform_null_equals\";\"off\"\n\"unix_socket_directory\";\"\"\n\"unix_socket_group\";\"\"\n\"unix_socket_permissions\";\"511\"\n\"update_process_title\";\"on\"\n\"vacuum_cost_delay\";\"0\"\n\"vacuum_cost_limit\";\"200\"\n\"vacuum_cost_page_dirty\";\"20\"\n\"vacuum_cost_page_hit\";\"1\"\n\"vacuum_cost_page_miss\";\"10\"\n\"vacuum_freeze_min_age\";\"100000000\"\n\"wal_buffers\";\"64kB\"\n\"wal_sync_method\";\"open_datasync\"\n\"wal_writer_delay\";\"200ms\"\n\"work_mem\";\"1MB\"\n\"xmlbinary\";\"base64\"\n\"xmloption\";\"content\"\n\"zero_damaged_pages\";\"off\"\n", "msg_date": "Mon, 6 Oct 2008 19:47:17 -0400", "msg_from": "\"Greg Caulton\" <[email protected]>", "msg_from_op": true, "msg_subject": "cant get an index scan with a LIKE" }, { "msg_contents": ">>> \"Greg Caulton\" <[email protected]> wrote: \n \n> but I get a sequential scan when I do where term_index like\n> \n> select * from sct_descriptions where term_index like 'CHILLS AND\nFEVER\n> (FINDING)'\n \n> Is there anything else I can do? 
Settings below, this is PostgreSQL\n8.3\n \n> \"lc_collate\";\"English_United States.1252\"\n> \"lc_ctype\";\"English_United States.1252\"\n> \"lc_messages\";\"English_United States\"\n> \"lc_monetary\";\"English_United States\"\n> \"lc_numeric\";\"English_United States\"\n> \"lc_time\";\"English_United States\"\n \nThis issue is discussed here:\n \nhttp://www.postgresql.org/docs/8.2/interactive/locale.html\n \nwith a solution to your specific problem mentioned here:\n \nhttp://www.postgresql.org/docs/8.2/interactive/indexes-opclass.html\n \nYou can create an index with the appropriate operator type to get LIKE\nto work as you want. I hope this helps.\n \n-Kevin\n", "msg_date": "Mon, 06 Oct 2008 18:59:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cant get an index scan with a LIKE" }, { "msg_contents": "That worked great - THANKS!\n\nCREATE INDEX sct_descriptions_k2\n ON sct_descriptions\n USING btree\n (term_index varchar_pattern_ops);\n\nI noticed I had to keep the original index for the non-like operator\nbut that is not a big deal\n\nCREATE INDEX sct_descriptions_k1\n ON sct_descriptions\n USING btree\n (term_index );\n\n\nthanks again\n\nGreg\n\n\nOn Mon, Oct 6, 2008 at 7:59 PM, Kevin Grittner\n<[email protected]> wrote:\n>>>> \"Greg Caulton\" <[email protected]> wrote:\n>\n>> but I get a sequential scan when I do where term_index like\n>>\n>> select * from sct_descriptions where term_index like 'CHILLS AND\n> FEVER\n>> (FINDING)'\n>\n>> Is there anything else I can do? Settings below, this is PostgreSQL\n> 8.3\n>\n>> \"lc_collate\";\"English_United States.1252\"\n>> \"lc_ctype\";\"English_United States.1252\"\n>> \"lc_messages\";\"English_United States\"\n>> \"lc_monetary\";\"English_United States\"\n>> \"lc_numeric\";\"English_United States\"\n>> \"lc_time\";\"English_United States\"\n>\n> This issue is discussed here:\n>\n> http://www.postgresql.org/docs/8.2/interactive/locale.html\n>\n> with a solution to your specific problem mentioned here:\n>\n> http://www.postgresql.org/docs/8.2/interactive/indexes-opclass.html\n>\n> You can create an index with the appropriate operator type to get LIKE\n> to work as you want. I hope this helps.\n>\n> -Kevin\n>\n", "msg_date": "Mon, 6 Oct 2008 20:15:50 -0400", "msg_from": "\"Greg Caulton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cant get an index scan with a LIKE" } ]
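The fix above works because the cluster runs a non-C locale (lc_collate is English_United States.1252), and under a non-C locale a plain btree index cannot be used for LIKE prefix matches; the xxx_pattern_ops operator classes exist for exactly this case. A minimal sketch of the complete solution, reusing the table and column names from this thread (for a text column the operator class would be text_pattern_ops instead of varchar_pattern_ops):

    -- pattern index: used for left-anchored LIKE searches under a non-C locale
    CREATE INDEX sct_descriptions_k2
        ON sct_descriptions (term_index varchar_pattern_ops);

    -- the ordinary index is still needed for regular comparisons, range
    -- queries and ORDER BY, as Greg notes above
    CREATE INDEX sct_descriptions_k1
        ON sct_descriptions (term_index);

    -- verify that the planner now picks an index scan for a prefix search
    EXPLAIN ANALYZE
    SELECT *
    FROM sct_descriptions
    WHERE term_index LIKE 'CHILLS AND FEVER%';

Note that only left-anchored patterns benefit; a pattern such as '%FEVER%' still forces a sequential scan (or a trigram/full-text approach).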
[ { "msg_contents": "\nOne of our build servers recently ran out of disc space while trying to \ncopy an entire database. This led me to investigate the database cluster, \nwhich is stored on a RAID array with a total size of 1TB. Running a query \nto list all databases and their sizes did not add up to the amount of \nspace being used by Postgres, so I had a look at the pgsql/base directory. \nIt appears that there are a few large directories that do not correspond \nto any database. I wonder if these have been left behind accidentally by \nPostgres.\n\nHere are the database directories:\n\n Size (kB)\tDirectory\tDatabase\n\n 32\t\tpgsql_tmp\n 4352\t\t11510\t\ttemplate0\n 4368\t\t1\t\ttemplate1\n 4464\t\t11511\t\tpostgres\n 5368\t\t30103627\txav-userprofile-test\n 6096\t\t8088167\t\touterjoins-userprofile-12.0-copy\n 8676\t\t30103406\txav-test\n 10052\t\t31313164\tcommon-tgt-items-kmr-modmine\n 19956\t\t1108178\t\tmodmine-3-preview-18-feb-2008\n 89452\t\t14578911\tcommon-tgt-items-kmr\n 118940\t\t9952565\t\tproduction-xav-13\n 201192\t\t1257481\t\tcommon-tgt-items-gtocmine-rns\n 296552\t\t7040137\t\tcommon-tgt-items-flyminebuild\n 1557160\t9843085\n 1699624\t18456655\tcommon-src-items-flyminebuild\n 3376096\t278561\n 3995276\t9064702\t\tproduction-unimine-pride-beta5\n 8528136\t1257482\t\tgtocmine-rns\n 40815456\t29233051\n 42278196\t27473906\n 47112412\t28110832\n 47913532\t32728815\tproduction-flyminebuild:ensembl-anopheles\n 60519524\t32841289\tproduction-flyminebuild:go\n 67626328\t27377902\n 69513844\t32856736\tproduction-flyminebuild:flybase-dmel-gene-fasta\n 74289908\t32938724\tproduction-flyminebuild:pubmed-gene\n 75786720\t32941684\tproduction-flyminebuild:biogrid\n 77361800\t32944072\tproduction-flyminebuild:update-publications\n 80160256\t32947141\tproduction-flyminebuild:create-references\n 81333908\t32574190\tflybasemine-production\n 86356140\t12110825\n 87544200\t33049747\tproduction-flyminebuild\n\nSo on this server, the wasted space takes up 276GB, which is not \nacceptable. I believe that if we re-initialise the cluster and re-create \nthe databases, these directories would disappear. Taking a look at the \ndirectory 12110825, all the files inside were last accessed several \nmonths ago. So, I have a few questions:\n\n1. Is this space used for anything, or is it just abandoned? Is this a\n bug?\n2. How do I reclaim this wasted space in a safe manner?\n3. How do I prevent this happening again?\n\nMatthew\n\n-- \nAn ant doesn't have a lot of processing power available to it. I'm not trying\nto be speciesist - I wouldn't want to detract you from such a wonderful\ncreature, but, well, there isn't a lot there, is there?\n -- Computer Science Lecturer\n", "msg_date": "Wed, 8 Oct 2008 12:55:52 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Disc space usage" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> One of our build servers recently ran out of disc space while trying to \n> copy an entire database. This led me to investigate the database cluster, \n> which is stored on a RAID array with a total size of 1TB. Running a query \n> to list all databases and their sizes did not add up to the amount of \n> space being used by Postgres, so I had a look at the pgsql/base directory. \n> It appears that there are a few large directories that do not correspond \n> to any database. 
I wonder if these have been left behind accidentally by \n> Postgres.\n\nAnything under $PGDATA/base that doesn't correspond to a live row in\npg_database is junk. The interesting question is how it got that way,\nand in particular how you seem to have managed to have repeated\ninstances of it.\n\nI gather that you're in the habit of using CREATE DATABASE to copy\nlarge existing databases, so the most likely theory is that these are\nleftovers from previous failed copy attempts. Now CREATE DATABASE\ndoes attempt to clean up if its copying fails, but there are various\nways to break that, for instance hitting control-C partway through the\ncleanup phase. So I'm wondering if maybe that's been done a few times.\n\nWhat PG version is this, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Oct 2008 09:16:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "On Wed, 8 Oct 2008, Tom Lane wrote:\n>> It appears that there are a few large directories that do not correspond\n>> to any database. I wonder if these have been left behind accidentally by\n>> Postgres.\n>\n> Anything under $PGDATA/base that doesn't correspond to a live row in\n> pg_database is junk.\n\nSo I can delete it? Might be safer to stop the db server while I do that \nthough.\n\n> The interesting question is how it got that way, and in particular how \n> you seem to have managed to have repeated instances of it.\n>\n> I gather that you're in the habit of using CREATE DATABASE to copy\n> large existing databases, so the most likely theory is that these are\n> leftovers from previous failed copy attempts. Now CREATE DATABASE\n> does attempt to clean up if its copying fails, but there are various\n> ways to break that, for instance hitting control-C partway through the\n> cleanup phase. So I'm wondering if maybe that's been done a few times.\n\nYes, we do copy large databases quite often, and drop them again. The \ndatabase cluster was initialised back in March.\n\n> What PG version is this, anyway?\n\nPostgres 8.3.0\n\nMatthew\n\n-- \nUnfortunately, university regulations probably prohibit me from eating\nsmall children in front of the lecture class.\n -- Computer Science Lecturer\n", "msg_date": "Wed, 8 Oct 2008 14:46:49 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "On Wed, 8 Oct 2008, Tom Lane wrote:\n> The interesting question is how it got that way, and in particular how \n> you seem to have managed to have repeated instances of it.\n\nSpeaking to some of my colleagues, sometimes the createdb process fails \nwith a very specific error message. If we wait five seconds and try again, \nthen it succeeds. So, maybe the duff directories are from those failures.\n\nThe error message is always something like this:\n\ncreatedb: database creation failed: ERROR: could not stat file \"base/32285287/32687035\": No such file or directory\n\nJust before running createdb, we always have some quite heavy write \ntraffic. Is it possible that the changes that we just wrote haven't been \ncheckpointed properly yet, resulting in some of those files being missing \nfrom the template database, and therefore the createdb to fail?\n\nMatthew\n\n-- \nNow, you would have thought these coefficients would be integers, given that\nwe're working out integer results. Using a fraction would seem really\nstupid. 
Well, I'm quite willing to be stupid here - in fact, I'm going to\nuse complex numbers. -- Computer Science Lecturer\n", "msg_date": "Wed, 8 Oct 2008 15:00:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Wed, 8 Oct 2008, Tom Lane wrote:\n>> Anything under $PGDATA/base that doesn't correspond to a live row in\n>> pg_database is junk.\n\n> So I can delete it? Might be safer to stop the db server while I do that \n> though.\n\nIn principle, at least, you shouldn't need to --- there shouldn't be any\nbuffers representing such files.\n\n>> What PG version is this, anyway?\n\n> Postgres 8.3.0\n\nYou should consider an update to 8.3.4. A quick look in the post-8.3.0\nCVS logs shows a couple of possibly relevant fixes:\n\n2008-04-18 13:05 tgl\n\n\t* src/: backend/commands/dbcommands.c, include/port.h,\n\tport/dirmod.c (REL8_3_STABLE): Fix rmtree() so that it keeps going\n\tafter failure to remove any individual file; the idea is that we\n\tshould clean up as much as we can, even if there's some problem\n\tremoving one file. Make the error messages a bit less misleading,\n\ttoo. In passing, const-ify function arguments.\n\n2008-04-16 19:59 tgl\n\n\t* src/: backend/access/nbtree/nbtree.c,\n\tbackend/access/nbtree/nbtutils.c, backend/access/transam/xlog.c,\n\tbackend/commands/dbcommands.c, backend/port/ipc_test.c,\n\tbackend/storage/ipc/ipc.c, include/access/nbtree.h,\n\tinclude/storage/ipc.h, include/utils/elog.h (REL8_3_STABLE): Repair\n\ttwo places where SIGTERM exit could leave shared memory state\n\tcorrupted. (Neither is very important if SIGTERM is used to shut\n\tdown the whole database cluster together, but there's a problem if\n\tsomeone tries to SIGTERM individual backends.)\tTo do this,\n\tintroduce new infrastructure macros\n\tPG_ENSURE_ERROR_CLEANUP/PG_END_ENSURE_ERROR_CLEANUP that take care\n\tof transiently pushing an on_shmem_exit cleanup hook. Also use\n\tthis method for createdb cleanup --- that wasn't a\n\tshared-memory-corruption problem, but SIGTERM abort of createdb\n\tcould leave orphaned files lying around.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Oct 2008 10:01:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Speaking to some of my colleagues, sometimes the createdb process fails \n> with a very specific error message. If we wait five seconds and try again, \n> then it succeeds. So, maybe the duff directories are from those failures.\n\n> The error message is always something like this:\n\n> createdb: database creation failed: ERROR: could not stat file \"base/32285287/32687035\": No such file or directory\n\n> Just before running createdb, we always have some quite heavy write \n> traffic.\n\nHmm, would that include dropping tables in the database you are about to\ncopy? If so, this error is fairly readily explainable as a side effect\nof the delayed dropping of physical files in recent PG versions.\n\n(As noted in the manual, CREATE DATABASE isn't really intended as a COPY\nDATABASE operation --- it is expecting the source database to be pretty\nstatic. I think you could make this more reliable if you do a manual\ncheckpoint between modifying the source database and copying it.)\n\nHowever, that still leaves me wondering why the leftover copied\ndirectories stick around. 
If the copying step failed that way,\nCREATE DATABASE *should* try to clean up the target tree before\nexiting. And AFAICS it wouldn't even report the error until after\ncompleting that cleanup. So there's still some piece of the puzzle\nthat's missing.\n\nDo you have some specific examples of this error message at hand?\nCan you try to confirm whether the reported path corresponds to\nsomething in the CREATE's source database? If it's actually\ncomplaining about a stat failure in the target tree, then there's\nsomething else going on altogether. I don't see anything in that\npath that would give this message, but I might be missing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Oct 2008 10:38:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "On Wed, Oct 8, 2008 at 8:00 AM, Matthew Wakeling <[email protected]> wrote:\n> The error message is always something like this:\n>\n> createdb: database creation failed: ERROR: could not stat file\n> \"base/32285287/32687035\": No such file or directory\n\nBy any chance are you running on windows with virus protection\nsoftware on the server?\n", "msg_date": "Wed, 8 Oct 2008 09:14:37 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disc space usage" }, { "msg_contents": "On Wed, 8 Oct 2008, Scott Marlowe wrote:\n>> The error message is always something like this:\n>>\n>> createdb: database creation failed: ERROR: could not stat file\n>> \"base/32285287/32687035\": No such file or directory\n>\n> By any chance are you running on windows with virus protection\n> software on the server?\n\nYou insult me, sir! ;)\n\nNo, it's Linux.\n\nMatthew\n\n-- \nFor every complex problem, there is a solution that is simple, neat, and wrong.\n -- H. L. Mencken \n", "msg_date": "Wed, 8 Oct 2008 16:23:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disc space usage" }, { "msg_contents": "On Wed, 8 Oct 2008, Tom Lane wrote:\n> Hmm, would that include dropping tables in the database you are about to\n> copy? If so, this error is fairly readily explainable as a side effect\n> of the delayed dropping of physical files in recent PG versions.\n\nIt could quite possibly include dropping tables. We're running quite a \ncomplex system with lots going on all the time.\n\n> (As noted in the manual, CREATE DATABASE isn't really intended as a COPY\n> DATABASE operation --- it is expecting the source database to be pretty\n> static. I think you could make this more reliable if you do a manual\n> checkpoint between modifying the source database and copying it.)\n\nI gather this. However, I think it would be sensible to make sure it can \nnever \"corrupt the database\" as it were. It's fine for it to lock everyone \nout of the database while the copying is happening though. The only reason \nfor it to fail should be if someone is logged into the template database.\n\n> Do you have some specific examples of this error message at hand?\n> Can you try to confirm whether the reported path corresponds to\n> something in the CREATE's source database? If it's actually\n> complaining about a stat failure in the target tree, then there's\n> something else going on altogether. 
I don't see anything in that\n> path that would give this message, but I might be missing it.\n\nThe oid in the error message is of a database that no longer exists, which \nindicates that it is *probably* referring to the template database. \nUnfortunately my colleagues just wrote the script so that it retries, so \nwe don't have a decent log of the failures, which were a while back. \nHowever, I have now altered the script so that it fails with a message \nsaying \"Report this to Matthew\", so if it happens again I'll be able to \ngive you some more detail.\n\nMatthew\n\n-- \nYou will see this is a 3-blackboard lecture. This is the closest you are going\nto get from me to high-tech teaching aids. Hey, if they put nooses on this, it\nwould be fun! -- Computer Science Lecturer\n", "msg_date": "Wed, 8 Oct 2008 16:38:10 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> The oid in the error message is of a database that no longer exists, which \n> indicates that it is *probably* referring to the template database. \n> Unfortunately my colleagues just wrote the script so that it retries, so \n> we don't have a decent log of the failures, which were a while back. \n> However, I have now altered the script so that it fails with a message \n> saying \"Report this to Matthew\", so if it happens again I'll be able to \n> give you some more detail.\n\nOne other bit of possibly useful data would be to eyeball the file mod\ntimes in the orphaned subdirectories. If they were from failed CREATE\nDATABASEs then I'd expect every file in a given directory to have the\nsame mod time (modulo the amount of time it takes to copy the DB, which\nis probably not trivial for the DB sizes you're dealing with). If you\ncould also correlate that to the times you saw CREATE failures then it'd\nbe pretty convincing that we know failed CREATEs are the issue.\n\nAlso, I would definitely urge you to update to 8.3.4. Although I'm not\nseeing a mechanism for CREATE to fail to clean up like this, I'm looking\nat the 8.3 branch tip code, not 8.3.0 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Oct 2008 11:51:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disc space usage " }, { "msg_contents": "On Wed, 8 Oct 2008, Tom Lane wrote:\n> One other bit of possibly useful data would be to eyeball the file mod\n> times in the orphaned subdirectories. If they were from failed CREATE\n> DATABASEs then I'd expect every file in a given directory to have the\n> same mod time (modulo the amount of time it takes to copy the DB, which\n> is probably not trivial for the DB sizes you're dealing with).\n\nYes, I did that, and the file modification times were in such a pattern.\n\n> If you could also correlate that to the times you saw CREATE failures \n> then it'd be pretty convincing that we know failed CREATEs are the \n> issue.\n\nCan't do that until next time it happens, because we don't have the logs \nfrom when it did happen any more.\n\nMatthew\n\n-- \nJadzia: Don't forget the 34th rule of acquisition: Peace is good for business.\nQuark: That's the 35th.\nJadzia: Oh yes, that's right. What's the 34th again?\nQuark: War is good for business. 
It's easy to get them mixed up.\n", "msg_date": "Thu, 9 Oct 2008 11:00:31 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disc space usage " } ]
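For anyone cleaning up after the same problem, the check Tom describes (a base/ subdirectory with no matching pg_database row) can be driven from the catalog. A sketch, to be compared against a listing of $PGDATA/base before removing anything:

    -- live databases, their OIDs and sizes; any numeric directory under
    -- $PGDATA/base whose name is not one of these OIDs is an orphan
    -- left behind by a failed CREATE DATABASE
    SELECT oid, datname, pg_size_pretty(pg_database_size(oid)) AS size
    FROM pg_database
    ORDER BY oid;

Issuing a manual CHECKPOINT between the heavy write traffic and the CREATE DATABASE, as Tom suggests, also reduces the chance of the "could not stat file" failures that produce these leftovers in the first place.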
[ { "msg_contents": "Hi there,\n\nI use different functions returning setof record, and they are working well. \nThe problem is the performance when I use those functions in joins, for \ninstance:\n\n SELECT *\n FROM \"Table1\" t1\n JOIN \"Function1\"( a1, a2, ... aN ) AS f1( ColA int4, ColB \nvarchar, ... )\n ON t1.ColX = f1.ColA\n\nThe problem is I'm not able to make indexes on the function, even inside I \nhave just another select statement from different permanent tables, with \nsome where clauses depending on the function arguments.\n\nDo you know a way to build such a function, returning something I can join \nin an outer select statement like above, using indexes or another way to run \nit faster ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Thu, 9 Oct 2008 17:47:30 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "low performance on functions returning setof record" }, { "msg_contents": "\"Sabin Coanda\" <[email protected]> writes:\n> I use different functions returning setof record, and they are working well. \n> The problem is the performance when I use those functions in joins, for \n> instance:\n\n> SELECT *\n> FROM \"Table1\" t1\n> JOIN \"Function1\"( a1, a2, ... aN ) AS f1( ColA int4, ColB \n> varchar, ... )\n> ON t1.ColX = f1.ColA\n\n> The problem is I'm not able to make indexes on the function, even inside I \n> have just another select statement from different permanent tables, with \n> some where clauses depending on the function arguments.\n\nThere's not a lot you can do about that at the moment. 8.4 will have\nthe ability to inline functions returning sets, if they're SQL-language\nand consist of just a single SELECT, but existing releases won't do it.\n\nYou might consider trying to refactor your stuff to use views ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Oct 2008 15:30:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: low performance on functions returning setof record " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nLe 9 oct. 08 � 21:30, Tom Lane a �crit :\n> There's not a lot you can do about that at the moment. 8.4 will have\n> the ability to inline functions returning sets, if they're SQL- \n> language\n> and consist of just a single SELECT, but existing releases won't do \n> it.\n\n\nI'm actually using 8.3 functions cost/rows planner estimation to trick \nit into avoiding nestloop into some INNER JOIN situations where any \namount of up-to-date statistics won't help.\n\nWill the 8.4 ability to inline plain SQL still consider the given \nhardcoded ROWS estimation?\n\nFWIW the difference of timing of one of the queries where I'm using \nthis trick is about 35 mins or more against 48 seconds. It allows the \nplanner to choose MergeJoin paths instead of Nestloop ones, where \ninner loop has several millions records, and definitely not just \nseveral records, like planner/stats bet.\n\nRegards,\n- --\ndim\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (Darwin)\n\niEYEARECAAYFAkjuYYcACgkQlBXRlnbh1bmAjgCePkyl9qWTpQ1Gdk/yp3IINK+z\ng8EAoJuAzu9B3GUiPI1J5dCcbzeiSABG\n=5J6b\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 9 Oct 2008 21:54:47 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: low performance on functions returning setof record " } ]
[ { "msg_contents": "\nOk, I know that such an open and vague question like this one\nis... well, open and vague... But still.\n\nThe short story:\n\nJust finished an 8.3.4 installation on a new machine, to replace\nan existing one; the new machine is superior (i.e., higher\nperformance) in virtually every way --- twice as much memory,\nfaster processor, faster drives, etc.\n\nI made an exact copy of the existing database on the new\nmachine, and the exact same queries run on both reveal that\nthe old machine beats the new one by a factor of close to 2 !!!!\n(i.e., the same queries run close to twice as fast on the old\nmachine!!!)\n\nTo make things worse: the old machine is in operation, under\nnormal workload (and right now the system may be around\npeak time), and the new machine is there sitting doing nothing;\njust one user logged in using psql to run the queries --- *no-one\nand nothing* is connecting to the new server.\n\nSo... What's going on???\n\n\nThe details:\n\nCPU:\nNew: Opteron DC 1218HE (1MB cache per core) @2.6GHz\nOld: Athlon64 X2 (512K cache per core) @2.2GHz\n\nRAM:\nNew: 4GB\nOld: 2GB\n\nHD:\nDoesn't matter the capacity, but I have every reason to believe\nthe new one is faster --- hdparm reports 105MB/sec transfer\nrate; the measurement for the old server is meaningless, since\nit is in operation (i.e., there is actual database activity), so it\nmeasures between 50MB/sec and 70MB/sec. Given its age, I\nwould estimate 70 to 80 MB/sec\n\nOS:\nNew: CentOS 5.2 (gcc 4.1.2)\nOld: FC6 (gcc 4.1.2)\n\nPG:\nNew: 8.3.4 installed from source\nOld: 8.2.4 installed from source\n\nPresumably relevant configuration parameters --- shared_buffers\nwas set to 250MB on the old one; I set it to 500MB on the new\none (kinda makes sense, no? 1/8 of the physical memory in both\ncases).\n\nI set max_fsm_pages a little bit higher on the new one (409600\ninstead of 307200 on the old one). 
The rest is pretty much\nidentical (except for the autovacuum --- I left the defaults in the\nnew one)\n\n\nThe old machine is vacuum-analyzed once a day (around 4AM);\non the new one, I ran a vacuumdb -z -f after populating it.\n\n\nSome interesting outputs:\n\nexplain analyze select count(*) from users;\nNew:\n Aggregate (cost=8507.11..8507.12 rows=1 width=0) (actual\ntime=867.582..867.584 rows=1 loops=1)\n -> Seq Scan on users (cost=0.00..7964.49 rows=217049 width=0)\n(actual time=0.016..450.560 rows=217049 loops=1)\n Total runtime: 867.744 ms\n\nOld:\n Aggregate (cost=17171.22..17171.22 rows=1 width=0) (actual\ntime=559.475..559.476 rows=1 loops=1)\n -> Seq Scan on users (cost=0.00..16628.57 rows=217057 width=0)\n(actual time=0.009..303.026 rows=217107 loops=1)\n Total runtime: 559.536 ms\n\nRunning the same command again several times practically\ndoes not change anything.\n\n\nexplain analyze select count(*) from users where username like 'A%';\nNew:\n Aggregate (cost=6361.28..6361.29 rows=1 width=0) (actual\ntime=87.528..87.530 rows=1 loops=1)\n -> Bitmap Heap Scan on users (cost=351.63..6325.33 rows=14376\nwidth=0) (actual time=6.444..53.426 rows=17739 loops=1)\n Filter: ((username)::text ~~ 'a%'::text)\n -> Bitmap Index Scan on c_username_unique (cost=0.00..348.04\nrows=14376 width=0) (actual time=5.383..5.383 rows=17739 loops=1)\n Index Cond: (((username)::text >= 'a'::text) AND\n((username)::text < 'b'::text))\n Total runtime: 87.638 ms\n\nOld:\n Aggregate (cost=13188.91..13188.92 rows=1 width=0) (actual\ntime=61.743..61.745 rows=1 loops=1)\n -> Bitmap Heap Scan on users (cost=392.07..13157.75 rows=12466\nwidth=0) (actual time=7.433..40.847 rows=17747 loops=1)\n Filter: ((username)::text ~~ 'a%'::text)\n -> Bitmap Index Scan on c_username_unique (cost=0.00..388.96\nrows=12466 width=0) (actual time=5.652..5.652 rows=17861 loops=1)\n Index Cond: (((username)::text >= 'a'::character varying)\nAND ((username)::text < 'b'::character varying))\n Total runtime: 61.824 ms\n\n\nAny ideas?\n\n", "msg_date": "Thu, 09 Oct 2008 19:51:26 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "\"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "On Thu, Oct 9, 2008 at 4:51 PM, Carlos Moreno <[email protected]> wrote:\n\n>\n> Ok, I know that such an open and vague question like this one\n> is... well, open and vague... But still.\n>\n> The short story:\n>\n> Just finished an 8.3.4 installation on a new machine, to replace\n> an existing one; the new machine is superior (i.e., higher\n> performance) in virtually every way --- twice as much memory,\n> faster processor, faster drives, etc.\n>\n> I made an exact copy of the existing database on the new\n> machine, and the exact same queries run on both reveal that\n> the old machine beats the new one by a factor of close to 2 !!!!\n> (i.e., the same queries run close to twice as fast on the old\n> machine!!!)\n>\n> To make things worse: the old machine is in operation, under\n> normal workload (and right now the system may be around\n> peak time), and the new machine is there sitting doing nothing;\n> just one user logged in using psql to run the queries --- *no-one\n> and nothing* is connecting to the new server.\n>\n> So... 
What's going on???\n>\n>\n> The details:\n>\n> CPU:\n> New: Opteron DC 1218HE (1MB cache per core) @2.6GHz\n> Old: Athlon64 X2 (512K cache per core) @2.2GHz\n>\n> RAM:\n> New: 4GB\n> Old: 2GB\n>\n> HD:\n> Doesn't matter the capacity, but I have every reason to believe\n> the new one is faster --- hdparm reports 105MB/sec transfer\n> rate; the measurement for the old server is meaningless, since\n> it is in operation (i.e., there is actual database activity), so it\n> measures between 50MB/sec and 70MB/sec. Given its age, I\n> would estimate 70 to 80 MB/sec\n>\n> OS:\n> New: CentOS 5.2 (gcc 4.1.2)\n> Old: FC6 (gcc 4.1.2)\n>\n> PG:\n> New: 8.3.4 installed from source\n> Old: 8.2.4 installed from source\n>\n> Presumably relevant configuration parameters --- shared_buffers\n> was set to 250MB on the old one; I set it to 500MB on the new\n> one (kinda makes sense, no? 1/8 of the physical memory in both\n> cases).\n>\n> I set max_fsm_pages a little bit higher on the new one (409600\n> instead of 307200 on the old one). The rest is pretty much\n> identical (except for the autovacuum --- I left the defaults in the\n> new one)\n>\n>\n> The old machine is vacuum-analyzed once a day (around 4AM);\n> on the new one, I ran a vacuumdb -z -f after populating it.\n>\n>\n> Some interesting outputs:\n>\n> explain analyze select count(*) from users;\n> New:\n> Aggregate (cost=8507.11..8507.12 rows=1 width=0) (actual\n> time=867.582..867.584 rows=1 loops=1)\n> -> Seq Scan on users (cost=0.00..7964.49 rows=217049 width=0)\n> (actual time=0.016..450.560 rows=217049 loops=1)\n> Total runtime: 867.744 ms\n>\n> Old:\n> Aggregate (cost=17171.22..17171.22 rows=1 width=0) (actual\n> time=559.475..559.476 rows=1 loops=1)\n> -> Seq Scan on users (cost=0.00..16628.57 rows=217057 width=0)\n> (actual time=0.009..303.026 rows=217107 loops=1)\n> Total runtime: 559.536 ms\n>\n> Running the same command again several times practically\n> does not change anything.\n>\n>\n> explain analyze select count(*) from users where username like 'A%';\n> New:\n> Aggregate (cost=6361.28..6361.29 rows=1 width=0) (actual\n> time=87.528..87.530 rows=1 loops=1)\n> -> Bitmap Heap Scan on users (cost=351.63..6325.33 rows=14376\n> width=0) (actual time=6.444..53.426 rows=17739 loops=1)\n> Filter: ((username)::text ~~ 'a%'::text)\n> -> Bitmap Index Scan on c_username_unique (cost=0.00..348.04\n> rows=14376 width=0) (actual time=5.383..5.383 rows=17739 loops=1)\n> Index Cond: (((username)::text >= 'a'::text) AND\n> ((username)::text < 'b'::text))\n> Total runtime: 87.638 ms\n>\n> Old:\n> Aggregate (cost=13188.91..13188.92 rows=1 width=0) (actual\n> time=61.743..61.745 rows=1 loops=1)\n> -> Bitmap Heap Scan on users (cost=392.07..13157.75 rows=12466\n> width=0) (actual time=7.433..40.847 rows=17747 loops=1)\n> Filter: ((username)::text ~~ 'a%'::text)\n> -> Bitmap Index Scan on c_username_unique (cost=0.00..388.96\n> rows=12466 width=0) (actual time=5.652..5.652 rows=17861 loops=1)\n> Index Cond: (((username)::text >= 'a'::character varying)\n> AND ((username)::text < 'b'::character varying))\n> Total runtime: 61.824 ms\n>\n>\n> Any ideas?\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nFirst, use iostat or another tool to view the disk usage on the new machine\nduring these queries and validate that it is not using the disk at all.\nThis is most likely the case.\n\nThen, to be sure, set its config parameters to be equal 
to the old one, and\nturn off auto-vacuum. This will also most likely have no effect.\n\nOnce this is confirmed, we can be pretty sure that the issue is restricted\nto:\nCPU / RAM / Motherboard on the hardware side. There may still be some\nsoftware effects in the OS or drivers, or PostgreSQL to account for, but\nlets drill into the hardware and try and eliminate that first.\n\nSure, the processor should be faster, but Athlon64s / Opterons are very\nsensitive to the RAM used and its performance and tuning.\nSo, you should find some basic CPU benchmarks and RAM benchmarks -- you'll\nwant to measure latency as well as bandwidth.\nAthlon64 and Opteron both typically have two memory busses per processor,\nand it is possible to populate the memory banks in such a way that the\nsystem has half the bandwidth.\nIn any event, you'll first want to identify if simple benchmark software is\nable to prove a disparity between the systems independant of postgres. This\nmay be a bit difficult to do on the live system however.\n\nBut it is my suspicion that Postgres performance is often more dependant on\nthe memory subsystem performance than the CPU Mhz (as are most databases)\nand poor components, configuration, or tuning on that side would show up in\nqueries like the examples here.\n\nOn Thu, Oct 9, 2008 at 4:51 PM, Carlos Moreno <[email protected]> wrote:\n\nOk, I know that such an open and vague question like this one\nis...  well, open and vague...  But still.\n\nThe short story:\n\nJust finished an 8.3.4 installation on a new machine, to replace\nan existing one;  the new machine is superior (i.e., higher\nperformance) in virtually every way --- twice as much memory,\nfaster processor, faster drives, etc.\n\nI made an exact copy of the existing database on the new\nmachine, and the exact same queries run on both reveal that\nthe old machine beats the new one by a factor of close to 2 !!!!\n(i.e., the same queries run close to twice as fast on the old\nmachine!!!)\n\nTo make things worse:  the old machine is in operation, under\nnormal workload  (and right now the system may be around\npeak time), and the new machine is there sitting doing nothing;\njust one user logged in using psql to run the queries --- *no-one\nand nothing* is connecting to the new server.\n\nSo... What's going on???\n\n\nThe details:\n\nCPU:\nNew: Opteron DC 1218HE  (1MB cache per core) @2.6GHz\nOld:  Athlon64 X2  (512K cache per core)  @2.2GHz\n\nRAM:\nNew:  4GB\nOld:   2GB\n\nHD:\nDoesn't matter the capacity, but I have every reason to believe\nthe new one is faster --- hdparm reports 105MB/sec transfer\nrate;  the measurement for the old server is meaningless, since\nit is in operation  (i.e., there is actual database activity), so it\nmeasures between 50MB/sec and 70MB/sec.  Given its age, I\nwould estimate 70 to 80 MB/sec\n\nOS:\nNew:  CentOS 5.2  (gcc 4.1.2)\nOld:  FC6  (gcc 4.1.2)\n\nPG:\nNew:  8.3.4 installed from source\nOld:   8.2.4 installed from source\n\nPresumably relevant configuration parameters --- shared_buffers\nwas set to 250MB on the old one;  I set it to 500MB on the new\none  (kinda makes sense, no?  1/8 of the physical memory in both\ncases).\n\nI set max_fsm_pages a little bit higher on the new one (409600\ninstead of 307200 on the old one).  
The rest is pretty much\nidentical  (except for the autovacuum --- I left the defaults in the\nnew one)\n\n\nThe old machine is vacuum-analyzed once a day  (around 4AM);\non the new one, I ran a vacuumdb -z -f after populating it.\n\n\nSome interesting outputs:\n\nexplain analyze select count(*) from users;\nNew:\n Aggregate  (cost=8507.11..8507.12 rows=1 width=0) (actual\ntime=867.582..867.584 rows=1 loops=1)\n   ->  Seq Scan on users  (cost=0.00..7964.49 rows=217049 width=0)\n(actual time=0.016..450.560 rows=217049 loops=1)\n Total runtime: 867.744 ms\n\nOld:\n Aggregate  (cost=17171.22..17171.22 rows=1 width=0) (actual\ntime=559.475..559.476 rows=1 loops=1)\n   ->  Seq Scan on users  (cost=0.00..16628.57 rows=217057 width=0)\n(actual time=0.009..303.026 rows=217107 loops=1)\n Total runtime: 559.536 ms\n\nRunning the same command again several times practically\ndoes not change anything.\n\n\nexplain analyze select count(*) from users where username like 'A%';\nNew:\n Aggregate  (cost=6361.28..6361.29 rows=1 width=0) (actual\ntime=87.528..87.530 rows=1 loops=1)\n   ->  Bitmap Heap Scan on users  (cost=351.63..6325.33 rows=14376\nwidth=0) (actual time=6.444..53.426 rows=17739 loops=1)\n         Filter: ((username)::text ~~ 'a%'::text)\n         ->  Bitmap Index Scan on c_username_unique  (cost=0.00..348.04\nrows=14376 width=0) (actual time=5.383..5.383 rows=17739 loops=1)\n               Index Cond: (((username)::text >= 'a'::text) AND\n((username)::text < 'b'::text))\n Total runtime: 87.638 ms\n\nOld:\n Aggregate  (cost=13188.91..13188.92 rows=1 width=0) (actual\ntime=61.743..61.745 rows=1 loops=1)\n   ->  Bitmap Heap Scan on users  (cost=392.07..13157.75 rows=12466\nwidth=0) (actual time=7.433..40.847 rows=17747 loops=1)\n         Filter: ((username)::text ~~ 'a%'::text)\n         ->  Bitmap Index Scan on c_username_unique  (cost=0.00..388.96\nrows=12466 width=0) (actual time=5.652..5.652 rows=17861 loops=1)\n               Index Cond: (((username)::text >= 'a'::character varying)\nAND ((username)::text < 'b'::character varying))\n Total runtime: 61.824 ms\n\n\nAny ideas?\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nFirst, use iostat or another tool  to view the disk usage on the new\nmachine during these queries and validate that it is not using the disk\nat all.  This is most likely the case.\n\nThen, to be sure, set its config parameters to be equal to the old one,\nand turn off auto-vacuum.  This will also most likely have no effect.\n\nOnce this is confirmed, we can be pretty sure that the issue is restricted to:\nCPU / RAM / Motherboard on the hardware side.  There may still be some\nsoftware effects in the OS or drivers, or PostgreSQL to account for,\nbut lets drill into the hardware and try and eliminate that first.\n\nSure, the processor should be faster, but Athlon64s / Opterons are very\nsensitive to the RAM used and its performance and tuning.\nSo, you should find some basic CPU benchmarks and RAM benchmarks -- you'll want to measure latency as well as bandwidth.\nAthlon64 and Opteron both typically have two memory busses per\nprocessor, and it is possible to populate the memory banks in such a\nway that the system has half the bandwidth.\nIn any event, you'll first want to identify if simple benchmark\nsoftware is able to prove a disparity between the systems independant\nof postgres.  This may be a bit difficult to do on the live system\nhowever.  
\n\nBut it is my suspicion that Postgres performance is often more\ndependant on the memory subsystem performance than the CPU Mhz (as are\nmost databases) and poor components, configuration, or tuning on that\nside would show up in queries like the examples here.", "msg_date": "Thu, 9 Oct 2008 17:34:55 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "Scott Carey wrote:\n> On Thu, Oct 9, 2008 at 4:51 PM, Carlos Moreno <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> \n> Ok, I know that such an open and vague question like this one\n> is... well, open and vague... But still.\n> \n> The short story:\n> \n> Just finished an 8.3.4 installation on a new machine, to replace\n> an existing one; the new machine is superior (i.e., higher\n> performance) in virtually every way --- twice as much memory,\n> faster processor, faster drives, etc.\n> \n> I made an exact copy of the existing database on the new\n> machine, and the exact same queries run on both reveal that\n> the old machine beats the new one by a factor of close to 2 !!!!\n> (i.e., the same queries run close to twice as fast on the old\n> machine!!!)\n> \n> To make things worse: the old machine is in operation, under\n> normal workload (and right now the system may be around\n> peak time), and the new machine is there sitting doing nothing;\n> just one user logged in using psql to run the queries --- *no-one\n> and nothing* is connecting to the new server.\n> \n> So... What's going on???\n\nDid you do an ANALYZE on the new database after you cloned it? I was suprised by this too, that after doing a pg_dump/pg_restore, the performance sucked. But it was simply because the new database had no statistics yet.\n\nCraig\n", "msg_date": "Thu, 09 Oct 2008 19:45:25 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "The first thing I'd try is installing 8.2 on the new server to see if\nthe problem is the server or postgresql. Set up the new server and\nnew pgsql install the same and see how it runs.\n", "msg_date": "Thu, 9 Oct 2008 21:17:00 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "On Thu, 9 Oct 2008, Craig James wrote:\n\n> Did you do an ANALYZE on the new database after you cloned it?\n\nHe ran \"vacuumdb -f -z\", the -z does an analyze. Also, he's getting \nnearly identical explain plans out of the two systems, which suggests the \nstats are similar enough in the two cases.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 10 Oct 2008 03:44:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "On Thu, 9 Oct 2008, Scott Carey wrote:\n\n> Sure, the processor should be faster, but Athlon64s / Opterons are very \n> sensitive to the RAM used and its performance and tuning. So, you should \n> find some basic CPU benchmarks and RAM benchmarks -- you'll want to \n> measure latency as well as bandwidth. 
Athlon64 and Opteron both \n> typically have two memory busses per processor, and it is possible to \n> populate the memory banks in such a way that the system has half the \n> bandwidth.\n\nThis is really something to watch out for. One quick thing first though: \nwhat frequency does the CPU on the new server show when you look at \n/proc/cpuinfo? If you see \"cpu MHz: 1000.00\" you probably are throttling \nthe CPU down hard with power management which was cause the slowness you \ndescribe. In that case I'd suggest editing /etc/sysconfig/cpuspeed and \nchanging \"GOVERNER=performance\".\n\nBack to memory. What I do with any new, untrusted system is boot with a \nmemtest86+ CD is take a look at the memory speed information it shows, \nwith the most important number being the uncached RAM speed. If you're \nnot running in dual-channel mode and at the maximum frequency the RAM \nsupports, that can run seriously slow things down. You probably can't \ntake down the production server for comparison. I can tell you that on my \nlittle Athlon [email protected] server, I've seen the memtest86+ reported raw \nmemory speed run anywhere from 2093MB/s (with crummy DDR2 667 that doesn't \nmatch the CPU bus frequency very well) to 3367MB/s (using good DDR2 800). \nYou should see even better from your Opteron system.\n\nAnother really handy way to gauge memory speed on Linux, if there are \nsimilar kernels installed on each system like your case, is to use \"hdparm \n-T\". That cached read figure is highly correlated with overall memory \nperformance. The nice part about that is you can probably get a useful \ncomparison result from the old server if you run that a bunch of times \neven with other activity (just take the highest number you ever see), \nwhereas memtest86+ requires some downtime. Those numbers are lower than \nthe I'd expect around 2500MB/s out of your new server here (that's what I \ngot when I just tested an Opteron 2220 system @2.8GHz using the RHEL5 \nhdparm -T).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 10 Oct 2008 04:22:35 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "\nThanks Greg and others for your replies,\n\n> This is really something to watch out for. One quick thing first\n> though: what frequency does the CPU on the new server show when you\n> look at /proc/cpuinfo? If you see \"cpu MHz: 1000.00\" \n\nIt was like that in the initial setup --- I just disabled the cpuspeed\nservice (service cpuspeed stop; chkconfig cpuspeed off ), and now\nit shows the full 2600MHz at all times (the installation of PG was\ndone after this change)\n\n> Another really handy way to gauge memory speed on Linux, if there are\n> similar kernels installed on each system like your case, is to use\n> \"hdparm -T\".\n\nGreat tip! 
I was familiar with the -T switch, but was not clear on the\nnotion that the figure tells you that much about the overall memory\nperformance!\n\nAnyway, I checked on both, and the new system is slightly superior\n(around 2200 for the new, around 1900 for the old one) --- a bit below\nthe figure you mention you'd expect (2500 --- though that was for a\n2.8GHz Opteron, presumably with faster FSB and faster memory??)\n\nI guess my logical next step is what was suggested by Scott --- install\n8.2.4 and repeat the same tests with this one; that should give me\ninteresting information.\n\nAnyway, if I find something interesting or puzzling, I would post again\nwith the results of those tests.\n\nThanks again for the valuable advice and comments!\n\nCarlos\n--\n\n", "msg_date": "Mon, 13 Oct 2008 10:55:23 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "On Mon, Oct 13, 2008 at 8:55 AM, Carlos Moreno <[email protected]> wrote:\n> I guess my logical next step is what was suggested by Scott --- install\n> 8.2.4 and repeat the same tests with this one; that should give me\n> interesting information.\n\nI'd suggest updating to the latest 8.2.x update as well. Not for\nperformance tuning reasons but to make sure you're data's not at risk\netc... I think there's a good year or more of updates missing from the\n8.2.4 branch.\n", "msg_date": "Mon, 13 Oct 2008 09:28:46 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "Scott Marlowe wrote:\n> On Mon, Oct 13, 2008 at 8:55 AM, Carlos Moreno <[email protected]> wrote:\n> \n>> I guess my logical next step is what was suggested by Scott --- install\n>> 8.2.4 and repeat the same tests with this one; that should give me\n>> interesting information.\n>> \n>\n> I'd suggest updating to the latest 8.2.x update as well. Not for\n> performance tuning reasons but to make sure you're data's not at risk\n> etc... I think there's a good year or more of updates missing from the\n> 8.2.4 branch.\n> \n\n\nOf course --- but do keep in mind that the reason for this was to\ndo a meanigful comparison; SQLs being run and clicked on an\n8.2.4 system vs. the same SQLs being run on a different hardware\nwith the exact same software. If for some reason I conclude that\nthe 8.2 seems to offer better performance with the given hardware,\nthen of course I would go with the latest 8.2.x ...\n\nIf you're referring to the existing installation, well, yeah, I've been\nmeaning to upgrade it, but I guess now that we are going with a\nhardware upgrade as well, then the software upgrade will be a\nside-effect of the maneuver.\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Mon, 13 Oct 2008 15:56:58 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" }, { "msg_contents": "On Mon, 13 Oct 2008, Carlos Moreno wrote:\n\n>> Another really handy way to gauge memory speed on Linux, if there are\n>> similar kernels installed on each system like your case, is to use\n>> \"hdparm -T\".\n>\n> Great tip! 
I was familiar with the -T switch, but was not clear on the\n> notion that the figure tells you that much about the overall memory\n> performance!\n\nI wouldn't go so far as to say it tells you *much* about it, but it does \ngive a fairly useful comparison figure if the kernels are basically the \nsame and can help spot gross errors. I used it a bunch when I was \ntinkering with DDR speeds and such earlier this year, it correlated fairly \nwell with other memory bandwidth measurements within the same processor \nfamily (so far my tests suggestion results are much higher per clock on \nIntel CPUs). Certainly of no use for comparison if one system has a \n32-bit kernel and the other 64, and results will vary depending on general \nkernel configuration (I get very different results from seemingly similar \nRedHat and Ubuntu kernels on the same system for example).\n\nIt sounds like your CPU and memory setup are all fine, which leaves your \nmystery open. Please let us know if you find anything interesting out, \nnormally an 8.3 upgrade would run faster so your situation is a bit \ncurious. The only other benchmark I'd suggest just as a sanity check is \nbonnie++, that's a bit more thorough than what hdparm -t reports.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 13 Oct 2008 17:37:53 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Mysterious\" issues with newly installed 8.3" } ]
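A further comparison step that is easy to overlook when two installations behave this differently: dump the non-default settings from each cluster and diff the two lists, so that differences in shared_buffers, effective_cache_size, the planner cost settings and so on stand out immediately. A simple sketch (run on both servers and compare the output; pg_settings is available in both 8.2 and 8.3):

    -- every setting on this cluster that does not come from the built-in default
    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default'
    ORDER BY name;

This complements the OS-level checks discussed above (the cpuspeed governor, memtest86+, hdparm -T, bonnie++), which cover the hardware side of the comparison.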
[ { "msg_contents": "Hi there,\n\nI've been toying with using PostgreSQL for some of my Drupal sites for \nsome time, and after his session at OpenSourceDays in Copenhagen last \nweekend, Magnus Hagander told me that there a quite a few in the \nPostgreSQL community using Drupal.\n\nI have been testing it a bit performance-wise, and the numbers are \nworrying. In my test, MySQL (using InnoDB) had a 40% lead in \nperformance, but I'm unsure whether this is indicative for PostgreSQL \nperformance in general or perhaps a misconfiguration on my part.\n\nIn any case, if anyone has any tips, input, etc. on how best to \nconfigure PostgreSQL for Drupal, or can find a way to poke holes in my \nanalysis, I would love to hear your insights :)\n\nThe performance test results can be found on my blog: http://mikkel.hoegh.org/blog/2008/drupal_database_performance_mysql_and_postgresql_compared\n--\nKind regards,\n\nMikkel H�gh <[email protected]>", "msg_date": "Mon, 13 Oct 2008 05:57:26 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\nOn Oct 12, 2008, at 11:57 PM, Mikkel H�gh wrote:\n\n> In any case, if anyone has any tips, input, etc. on how best to \n> configure PostgreSQL for Drupal, or can find a way to poke holes in \n> my analysis, I would love to hear your insights :)\n\n\nI just came across this article about moving Drupal from MySQL to \nPostgreSQL because of MyISAM data corruption and InnoDB was too slow.\n\n\nhttp://groups.drupal.org/node/15793\n\n\n\n\nJohn DeSoi, Ph.D.\n\n\n\n\n", "msg_date": "Mon, 13 Oct 2008 00:37:10 -0400", "msg_from": "John DeSoi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Sun, Oct 12, 2008 at 9:57 PM, Mikkel Høgh <[email protected]> wrote:\n> Hi there,\n>\n> I've been toying with using PostgreSQL for some of my Drupal sites for some\n> time, and after his session at OpenSourceDays in Copenhagen last weekend,\n> Magnus Hagander told me that there a quite a few in the PostgreSQL community\n> using Drupal.\n>\n> I have been testing it a bit performance-wise, and the numbers are worrying.\n> In my test, MySQL (using InnoDB) had a 40% lead in performance, but I'm\n> unsure whether this is indicative for PostgreSQL performance in general or\n> perhaps a misconfiguration on my part.\n\nThe test you're running is far too simple to tell you which database\nwill actually be faster in real world usage. No updates, no inserts,\nno interesting or complex work goes into just delivering the front\npage over and over. I suggest you invest some time learning how to\ndrive a real load testing tool like jmeter and build realistic test\ncases (with insert / update / delete as well as selects) and then see\nhow the databases perform with 1, 2, 5, 10, 50, 100 consecutive\nthreads running at once.\n\nWithout a realistic test scenario and with no connection pooling and\nwith no performance tuning, I don't think you should make any\ndecisions right now about which is faster. It may well be that in a\nmore realistic testing that mysql keeps up through 5 or 10 client\nconnections then collapses at 40 or 50, while pgsql keeps climbing in\nperformance. 
This is the performance curve I'm used to seeing from\nboth dbs under heavy load.\n\nIn simple terms, you're kicking the tires and making a decision based on that.\n", "msg_date": "Sun, 12 Oct 2008 22:48:25 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "* Mikkel Høgh ([email protected]) wrote:\n> I have been testing it a bit performance-wise, and the numbers are \n> worrying. In my test, MySQL (using InnoDB) had a 40% lead in \n> performance, but I'm unsure whether this is indicative for PostgreSQL \n> performance in general or perhaps a misconfiguration on my part.\n\nThe comments left on your blog would probably be a good first step, if\nyou're not doing them already.. Connection pooling could definitely\nhelp if you're not already doing it. Drupal's MySQL-isms don't help\nthings either, of course.\n\nAlso, you don't post anything about the PostgreSQL config, nor the\nhardware it's running on. The default PostgreSQL config usually isn't\nappropriate for decent hardware and that could be a contributing factor\nhere. It would also be useful to make sure you've analyze'd your tables\nand didn't just do a fresh load w/o any statistics having been gathered.\n\nWe run Drupal on PostgreSQL for an internal site and it works reasonably\nwell. We havn't had any performance problems but it's not a terribly\nlarge site either. The issues we've had tend to come from PostgreSQL's\nsomewhat less-than-supported status with Drupal.\n\nI've been meaning to look into Drupal's PG support to see about\nimproving it. Perhaps this winter I'll get a chance to.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 13 Oct 2008 00:54:20 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "> I have been testing it a bit performance-wise, and the numbers are\n> worrying. In my test, MySQL (using InnoDB) had a 40% lead in\n> performance, but I'm unsure whether this is indicative for PostgreSQL\n> performance in general or perhaps a misconfiguration on my part.\n\nIn my experience the \"numbers are always worrying\" in a read-only environment.\n\nI've used MySQL, but found it rather disturbing when it comes to integrity. \nMySQL has just some things I can't live with (i.e. silently ignoring \noverflowing charater types etc). \nThat aside, MySQL IS fast when it comes to read operations. That's probably \nbecause it omits a lot of integrity checks postgres and other standard \ncompliant databases do.\nI'm running a turbogears website with a couple million pages on postgresql and \nI don't have any problems, so I guess postgres can be configured to service \nDrupal just as well. Check your indexes and your work memory \n(postgresql.conf). You want to have the indexes correct and in my experiene \nthe work memory setting is rather important. You want to have enough work \nmemory for sorted queries to fit the resultset into memory - as always disk \naccess is expensive, so I avoid that by having 2GB memory exclusively for \npostgres - which allows me to do quite expensive sorts in memory, thus \ncutting execution time down to a couple milliseconds.\nOh, and never forget: explain analyze your queries. That will show you whether \nyour indexes are correct and useful, as well as how things are handled. 
Once \nyou learn how to read the output of that, you'll be surprised what little \nchange to a query suddenly gives you a performance boost of 500% or more.\nI had queries take 30 seconds cut down to 80 milliseconds just by setting \nindexes straight.\n\nKeep in mind: postgres will take good care of your data (the most important \nasset in todays economy). I run all my customers on postgres and did so ever \nsince postgres became postgresql (the times way back then when postgres had \nit's own query language instead of SQL). With a little care I've never seen \npostgresql dump or corrupt my data - not a \"pull the plug\" scenario and not a \ndumb user SQL injection scenario. I was always able to recover 100% of data \n(but I always used decent hardware, which IMHO makes a big difference).\n\nI've toyed with MySQL (not as deep as postgresql I must admit) and it \ndumped/corruped my data on more than one occasion. Sure, it can be my \nproficiency level with MySQL, but personally I doubt that. Postgresql is just \nrock solid no matter what.\n\nUwe\n", "msg_date": "Sun, 12 Oct 2008 22:14:53 -0700", "msg_from": "\"Uwe C. Schroeder\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Alright, my benchmarks might have been a bit na�ve.\nWhen it comes to hardware, my webserver is a SunFire X2100 with an \nOpteron 1210 Dual Core and 4 GB DDR2 RAM, running 64-bit Ubuntu Linux \nServer 8.04 LTS.\n\nWhen it comes to the resource usage section of my postgresql.conf, the \nonly thing that are not commented out are:\nshared_buffers = 24MB\nmax_fsm_pages = 153600\n\nI freely admit that the reason I haven't messed with these values is \nthat I have next to no clue what the different things do and how they \naffect performance, so perhaps an apology is in order. As Scott wrote, \n\"Without a realistic test scenario and with no connection pooling and \nwith no performance tuning, I don't think you should make any \ndecisions right now about which is faster\". My apologies.\n--\nKind regards,\n\nMikkel H�gh <[email protected]>\n\nOn 13/10/2008, at 06.54, Stephen Frost wrote:\n\n> * Mikkel H�gh ([email protected]) wrote:\n>> I have been testing it a bit performance-wise, and the numbers are\n>> worrying. In my test, MySQL (using InnoDB) had a 40% lead in\n>> performance, but I'm unsure whether this is indicative for PostgreSQL\n>> performance in general or perhaps a misconfiguration on my part.\n>\n> The comments left on your blog would probably be a good first step, if\n> you're not doing them already.. Connection pooling could definitely\n> help if you're not already doing it. Drupal's MySQL-isms don't help\n> things either, of course.\n>\n> Also, you don't post anything about the PostgreSQL config, nor the\n> hardware it's running on. The default PostgreSQL config usually isn't\n> appropriate for decent hardware and that could be a contributing \n> factor\n> here. It would also be useful to make sure you've analyze'd your \n> tables\n> and didn't just do a fresh load w/o any statistics having been \n> gathered.\n>\n> We run Drupal on PostgreSQL for an internal site and it works \n> reasonably\n> well. We havn't had any performance problems but it's not a terribly\n> large site either. The issues we've had tend to come from \n> PostgreSQL's\n> somewhat less-than-supported status with Drupal.\n>\n> I've been meaning to look into Drupal's PG support to see about\n> improving it. 
Perhaps this winter I'll get a chance to.\n>\n> \tThanks,\n>\n> \t\tStephen", "msg_date": "Mon, 13 Oct 2008 08:00:36 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Sun, 12 Oct 2008 22:14:53 -0700\n\"Uwe C. Schroeder\" <[email protected]> wrote:\n\n> > I have been testing it a bit performance-wise, and the numbers\n> > are worrying. In my test, MySQL (using InnoDB) had a 40% lead in\n> > performance, but I'm unsure whether this is indicative for\n> > PostgreSQL performance in general or perhaps a misconfiguration\n> > on my part.\n\n> In my experience the \"numbers are always worrying\" in a read-only\n> environment.\n> \n> I've used MySQL, but found it rather disturbing when it comes to\n> integrity. MySQL has just some things I can't live with (i.e.\n> silently ignoring overflowing charater types etc). \n> That aside, MySQL IS fast when it comes to read operations. That's\n> probably because it omits a lot of integrity checks postgres and\n> other standard compliant databases do.\n\nI'm replying here but I could be replying to Scott and others...\n\nI use nearly exclusively Postgresql. I do it mainly because it\nmakes me feel more comfortable as a programmer. I'm not the kind of\nguy that is satisfied if things work now. I prefer to have something\nthat gives me higher chances they will work even when I turn my\nshoulders and Postgresql give me the feeling it is easier to achieve\nthat result.\n\nAnyway I don't find myself comfortable with replies in these 2 lines\nof reasoning:\n1) default configuration of PostgreSQL generally doesn't perform well\n2) PostgreSQL may be slower but mySQL may trash your data.\n\nI think these answers don't make a good service to PostgreSQL.\n\n1) still leave the problem there and doesn't give any good reason\nwhy Postgresql comes with a doggy default configuration on most\nhardware. It still doesn't explain why I've to work more tuning\nPostgreSQL to achieve similar performances of other DB when other DB\ndon't require tuning.\nI know that a Skoda Fabia requires much less tuning than a Ferrari\nF1... but well a Ferrari F1 will run faster than a Skoda with or\nwithout tuning.\nMaking performance comparable without expert tuning will a) stop\nmost too easy critics about PostgreSQL performances b) give\ndevelopers much more feedback on PostgreSQL performance in \"nearer\nto optimal\" setup.\n1000 developers try PostgreSQL, 500 find it slow compared to other\nDBs, 50 comes back to the list asking, 30 were looking for a magic\nreceipt that solved their problem, didn't find it and gave up, 10 at\nleast could hear they had to tune the DB but couldn't get convinced\nto actually do so because it looked too expensive to them to learn.\n\nIf it is easy to write a tool that will help you to tune PostgreSQL,\nit seems it would be something that will really help PostgreSQL\ndiffusion and improvements. If it is *complicated* to tune\nPostgreSQL so that it's performance can be *comparable* (I didn't\nwrite optimal) with other DB we have a problem.\n\nDeveloper time is valuable... 
if it is complicated to tune\nPostgreSQL to at least have comparable performances to other DB\nPostgreSQL look less as a good investment.\n\nThen other people added in the equation connection pooling as a MUST\nto compare MySQL and PostgreSQL performances.\nThis makes the investment to have PostgreSQL in place of mySQL even\nhigher for many, or at least it is going to puzzle most.\n\nOr maybe... it is false that PostgreSQL doesn't have comparable\nperformance to other DB with default configuration and repeating\nover and over the same answer that you've to tune PostgreSQL to get\ncomparable performance doesn't play a good service to PostgreSQL.\n\n2) I never saw a \"trashing data benchmark\" comparing reliability of\nPostgreSQL to MySQL. If what I need is a fast DB I'd chose mySQL...\nI think this could still not be the best decision to take based on\n*real situation*.\nDo we really have to trade integrity for speed? Is it a matter of\ndevelopers time or technical constraints? Is MyISAM really much\nfaster in read only operations?\nIs Drupal a \"read only\" applications? Does it scale better with\nPostgreSQL or MySQL?\nThese are answers that are hard to answer even because it is hard to\nhave valuable feedback.\nWhat I get with that kind of answer is:\nan admission: - PostgreSQL is slow\nand a hard to prove claim: - MySQL will trash your data.\nUnless you circumstantiate I'd say both things are false.\n\nFrom my point of view the decision was easy. I needed transactions.\nFunctions would have made dealing with transactions much easier.\nPostgreSQL had a much more mature transaction and function engine.\nI like to sleep at night.\n\nBut is PostgreSQL competitive as a DB engine for apps like Drupal\nfor the \"average user\"?\nJudging on real experience with Drupal on PostgreSQL I'd say maybe.\nJudging on the replies I often read I'd say NO.\nUnfortunately replies aren't turning that maybe into a NO for\nany reasonable reasons.\nIf there are reasonable reasons to turn that maybe into a NO...\nthere may be some work to be done on the PostgreSQL code.\nIf there aren't reasonable reasons to turn that maybe into a NO...\nplease stop to give that kind of answers.\nor both...\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Mon, 13 Oct 2008 09:02:28 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Sun, 12 Oct 2008, Scott Marlowe wrote:\n\n> It may well be that in a more realistic testing that mysql keeps up \n> through 5 or 10 client connections then collapses at 40 or 50, while \n> pgsql keeps climbing in performance.\n\nOne of the best pro-PostgreSQL comparisons showing this behavior is at \nhttp://tweakers.net/reviews/649/7 MySQL owns that benchmark until you hit \n40 users, then...ouch.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 13 Oct 2008 04:43:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Well, in that benchmark, what you say is only true for the Niagara \nprocessors. 
On the Opteron page, MySQL performance only drops slightly \nas concurrency passes 50.\n\nMySQL might have a problem with Niagara, but it doesn't seem like it \nhas the severe concurrency vulnerability you speak of.\n\nThere are many reasons to pick PostgreSQL, but this one doesn't seem \nto be a general thing. In general, MySQL seems to have problems with \nsome kinds of threading, since their perfomance on Mac OS X is crappy \nas well for that reason.\n--\nKind regards,\n\nMikkel H�gh <[email protected]>\n\nOn 13/10/2008, at 10.43, Greg Smith wrote:\n\n> On Sun, 12 Oct 2008, Scott Marlowe wrote:\n>\n>> It may well be that in a more realistic testing that mysql keeps up \n>> through 5 or 10 client connections then collapses at 40 or 50, \n>> while pgsql keeps climbing in performance.\n>\n> One of the best pro-PostgreSQL comparisons showing this behavior is \n> at http://tweakers.net/reviews/649/7 MySQL owns that benchmark until \n> you hit 40 users, then...ouch.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD", "msg_date": "Mon, 13 Oct 2008 10:51:30 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Mon, 13 Oct 2008, Mikkel H�gh wrote:\n\n> Well, in that benchmark, what you say is only true for the Niagara \n> processors. On the Opteron page, MySQL performance only drops slightly as \n> concurrency passes 50.\n\nThat's partly because the upper limit on the graph only goes to 100 \nconcurrent processes. Since the Opterons are faster, that's not a broad \nenough scale to see how fast the right edge of the MySQL curve falls.\n\nYou are right that the Niagara processors have a sharper decline than the \nmore traditional platforms. The MySQL 5.0.20a graphs at \nhttp://tweakers.net/reviews/657/6 has a nice comparison graph showing a \nfew different architectures that's also interesting.\n\nAnyway, you don't actually have to believe any of this; you've got a \ntestbed to try for yourself if you just crank the user count up. The main \nthing I was trying to suggest is that MySQL being a bit faster at 5 users \nis not unusual, but it's not really representative of which performs \nbetter either.\n\n> In general, MySQL seems to have problems with some kinds of threading, \n> since their perfomance on Mac OS X is crappy as well for that reason.\n\nOne of the reasons (but by no means not the only one) that PostgreSQL uses \na multi-process based architecture instead of a threaded one is because \nthread library quality varies so much between platforms.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Mon, 13 Oct 2008 06:17:31 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Mon, Oct 13, 2008 at 11:57 AM, Mikkel Høgh <[email protected]> wrote:\n\n> In any case, if anyone has any tips, input, etc. on how best to configure\n> PostgreSQL for Drupal, or can find a way to poke holes in my analysis, I\n> would love to hear your insights :)\n\nIt'd be more accurate to configure Drupal for PostgreSQL. We use\nPostgreSQL for almost everything, including many drupal sites, but the\nusage pattern of Drupal puts PostgreSQL at a disadvantage. In short,\nDrupal issues a lot of small, simple SQL (100+ is the norm), that\nmakes tuning hard. 
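For a sense of what those small queries look like in practice, here is a sketch built around the url_alias lookup quoted later in this thread ('node/123' stands in for the real path, and the CREATE INDEX is an assumption about the schema rather than something Drupal is known to ship):

    -- one of the hundreds of tiny per-request lookups Drupal issues
    EXPLAIN ANALYZE
    SELECT dst FROM url_alias
    WHERE src = 'node/123' AND language IN ('en', '')
    ORDER BY language DESC;

    -- if the plan shows a sequential scan, an index along these lines keeps each lookup sub-millisecond
    CREATE INDEX url_alias_src_language_idx ON url_alias (src, language);

Each individual lookup is cheap; the cost is in issuing so many of them per page.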
To make it faster, you'd need to turn on Drupal's\ncaches (and PHP opcode caches) to reduce the number of SQLs issued. To\nget even better numbers, you'd need to get Drupal to use memcached\ninstead of calling PostgreSQL for the simple lookups. You can use the\ndevel module in Drupal to have a look at the SQLs issued. Not pretty,\nIMHO.\n\nSee: http://2bits.com/articles/benchmarking-postgresql-vs-mysql-performance-using-drupal-5x.html\nhttp://2bits.com/articles/advcache-and-memcached-benchmarks-with-drupal.html\n\nThe most promising Drupal performance module for performance looks\nlike: http://drupal.org/project/cacherouter (900 req/s!) but I haven't\ngot the chance to give it a go yet.\n\nI'm a die-hard PostgreSQL and Drupal supporter, but in this case, I\nconcede straight up Drupal+MySQL will always be faster than\nDrupal+PostgreSQL because of the way Drupal uses the database. We\nstill use PostgreSQL for our Drupal sites though, because while it's\nslower, it's plenty fast enough.\n", "msg_date": "Mon, 13 Oct 2008 18:36:03 +0800", "msg_from": "\"Ang Chin Han\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Mon, Oct 13, 2008 at 12:00 AM, Mikkel Høgh <[email protected]> wrote:\n> Alright, my benchmarks might have been a bit naïve.\n> When it comes to hardware, my webserver is a SunFire X2100 with an Opteron\n> 1210 Dual Core and 4 GB DDR2 RAM, running 64-bit Ubuntu Linux Server 8.04\n> LTS.\n>\n> When it comes to the resource usage section of my postgresql.conf, the only\n> thing that are not commented out are:\n> shared_buffers = 24MB\n> max_fsm_pages = 153600\n\nWell, 24MB is pretty small. See if you can increase your system's\nshared memory and postgresql's shared_buffers to somewhere around 256M\nto 512M. It likely won't make a big difference in this scenario, but\noverall it will definitely help.\n\n> I freely admit that the reason I haven't messed with these values is that I\n> have next to no clue what the different things do and how they affect\n> performance, so perhaps an apology is in order. As Scott wrote, \"Without a\n> realistic test scenario and with no connection pooling and with no\n> performance tuning, I don't think you should make any decisions right now\n> about which is faster\". My apologies.\n\nNo need for apologies. You're looking for the best database for\ndrupal, and you're asking questions and trying to test to see which\none is best. You just need to look deeper is all. I would, however,\nposit that you're putting the cart before the horse by looking at\nperformance first, instead of reliability.\n\nOn a machine with properly functioning hardware, postgresql is nearly\nindestructable. MySQL has a lot of instances in time where, if you\npull the plug / lose power it will scramble your db / lose part or all\nof your data. Databases are supposed to be durable. InnoDB, the\ntable handler, is pretty good, but it's surrounded by a DB that was\ndesigned for speed not reliability.\n\nThere was a time when Microsoft was trying to cast IIS as faster than\nApache, so they released a benchmark showing IIS being twice as fast\nas apache at delivering static pages. Let's say it was 10mS for\napache and 2mS for IIS. Seems really fast. Problem is, static pages\nare cheap to deliver. 
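As a brief aside on the configuration advice above, it is worth checking what the server is actually running with before changing anything (the values reported are simply whatever postgresql.conf currently holds):

    -- what the server is running with right now
    SHOW shared_buffers;
    SHOW max_fsm_pages;
    -- in 8.3 a larger shared_buffers has to be set in postgresql.conf and needs a restart,
    -- and the kernel's shared memory limit (SHMMAX, discussed later in this thread) must allow it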
I can buy a $500 server to serve the static\ncontent and if I need more speed, I can throw more servers at the\nproblem for $500, no OS license fees.\n\nBut for dynamic content, the difference was the other way around, and\nthe delivery times were much higher for IIS, like 50mS for apache and\n250mS for IIS. Suddenly, a handful of dynamic pages and the IIS\nserver was noticeably slower.\n\nThe same type of comparison tends to hold true for MySQL versus\nPostgreSQL. MySQL tends to be very very fast at \"select * from table\nwhere id=5\" while PostgreSQL is much faster at 4 page long reporting\nqueries with 5 levels of subselects and a couple of unions. Things\nthat make MySQL run so slow as to be useless. Also, PostgreSQL tends\nto keep better read performance as the number of writes increase.\nThis is the real test, so the point I was making before about\nrealistic tests is very important.\n\nIt's about graceful degradation. PostgreSQL has it, and when your\nsite is getting 20 times the traffic you ever tested for, it's a\nlittle late to figure out you might have picked the wrong DBMS. Note\nI'm not saying MySQL is the wrong choice, I'm saying you don't know\nbecause you haven't proven it capable.\n", "msg_date": "Mon, 13 Oct 2008 08:19:07 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Mon, Oct 13, 2008 at 8:19 AM, Scott Marlowe <[email protected]> wrote:\n\n> There was a time when Microsoft was trying to cast IIS as faster than\n> Apache, so they released a benchmark showing IIS being twice as fast\n> as apache at delivering static pages. Let's say it was 10mS for\n> apache and 2mS for IIS.\n\nDyslexia strikes again! That was supposed to be 5mS... anywho.\n", "msg_date": "Mon, 13 Oct 2008 08:23:21 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Monday 13 October 2008 15:19:07 Scott Marlowe wrote:\n> \n> > shared_buffers = 24MB\n> > max_fsm_pages = 153600\n>\n> Well, 24MB is pretty small. See if you can increase your system's\n> shared memory and postgresql's shared_buffers to somewhere around 256M\n> to 512M. It likely won't make a big difference in this scenario, but\n> overall it will definitely help.\n\nI noted after reading earlier messages in the thread, that my distro documents \nthat the values it default to for shared_buffers is rather small.\n\nOne of our servers is fairly pressed for memory (some of the time). Is there \nany way to measure the amount of churn in the shared_buffers, as a way of \ndemonstrating that more is needed (or at this moment more would help)?\n\nA few very small databases on this server, and one which is 768M (still pretty \nsmall but a lot bigger than the rest, most of which is logging information). \nThe only \"hot\" information is the session table, ~9000 lines, one index on \nthe session id. Can I ask Postgres to tell me, or estimate, how much memory \nthis table would occupy if fully cached in memory?\n\nHalf the problem in modern computing is knowing what is \"slow\". In this case, \ncounting the rows of the session table takes about 100ms. 
Deleting expired \nsession rows about 120ms, more if it hasn't done it for a while, which is I \nguess evidence that table isn't being cached in memory as efficiency as it \ncould be.\n\nIn this case the server thinks the system I/O is zero for half the tools in \nuse, because of the RAID hardware, so most of the Linux based tools are \nuseless in this context.\n\nAt the risk of thread hijacking, for the session table I wonder if we are \nhandling it the most efficient way. It is just a regular table, indexed on \nsession_id. Each request of note to the server requires retrieval of the \nsession record, and often updating the expiry information. Every N requests \nthe application also issues a:\n\nDELETE FROM sessions WHERE expires<NOW() OR expires IS NULL;\n\nSince there is no index on the table, it sequentially scans, and deletes the \nstale records. I'm thinking since it is indexed for regular queries, making N \nlarger has almost no obvious penalty except we accumulate a small number of \nstale records for longer. I'm not sure if an index on expires is worth it, \nprobably too small to make much difference either way.\n\nAs for Drupal on Postgres, it might be worth the effort for big \nimplementations, I did it for a while, but doing it again I'd go with MySQL. \nNothing to do with the database, everything to do with support for 3rd party \nadd-ins. Till Drupal gets to the automated testing of these things routinely \nagainst different backends and configs...... Perhaps that is all that is \nneeded, a service for Drupal authors that tries their plugins against \nPostgres automatically and complains if it doesn't work?\n", "msg_date": "Mon, 13 Oct 2008 15:49:33 +0100", "msg_from": "Simon Waters <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Mon, 13 Oct 2008, Simon Waters wrote:\n\n> One of our servers is fairly pressed for memory (some of the time). Is there\n> any way to measure the amount of churn in the shared_buffers, as a way of\n> demonstrating that more is needed (or at this moment more would help)?\n\nIf you wander to http://www.westnet.com/~gsmith/content/postgresql/ my \n\"Inside the PostgreSQL Buffer Cache\" presentation goes over this topic in \nextreme detail.\n\n> Can I ask Postgres to tell me, or estimate, how much memory this table \n> would occupy if fully cached in memory?\n\nhttp://wiki.postgresql.org/wiki/Disk_Usage gives an example showing all \nthe biggest tables/indexes in your data, and links to an article giving \nexamples of how to find the size of all sorts of things. One of the \nqueries in my presentation even shows you what % of each table is actually \nbeing cached by the dedicated database memory.\n\nYou also need to consider the OS buffer cache to get the full picture, \nwhich is a bit more complicated; \nhttp://www.kennygorman.com/wordpress/?p=250 gives an example there you \nmight be able to use.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 13 Oct 2008 17:21:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Drupal and PostgreSQL - performance issues?" 
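To make the two questions above concrete (how big the session table is, and what is actually sitting in shared_buffers), a minimal sketch along the lines of the material linked above; the table name 'sessions' is taken from the DELETE shown earlier, and the second query assumes the contrib pg_buffercache module is installed:

    -- heap plus indexes for the session table, human readable
    SELECT pg_size_pretty(pg_total_relation_size('sessions'));

    -- rough per-table view of what currently occupies shared_buffers
    -- (requires contrib/pg_buffercache; each buffer is an 8kB page)
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = c.relfilenode
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

    -- if the periodic expiry DELETE ever stops being cheap, an index such as this
    -- (name and exact form are only an illustration) lets it skip the sequential scan
    CREATE INDEX sessions_expires_idx ON sessions (expires);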
}, { "msg_contents": "On Mon, Oct 13, 2008 at 1:02 AM, Ivan Sergio Borgonovo\n<[email protected]> wrote:\n<snip>\n> Anyway I don't find myself comfortable with replies in these 2 lines\n> of reasoning:\n> 1) default configuration of PostgreSQL generally doesn't perform well\n> 2) PostgreSQL may be slower but mySQL may trash your data.\n>\n> I think these answers don't make a good service to PostgreSQL.\n>\n> 1) still leave the problem there and doesn't give any good reason\n> why Postgresql comes with a doggy default configuration on most\n> hardware. It still doesn't explain why I've to work more tuning\n> PostgreSQL to achieve similar performances of other DB when other DB\n> don't require tuning.\n\nThis is a useful question, but there are reasonable answers to it. The\nkey underlying principle is that it's impossible to know what will\nwork well in a given situation until that situation is tested. That's\nwhy benchmarks from someone else's box are often mostly useless on\nyour box, except for predicting generalities and then only when they\nagree with other people's benchmarks. PostgreSQL ships with a very\nconservative default configuration because (among other things,\nperhaps) 1) it's a configuration that's very unlikely to fail\nmiserably for most situations, and 2) it's assumed that if server\nperformance matters, someone will spend time tuning things. The fact\nthat database X performs better than PostgreSQL out of the box is\nfairly irrelevant; if performance matters, you won't use the defaults,\nyou'll find better ones that work for you.\n\n> Making performance comparable without expert tuning will a) stop\n> most too easy critics about PostgreSQL performances b) give\n> developers much more feedback on PostgreSQL performance in \"nearer\n> to optimal\" setup.\n\nMost of the complaints of PostgreSQL being really slow are from people\nwho either 1) use PostgreSQL assuming its MySQL and therefore don't do\nthings they way a real DBA would do them, or 2) simply repeat myths\nthey've heard about PostgreSQL performance and have no experience to\nback up. While it would be nice to be able to win over such people,\nPostgreSQL developers tend to worry more about pleasing the people who\nreally know what they're doing. (The apparent philosophical\ncontradiction between my statements above and the fact that I'm\nwriting something as inane as PL/LOLCODE doesn't cause me much lost\nsleep -- yet)\n\n> If it is easy to write a tool that will help you to tune PostgreSQL,\n> it seems it would be something that will really help PostgreSQL\n> diffusion and improvements. If it is *complicated* to tune\n> PostgreSQL so that it's performance can be *comparable* (I didn't\n> write optimal) with other DB we have a problem.\n\nIt's not easy to write such a tool; the lists talk about one every few\nmonths, and invariable conclude it's harder than just teaching DBAs to\ndo it (or alternatively letting those that need help pay those that\ncan help to tune for them).\n\nAs to whether it's a problem that it's a complex thing to tune, sure\nit would be nice if it were easier, and efforts are made along those\nlines all the time (cf. GUC simplification efforts for a contemporary\nexample). 
But databases are complex things, and any tool that makes\nthem overly simple is only glossing over the important details.\n\n> Then other people added in the equation connection pooling as a MUST\n> to compare MySQL and PostgreSQL performances.\n> This makes the investment to have PostgreSQL in place of mySQL even\n> higher for many, or at least it is going to puzzle most.\n\nAnyone familiar with high-performance applications is familiar with\nconnection pooling.\n\n> Or maybe... it is false that PostgreSQL doesn't have comparable\n> performance to other DB with default configuration and repeating\n> over and over the same answer that you've to tune PostgreSQL to get\n> comparable performance doesn't play a good service to PostgreSQL.\n\nWhy not? It's the truth, and there are good reasons for it. See above.\n\n> 2) I never saw a \"trashing data benchmark\" comparing reliability of\n> PostgreSQL to MySQL. If what I need is a fast DB I'd chose mySQL...\n> I think this could still not be the best decision to take based on\n> *real situation*.\n\nIf you've got an important application (for some definition of\n\"important\"), your considerations in choosing underlying software are\nmore complex than \"is it the fastest option\". Horror stories about\nMySQL doing strange things to data, because of poor integrity\nconstraints, ISAM tables, or other problems are fairly common (among\nPostgreSQL users, at least :) But I will also admit I have none of my\nown; my particular experience in life has, thankfully, prevented me\nfrom much MySQL exposure.\n\n> Do we really have to trade integrity for speed?\n\nYes. Sanity checks take time.\n\n> Is MyISAM really much\n> faster in read only operations?\n\nYes. See above.\n\n> What I get with that kind of answer is:\n> an admission: - PostgreSQL is slow\n\nPeople aren't saying that. They're saying it works better when someone\nwho knows what they're doing runs it.\n\n> But is PostgreSQL competitive as a DB engine for apps like Drupal\n> for the \"average user\"?\n\nSo are we talking about the \"average user\", or someone who needs real\nperformance? The average user certainly cares about performance, but\nif (s)he really cares, (s)he will put time toward achieving\nperformance.\n\n- Josh / eggyknap\n", "msg_date": "Mon, 13 Oct 2008 20:45:39 -0600", "msg_from": "\"Joshua Tolley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Mon, 13 Oct 2008 20:45:39 -0600\n\"Joshua Tolley\" <[email protected]> wrote:\n\nPremise:\nI'm not sustaining that the \"default\" answers are wrong, but they are\ninadequate.\nBTW the OP made a direct comparison of pgsql and mysql running\ndrupal. That's a bit different than just asking: how can I improve\nPostgreSQL performances.\n\nI'm happy with PostgreSQL, it does what I think is important for me\nbetter than MySQL... and I'm using it on Drupal in nearly all the\nwebsites I developed.\n\n> On Mon, Oct 13, 2008 at 1:02 AM, Ivan Sergio Borgonovo\n> <[email protected]> wrote:\n> <snip>\n> > Anyway I don't find myself comfortable with replies in these 2\n> > lines of reasoning:\n> > 1) default configuration of PostgreSQL generally doesn't perform\n> > well 2) PostgreSQL may be slower but mySQL may trash your data.\n\n> > I think these answers don't make a good service to PostgreSQL.\n\n> > 1) still leave the problem there and doesn't give any good reason\n> > why Postgresql comes with a doggy default configuration on most\n> > hardware. 
It still doesn't explain why I've to work more tuning\n> > PostgreSQL to achieve similar performances of other DB when\n> > other DB don't require tuning.\n\n> This is a useful question, but there are reasonable answers to it.\n> The key underlying principle is that it's impossible to know what\n> will work well in a given situation until that situation is\n> tested. That's why benchmarks from someone else's box are often\n> mostly useless on your box, except for predicting generalities and\n> then only when they agree with other people's benchmarks.\n> PostgreSQL ships with a very conservative default configuration\n> because (among other things, perhaps) 1) it's a configuration\n> that's very unlikely to fail miserably for most situations, and 2)\n\nSo your target are potential skilled DBA that have a coffe pot as\ntesting machine?\nI don't want temporary needs of unskilled dev driving PostgreSQL\nproject, but they are all potential users. Users too make a project\nmore successful. Not every dev using a DB is a DBA, not every\nproject in need for a DB is mature enough to have DBA knowledge.\n\nStill you've another DB that kick your ass in most common hardware\nconfiguration and workload. Something has to be done about the\ntuning. Again... a not tuned Ferrari can't win a F1 GP competing\nwith a tuned McLaren but it can stay close. A Skoda Fabia can't.\n\nWhen people come here and ask why PostgreSQL is slow as a Skoda\ncompared to a Ferrari in some tasks and you reply they have to\ntune... a) they will think you're trying to sell them a Skoda b)\nthey will think you're selling a Ferrari in a mounting kit.\n\nIt even doesn't help to guide people if 9 out of 10 you reply:\nbefore we give you any advice... you've to spend half day learning\nhow to tune PostgreSQL. When they come back... you reply... but your\nbenchmark was not suited for your real work workload.\nIt makes helping people hard.\n\nRemember we are talking about PostgreSQL vs. MySQL performance\nrunning Drupal.\n\nBut still people point at benchmark where PostgreSQL outperform\nMySQL.\nPeople get puzzled.\n\nThings like: MySQL will eat your data are hard to sustain and\nexplain.\nI don't have direct experience on corrupted DB... but I'd say it is\neasier to program PostgreSQL than MySQL once your project is over 30\nlines of code because it is less sloppy.\nThis is easier to prove: point at the docs and to SQL standard.\n\n> it's assumed that if server performance matters, someone will\n> spend time tuning things. 
The fact that database X performs better\n> than PostgreSQL out of the box is fairly irrelevant; if\n> performance matters, you won't use the defaults, you'll find\n> better ones that work for you.\n\nThe fact that out of the box on common hardware PostgreSQL\nunder-perform MySQL with default config would matter if few\nparagraph below you wouldn't say that integrity has a *big*\nperformance cost even on read-only operation.\nWhen people come back crying that PostgreSQL under-perform with\nDrupal they generally show a considerable gap between the 2.\n\n> > Making performance comparable without expert tuning will a) stop\n> > most too easy critics about PostgreSQL performances b) give\n> > developers much more feedback on PostgreSQL performance in\n> > \"nearer to optimal\" setup.\n\n> Most of the complaints of PostgreSQL being really slow are from\n> people who either 1) use PostgreSQL assuming its MySQL and\n> therefore don't do things they way a real DBA would do them, or 2)\n> simply repeat myths they've heard about PostgreSQL performance and\n> have no experience to back up. While it would be nice to be able\n> to win over such people, PostgreSQL developers tend to worry more\n> about pleasing the people who really know what they're doing. (The\n> apparent philosophical contradiction between my statements above\n> and the fact that I'm writing something as inane as PL/LOLCODE\n> doesn't cause me much lost sleep -- yet)\n\n> > If it is easy to write a tool that will help you to tune\n> > PostgreSQL, it seems it would be something that will really help\n> > PostgreSQL diffusion and improvements. If it is *complicated* to\n> > tune PostgreSQL so that it's performance can be *comparable* (I\n> > didn't write optimal) with other DB we have a problem.\n\n> It's not easy to write such a tool; the lists talk about one every\n> few months, and invariable conclude it's harder than just teaching\n> DBAs to do it (or alternatively letting those that need help pay\n> those that can help to tune for them).\n\nBut generally the performance gap is astonishing on default\nconfiguration. It is hard to win the myth surrounding PostgreSQL...\nbut again... if you've to trade integrity for speed... at least you\nshould have numbers to show what are you talking about. Then people\nmay decide.\nYou're using a X% slower, Y% more reliable DB.\nYou're using a X% slower, Y% more scalable DB. etc...\nOr at least tell people they are buying a SUV, a Ferrari or a train\nfirst.\nWe were talking about CMS. So we know it is not Ferrari, it is not a\nSkoda and it may be a train or a SUV (sort of...).\n\n> As to whether it's a problem that it's a complex thing to tune,\n> sure it would be nice if it were easier, and efforts are made\n> along those lines all the time (cf. GUC simplification efforts for\n> a contemporary example). But databases are complex things, and any\n> tool that makes them overly simple is only glossing over the\n> important details.\n\nYou trade complexity for flexibility... so is PostgreSQL a SUV, a\nFerrari, a Skoda and a train too sold in a mounting kit?\nI'd expect that if it was a Skoda I wouldn't have any tuning problem\nto win a Ferrari on consumption.\n\n> > 2) I never saw a \"trashing data benchmark\" comparing reliability\n> > of PostgreSQL to MySQL. If what I need is a fast DB I'd chose\n> > mySQL... 
I think this could still not be the best decision to\n> > take based on *real situation*.\n\n> If you've got an important application (for some definition of\n> \"important\"), your considerations in choosing underlying software\n> are more complex than \"is it the fastest option\". Horror stories\n> about MySQL doing strange things to data, because of poor integrity\n> constraints, ISAM tables, or other problems are fairly common\n> (among PostgreSQL users, at least :) But I will also admit I have\n\nWell horror stories about PostgreSQL being doggy slow are quite\ncommon among MySQL users.\nBut while it is very easy to \"prove\" the later on a test config with\ndefault on PostgreSQL it is harder to prove the former.\nSo it would be better to rephrase the former so it is easier to\nprove or just change the term of comparison.\nAnyway making easier to tune PostgreSQL even if not optimally would\nbe a good target.\n\n> > What I get with that kind of answer is:\n> > an admission: - PostgreSQL is slow\n\n> People aren't saying that. They're saying it works better when\n> someone who knows what they're doing runs it.\n\nI find this a common excuse of programmers.\nYou user are an asshole, my software is perfect.\nIt's not a matter of \"better\". When people comes here saying\nPostgreSQL perform badly serving Drupal the performance gap is not\nrealistically described just with \"better\".\n\n> > But is PostgreSQL competitive as a DB engine for apps like Drupal\n> > for the \"average user\"?\n\n> So are we talking about the \"average user\", or someone who needs\n> real performance? The average user certainly cares about\n> performance, but if (s)he really cares, (s)he will put time toward\n> achieving performance.\n\nIf I see a performance gap of 50% I'm going to think that's not\ngoing to be that easy to fill it with \"tuning\".\nThat means:\n- I may think that with a *reasonable* effort I could come close and\nthen I'll have to find other good reasons other than performances to\nchose A in spite of B\n- I may think I need a miracle I'm not willing to bet on/pay for.\n\nNow... you've to tune is not the kind of answer that will help me to\ntake a decision in favour of PostgreSQL.\n\nAnyway a 50% or more performance gap is something that make hard to\ntake any decision. It something that really makes hard even to give\nadvices to new people.\n\nReducing that gap with a set of \"common cases\" .conf may help.\nWhen people will see a 10%-15% gap without too much effort they will\nbe more willing to listen what you've to offer more and will believe\neasier that that gap can be reduced further.\n\nUnless every camp keeps on believing in myths and new comers have to\nbelieve faithfully.\n\nYou may think that people coming here asking for performance advices\nalready collected enough information on eg. PostgreSQL features...\nbut it may not be the case.\n\nThink about Drupal developers coming here and asking... is it worth\nto support PostgreSQL?\n\nLet me go even further out of track...\n\nWhy do comparisons between PostgreSQL and MySQL come up so\nfrequently?\n\nBecause MySQL \"is the DB of the Web\".\nMany web apps are (were) mainly \"read-only\" and their data integrity\nis (was) not so important.\nMany Web apps are (were) simple.\n\nWeb apps and CMS are a reasonably large slice of what's moving on\nthe net. Do these applications need the features PostgreSQL has?\nIs there any trade off? 
Is it worth to pay that trade off?\n\nIs it worth to conquer this audience even if they are not skilled\nDBA?\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Tue, 14 Oct 2008 11:40:11 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 3:40 AM, Ivan Sergio Borgonovo\n<[email protected]> wrote:\n> On Mon, 13 Oct 2008 20:45:39 -0600\n> \"Joshua Tolley\" <[email protected]> wrote:\n>\n> Premise:\n> I'm not sustaining that the \"default\" answers are wrong, but they are\n> inadequate.\n> BTW the OP made a direct comparison of pgsql and mysql running\n> drupal. That's a bit different than just asking: how can I improve\n> PostgreSQL performances.\n\nSadly, no one has run any meaningful benchmarks so far.\n\n>> This is a useful question, but there are reasonable answers to it.\n>> The key underlying principle is that it's impossible to know what\n>> will work well in a given situation until that situation is\n>> tested. That's why benchmarks from someone else's box are often\n>> mostly useless on your box, except for predicting generalities and\n>> then only when they agree with other people's benchmarks.\n>> PostgreSQL ships with a very conservative default configuration\n>> because (among other things, perhaps) 1) it's a configuration\n>> that's very unlikely to fail miserably for most situations, and 2)\n>\n> So your target are potential skilled DBA that have a coffe pot as\n> testing machine?\n\nActually a lot has been done to better tune pgsql out of the box, but\nsince it uses shared memory and many oses still come with incredibly\nlow shared mem settings we're stuck.\n\n> Still you've another DB that kick your ass in most common hardware\n> configuration and workload. Something has to be done about the\n> tuning. Again... a not tuned Ferrari can't win a F1 GP competing\n> with a tuned McLaren but it can stay close. A Skoda Fabia can't.\n\nExcept the current benchmark is how fast you can change the tires.\n\n> When people come here and ask why PostgreSQL is slow as a Skoda\n> compared to a Ferrari in some tasks and you reply they have to\n> tune... a) they will think you're trying to sell them a Skoda b)\n> they will think you're selling a Ferrari in a mounting kit.\n\nActually the most common answer is to ask them if they've actually\nused a realistic benchmark. Then tune.\n\n> It even doesn't help to guide people if 9 out of 10 you reply:\n> before we give you any advice... you've to spend half day learning\n> how to tune PostgreSQL. When they come back... you reply... but your\n> benchmark was not suited for your real work workload.\n> It makes helping people hard.\n\nGetting things right is hard. Do you think any joe can get behind the\nwheel of an F1 car and just start driving? Remember, for every\nproblem, there is a simple, easy, elegant answer, and it's wrong.\n\n> Remember we are talking about PostgreSQL vs. 
MySQL performance\n> running Drupal.\n\nYes, and the very first consideration should be, \"Will the db I'm\nchoosing be likely to eat my data?\" If you're not sure on that one\nall the benchmarketing in the world won't make a difference.\n\n> But still people point at benchmark where PostgreSQL outperform\n> MySQL.\n> People get puzzled.\n\nBecause they don't understand what databases are and what they do maybe?\n\n> Things like: MySQL will eat your data are hard to sustain and\n> explain.\n\nGoogle is your friend. Heck, you can find account after account from\nMySQL fanboys about their favorite database eating their data.\n\n> I don't have direct experience on corrupted DB... but I'd say it is\n> easier to program PostgreSQL than MySQL once your project is over 30\n> lines of code because it is less sloppy.\n> This is easier to prove: point at the docs and to SQL standard.\n\nLots of people feel MySQL's tutorial style docs are easier to\ncomprehend. Especially those unfamiliar with dbs. I prefer\nPostgreSQL's docs, as they are more thorough better suited for a\nsemi-knowledgable DBA.\n\n>> it's assumed that if server performance matters, someone will\n>> spend time tuning things. The fact that database X performs better\n>> than PostgreSQL out of the box is fairly irrelevant; if\n>> performance matters, you won't use the defaults, you'll find\n>> better ones that work for you.\n>\n> The fact that out of the box on common hardware PostgreSQL\n> under-perform MySQL with default config would matter if few\n> paragraph below you wouldn't say that integrity has a *big*\n> performance cost even on read-only operation.\n> When people come back crying that PostgreSQL under-perform with\n> Drupal they generally show a considerable gap between the 2.\n\nAgain, this is almost always for 1 to 5 users. Real world DBs have\ndozens to hundreds to even thousands of simultaneous users. My\nPostgreSQL servers at work routinely have 10 or 20 queries running at\nthe same time, and peak at 100 or more.\n\n> But generally the performance gap is astonishing on default\n> configuration.\n\nOnly for unrealistic benchmarks. Seriously, for any benchmark with\nlarge concurrency and / or high write percentage, postgreSQL wins.\n\n>It is hard to win the myth surrounding PostgreSQL...\n> but again... if you've to trade integrity for speed... at least you\n> should have numbers to show what are you talking about. Then people\n> may decide.\n> You're using a X% slower, Y% more reliable DB.\n> You're using a X% slower, Y% more scalable DB. etc...\n\nIt's not just integrity for speed! IT's the fact that MySQL has\nserious issues with large concurrency, especially when there's a fair\nbit of writes going on. This is especially true for myisam, but not\ncompletely solved in the Oracle-owned innodb table handler.\n\n> Well horror stories about PostgreSQL being doggy slow are quite\n> common among MySQL users.\n\nUsers who run single thread benchmarks. Let them pit their MySQL\nservers against my production PostgreSQL servers with a realistic\nload.\n\n> If I see a performance gap of 50% I'm going to think that's not\n> going to be that easy to fill it with \"tuning\".\n> That means:\n> - I may think that with a *reasonable* effort I could come close and\n> then I'll have to find other good reasons other than performances to\n> chose A in spite of B\n\nThen you are putting your cart before your horse. Choosing a db based\non a single synthetic benchmark is like buying a car based on the\ncolor of the shift knob. 
Quality is far more important. And so is\nyour data: \"MySQL mangling your data faster than any other db!\" is\nnot a good selling point..\n\n> Now... you've to tune is not the kind of answer that will help me to\n> take a decision in favour of PostgreSQL.\n\nThen please use MySQL. I've got a db that works well for me. When\nMySQL proves incapable of handling the load, then come back and ask\nfor help migrating.\n\n> Anyway a 50% or more performance gap is something that make hard to\n> take any decision. It something that really makes hard even to give\n> advices to new people.\n\nYes. You keep harping on the 50% performance gap. One you nor anyone\nelse has demonstrated to exist with any reasonable test.\n\n> Why do comparisons between PostgreSQL and MySQL come up so\n> frequently?\n>\n> Because MySQL \"is the DB of the Web\".\n> Many web apps are (were) mainly \"read-only\" and their data integrity\n> is (was) not so important.\n> Many Web apps are (were) simple.\n>\n> Web apps and CMS are a reasonably large slice of what's moving on\n> the net. Do these applications need the features PostgreSQL has?\n\nIs their data important? Is downtime a bad thing for them?\n\n> Is there any trade off? Is it worth to pay that trade off?\n>\n> Is it worth to conquer this audience even if they are not skilled\n> DBA?\n\nOnly if they're willing to learn. I can't spend all day tuning their\npgsql servers for free. IF not, then let them go, and they'll come\nback when they need to.\n", "msg_date": "Tue, 14 Oct 2008 06:56:02 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 14/10/2008, at 11.40, Ivan Sergio Borgonovo wrote:\n\n> On Mon, 13 Oct 2008 20:45:39 -0600\n> \"Joshua Tolley\" <[email protected]> wrote:\n>>\n>> PostgreSQL ships with a very conservative default configuration\n>> because (among other things, perhaps) 1) it's a configuration\n>> that's very unlikely to fail miserably for most situations, and 2)\n>\n> So your target are potential skilled DBA that have a coffe pot as\n> testing machine?\n\nYeah, I don't know why the default configuration is targetting \nsomething at least 5 years old. I figure its kinda rare with a \ncompletely new installation of PostgreSQL 8.3.3 on such a machine.\n\n>>> What I get with that kind of answer is:\n>>> an admission: - PostgreSQL is slow\n>\n>> People aren't saying that. They're saying it works better when\n>> someone who knows what they're doing runs it.\n>\n> I find this a common excuse of programmers.\n> You user are an asshole, my software is perfect.\n> It's not a matter of \"better\". When people comes here saying\n> PostgreSQL perform badly serving Drupal the performance gap is not\n> realistically described just with \"better\".\n\nSo, let me get this right, Joshua� You are targetting DBAs using \nservers with less than 512 MB RAM.\nIs PostgreSQL supposed to be used by professional DBAs on enterprise \nsystems or is it supposed to run out of the box on my old Pentium 3?\n\n\n>>> But is PostgreSQL competitive as a DB engine for apps like Drupal\n>>> for the \"average user\"?\n>> So are we talking about the \"average user\", or someone who needs\n>> real performance? 
The average user certainly cares about\n>> performance, but if (s)he really cares, (s)he will put time toward\n>> achieving performance.\n\nThat might be true, if the only demographic you are looking for are \nprofessional DBAs, but if you're looking to attract more developers, \nnot having sensible defaults is not really a good thing.\nWhile I'll probably take the time to learn more about how to tune \nPostgreSQL, the common Drupal-developer developer will probably just \nsay \"Ah, this is slow, I'll just go back to MySQL…\".\n\nI'm not saying that PostgreSQL should (or could) be just as fast as \nMySQL, and while my benchmark was naïve, it's what a Drupal developer \nwill see when he decides to try out PostgreSQL. A 40% drop in page \nloading performance. Yikes.\n\nEven if you don't change the default configuration, you should at \nleast include some examples like \"If you have modern webserver, this \nis a good starting point (…) for more information about tuning \nPostgreSQL, see http://www.postgresql.org/docs/8.3/…\"", "msg_date": "Tue, 14 Oct 2008 15:27:30 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, 14 Oct 2008, Ivan Sergio Borgonovo wrote:\n\n> The fact that out of the box on common hardware PostgreSQL under-perform \n> MySQL with default config would matter if few paragraph below you \n> wouldn't say that integrity has a *big* performance cost even on \n> read-only operation.\n\nIf you want a more detailed commentary on that subject, I'd suggest \nhttp://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007 \nwhich hits all the major sides of the speed vs. integrity trade-offs here.\n\nYou suggest putting together a \"how much are you paying for integrity?\" \ncomparison. That's hard to do from the PostgreSQL side because most \noptions don't allow an unsafe mode. What would be easier is benchmarking \nsome application in MySQL with the optional strict mode toggled on and \noff; MyISAM+loose vs. InnoDB+strict would be instructive I think.\n\nThose of us who prefer PostgreSQL don't spend too much time working on \nthis area because the very concept of a non-strict mode is horrifying. \nQuantifying how much full data integrity costs is like seeing how much \nfaster you can run if you're set on fire: while you might measure it, far \nbetter to just avoid the whole possibility.\n\n> Well horror stories about PostgreSQL being doggy slow are quite\n> common among MySQL users.\n\nIn addition to the default PostgreSQL configuration being optimized for \nsize rather speed, there are a few operations that just don't execute well \nat all in Postgres. The most common example is how counting things \nhappens; that's slow in PostgreSQL for reasons that are deeply intertwined \nwith the transaction implementation. I don't believe there any problems \nlike that in the Drupal implementation, but there's enough of that in \nother web applications that some percentage of horror stories come from \nthat sort of thing--just not using PostgreSQL for a job it's a good choice \nfor. 
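As an illustration of the counting point just made, the usual workaround is to read the planner's statistics instead of counting rows; this is not something proposed in the thread itself, and 'node' is only an example table name:

    -- the exact count has to visit every visible row, which is what makes it slow here
    SELECT count(*) FROM node;

    -- the planner's estimate is often good enough for a UI counter and is nearly free,
    -- but it is only as fresh as the last ANALYZE / autovacuum run
    SELECT reltuples::bigint AS approximate_rows
    FROM pg_class
    WHERE relname = 'node';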
It's hard to distinguish those cases from those where it was \nappropriate, but just wasn't setup properly or compared fairly.\n\n> Anyway making easier to tune PostgreSQL even if not optimally would\n> be a good target.\n\nThere were two commits to the core PostgreSQL server code last month aimed \nat making it easier to build tools that interact with the server \nconfiguration. The tool(s) that use those features are coming soon.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 14 Oct 2008 09:38:29 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Mikkel H�gh wrote:\n> On 14/10/2008, at 11.40, Ivan Sergio Borgonovo wrote:\n\n> That might be true, if the only demographic you are looking for are \n> professional DBAs, but if you're looking to attract more developers, not \n> having sensible defaults is not really a good thing.\n> While I'll probably take the time to learn more about how to tune \n> PostgreSQL, the common Drupal-developer developer will probably just say \n> \"Ah, this is slow, I'll just go back to MySQL�\".\n\nDevelopers should be familiar with the platforms they develop for. If \nthey are not and they are not willing to learn them they shouldn't use it.\n\n\nSincerely,\n\nJoshua D. Drake\n", "msg_date": "Tue, 14 Oct 2008 06:42:33 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 8:56 PM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Oct 14, 2008 at 3:40 AM, Ivan Sergio Borgonovo\n> <[email protected]> wrote:\n>> On Mon, 13 Oct 2008 20:45:39 -0600\n>> \"Joshua Tolley\" <[email protected]> wrote:\n>>\n>> Premise:\n>> I'm not sustaining that the \"default\" answers are wrong, but they are\n>> inadequate.\n>> BTW the OP made a direct comparison of pgsql and mysql running\n>> drupal. That's a bit different than just asking: how can I improve\n>> PostgreSQL performances.\n>\n> Sadly, no one has run any meaningful benchmarks so far.\n\nNot sure about \"meaningful\", but:\nhttp://2bits.com/articles/benchmarking-postgresql-vs-mysql-performance-using-drupal-5x.html\nTheir attached config file shows a relatively untuned postgresql\nconfig, but in *Drupal's* case, I'm not sure how else tweaking the\nconfig would help when it shows: \"Executed 99 queries in 67.81\nmilliseconds.\" which in itself is not too shabbly, but that points\ntowards Drupal's inclination to issue a *lot* of small, simple\nqueries.\n\n> Actually the most common answer is to ask them if they've actually\n> used a realistic benchmark. Then tune.\n\nThe benchmark is a mostly read-only Drupal site -- a few admins, but a\nlot of readers. Drupal as a benchmark is skewed towards lots and lots\nof small, simple queries, which MyISAM excels at. The long term fix\nought to be to help the Drupal team to make it\n\nThe front page of one my site, even with some caching turned on, but\nwith a logged in user, shows 389 queries just to generate it, mostly\nconsisting of queries like \"SELECT dst FROM url_alias WHERE src =\n'$link' AND language IN('en', '') ORDER BY language DESC\". Explain\nanalyze shows that postgresql happily uses the index to grab the\ncorrect value, in less than 0.04 ms. But it's still not fast enough,\nesp. 
when Drupal stupidly issues some of the exact same queries up to\n9 times!\n\nThis, to me, is clearly some thing to be fixed at Drupal's level.\nJoshua Drake is on the right path -- helping the Drupal folks treat\nthe database as a database instead of a blind data store. This is\nsomething I'm working on as well on Drupal's code base, but it looks\nlike it wouldn't be making to the mainstream Drupal release anything\nsoon as the changes are too drastic.\n", "msg_date": "Tue, 14 Oct 2008 21:47:04 +0800", "msg_from": "\"Ang Chin Han\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, 14 Oct 2008, Mikkel H�gh wrote:\n\n> You are targetting DBAs using servers with less than 512 MB RAM. Is \n> PostgreSQL supposed to be used by professional DBAs on enterprise \n> systems or is it supposed to run out of the box on my old Pentium 3?\n\nTake a look at http://bugzilla.kernel.org/show_bug.cgi?id=11381\n\nThere you'll discover that the Linux default for how much memory an \napplication like PostgreSQL can allocate is 32MB. This is true even if \nyou install the OS on a system with 128GB of RAM. If PostgreSQL created a \ndefault configuration optimized for \"enterprise systems\", that \nconfiguration wouldn't even start on *any* Linux system with the default \nkernel settings. The above is an attempt to change that, rejected with \nthe following text that aptly describes the default PostgreSQL \nconfiguration as well: \"The requirement is that SHMMAX is sane for the \nuser by default and that means *safe* rather than as big as possible\". \nThe situation on most other operating systems is similarly bad.\n\nSo the dichotomy here is even worse than you think: it's not just that \nthe performance profile would be wrong, it's that defaults targeting \nmodern hardware would make it so the database won't even start on your old \nPentium 3. The best the PostgreSQL community can do is provide \ndocumentation on how you re-tune your *operating system first*, then the \ndatabase server, to get good performance on systems with modern amounts of \nRAM.\n\nAdmittedly, that documentation and related support tools should be better, \nmainly by being easier to find. This discussion is well timed in that I \nwas planning this month to propose adding a URL with a tuning guide and/or \nsample configurations to the top of the postgresql.conf file before the \nnext release comes out; when that comes up I can point to this thread as a \nreminder of why that's needed.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Tue, 14 Oct 2008 10:02:26 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "MG>comments prefixed with MG>\n\n> From: [email protected]\n> To: [email protected]\n> Subject: Re: [GENERAL] Drupal and PostgreSQL - performance issues?\n> CC: [email protected]; [email protected]\n> \n> On Tue, Oct 14, 2008 at 8:56 PM, Scott Marlowe <[email protected]> wrote:\n> > On Tue, Oct 14, 2008 at 3:40 AM, Ivan Sergio Borgonovo\n> > <[email protected]> wrote:\n> >> On Mon, 13 Oct 2008 20:45:39 -0600\n> >> \"Joshua Tolley\" <[email protected]> wrote:\n> >>\n> >> Premise:\n> >> I'm not sustaining that the \"default\" answers are wrong, but they are\n> >> inadequate.\n> >> BTW the OP made a direct comparison of pgsql and mysql running\n> >> drupal. 
That's a bit different than just asking: how can I improve\n> >> PostgreSQL performances.\n> >\n> > Sadly, no one has run any meaningful benchmarks so far.\n> \n> Not sure about \"meaningful\", but:\n> http://2bits.com/articles/benchmarking-postgresql-vs-mysql-performance-using-drupal-5x.html\n> Their attached config file shows a relatively untuned postgresql\n> config, but in *Drupal's* case, I'm not sure how else tweaking the\n> config would help when it shows: \"Executed 99 queries in 67.81\n> milliseconds.\" which in itself is not too shabbly, but that points\n> towards Drupal's inclination to issue a *lot* of small, simple\n> queries.\nMG>default behaviour of ISAM DB's\n\n> \n> > Actually the most common answer is to ask them if they've actually\n> > used a realistic benchmark. Then tune.\n> \n> The benchmark is a mostly read-only Drupal site -- a few admins, but a\n> lot of readers. Drupal as a benchmark is skewed towards lots and lots\n> of small, simple queries, which MyISAM excels at. The long term fix\n> ought to be to help the Drupal team to make it\nMG>What about INNODB is Drupal forgetting the default engine for 5.x?\n\n> \n> The front page of one my site, even with some caching turned on, but\n> with a logged in user, shows 389 queries just to generate it, mostly\n> consisting of queries like \"SELECT dst FROM url_alias WHERE src =\n> '$link' AND language IN('en', '') ORDER BY language DESC\". Explain\n> analyze shows that postgresql happily uses the index to grab the\n> correct value, in less than 0.04 ms. But it's still not fast enough,\n> esp. when Drupal stupidly issues some of the exact same queries up to\n> 9 times!\nMG>isnt this in query_cache..why is Drupal going braindead on this item?\n> \n> This, to me, is clearly some thing to be fixed at Drupal's level.\n> Joshua Drake is on the right path -- helping the Drupal folks treat\n> the database as a database instead of a blind data store. This is\n> something I'm working on as well on Drupal's code base, but it looks\n> like it wouldn't be making to the mainstream Drupal release anything\n> soon as the changes are too drastic.\nMG>From what i've been reading the author doesnt work on drupal anymore\nMG>If we could get access to the php source maybe we could fix this..?\n\n> \n> -- \n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n\n_________________________________________________________________\nGet more out of the Web. Learn 10 hidden secrets of Windows Live.\nhttp://windowslive.com/connect/post/jamiethomson.spaces.live.com-Blog-cns!550F681DAD532637!5295.entry?ocid=TXT_TAGLM_WL_domore_092008\n",
"msg_date": "Tue, 14 Oct 2008 10:05:36 -0400", "msg_from": "Martin Gainty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 7:47 AM, Ang Chin Han <[email protected]> wrote:\n> On Tue, Oct 14, 2008 at 8:56 PM, Scott Marlowe <[email protected]> wrote:\n>> On Tue, Oct 14, 2008 at 3:40 AM, Ivan Sergio Borgonovo\n>> <[email protected]> wrote:\n>>> On Mon, 13 Oct 2008 20:45:39 -0600\n>>> \"Joshua Tolley\" <[email protected]> wrote:\n>>>\n>>> Premise:\n>>> I'm not sustaining that the \"default\" answers are wrong, but they are\n>>> inadequate.\n>>> BTW the OP made a direct comparison of pgsql and mysql running\n>>> drupal. That's a bit different than just asking: how can I improve\n>>> PostgreSQL performances.\n>>\n>> Sadly, no one has run any meaningful benchmarks so far.\n>\n> Not sure about \"meaningful\", but:\n> http://2bits.com/articles/benchmarking-postgresql-vs-mysql-performance-using-drupal-5x.html\n\nAgain, a read only benchmark against the front page hardly counts as a\nmeaningful. 
And 5 concurrent is pretty small anyway.\n", "msg_date": "Tue, 14 Oct 2008 08:20:02 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 14/10/2008, at 16.05, Martin Gainty wrote:\n> > The benchmark is a mostly read-only Drupal site -- a few admins, \n> but a\n> > lot of readers. Drupal as a benchmark is skewed towards lots and \n> lots\n> > of small, simple queries, which MyISAM excels at. The long term fix\n> > ought to be to help the Drupal team to make it\n> MG>What about INNODB is Drupal forgetting the default engine for 5.x?\n\nWell, my benchmark was actually running on InnoDB. With MyISAM, the \ndifference would probably have been even larger, but a lot less fair, \nsince MyISAM doesn't to any kind of integrity checks at all.", "msg_date": "Tue, 14 Oct 2008 16:20:39 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, 14 Oct 2008 06:42:33 -0700\n\"Joshua D. Drake\" <[email protected]> wrote:\n\n> Mikkel Høgh wrote:\n> > On 14/10/2008, at 11.40, Ivan Sergio Borgonovo wrote:\n> \n> > That might be true, if the only demographic you are looking for\n> > are professional DBAs, but if you're looking to attract more\n> > developers, not having sensible defaults is not really a good\n> > thing. While I'll probably take the time to learn more about how\n> > to tune PostgreSQL, the common Drupal-developer developer will\n> > probably just say \"Ah, this is slow, I'll just go back to\n> > MySQL…\".\n> \n> Developers should be familiar with the platforms they develop for.\n> If they are not and they are not willing to learn them they\n> shouldn't use it.\n\nThey may be willing to get familiar if they understand the platform\nis suited for their needs.\nSometimes they don't know their needs ;)\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Tue, 14 Oct 2008 16:28:22 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Greg Smith wrote:\n> On Tue, 14 Oct 2008, Mikkel H�gh wrote:\n>\n>> You are targetting DBAs using servers with less than 512 MB RAM. Is \n>> PostgreSQL supposed to be used by professional DBAs on enterprise \n>> systems or is it supposed to run out of the box on my old Pentium 3?\n>\n> you'll discover that the Linux default for how much memory an \n> application like PostgreSQL can allocate is 32MB. This is true even if \n> you install the OS on a system with 128GB of RAM.\n\nOne thing that might help people swallow the off-putting default \"toy \nmode\" performance of PostgreSQL would be an explanation of why \nPostgreSQL uses its shared memory architecture in the first place. How \nmuch of a performance or stability advantage does it confer under what \ndatabase usage and hardware scenarios? How can any such claims be proven \nexcept by writing a bare-bones database server from scratch that can use \nmultiple memory models?\n\n-Kevin Murphy\n\n", "msg_date": "Tue, 14 Oct 2008 10:59:14 -0400", "msg_from": "Kevin Murphy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" 
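For readers who only want the practical side of the SHMMAX discussion above: a minimal sketch of the usual Linux-side adjustment, paired with the postgresql.conf setting it exists to accommodate. The parameter names (kernel.shmmax, shared_buffers) are real; the sizes are purely illustrative and need adapting to the machine and PostgreSQL version (8.1 takes shared_buffers as a count of 8kB pages, 8.2 and later accept values like '256MB').

  # /etc/sysctl.conf -- raise the kernel's shared memory ceiling (bytes)
  kernel.shmmax = 536870912     # 512MB; apply without reboot via: sysctl -p

  # postgresql.conf -- shared_buffers plus some overhead must fit under
  # kernel.shmmax, or the server refuses to start
  shared_buffers = 256MB

Without the first change, raising shared_buffers past the distribution default is exactly what produces the familiar "could not create shared memory segment" failure at startup.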
}, { "msg_contents": "On Tue, Oct 14, 2008 at 10:05 PM, Martin Gainty <[email protected]> wrote:\n> MG>comments prefixed with MG>\n\n> MG>What about INNODB is Drupal forgetting the default engine for 5.x?\n\nI don't use MySQL for drupal myself except for testing, but Drupal\njust uses the default storage engine for MySQL, which happens to be\nMyISAM on my Ubuntu Hardy. Doesn't seem to have much difference\nbetween InnoDB or MyISAM, but I don't have a sizable Drupal site to\ntest that on.\n\n>> esp. when Drupal stupidly issues some of the exact same queries up to\n>> 9 times!\n> MG>isnt this in query_cache..why is Drupal going braindead on this item?\n\nYes, it's rather braindead. I'd rather not worry about why, but how'd\nwe make Drupal use the PostgreSQL more effectively. In it's current\nform (Drupal 5 and 6), it even issues a regexp for every query, even\nbefore it hits the database because of some design decisions to use\nuser definable table prefix as a workaround to the lack of database\nSCHEMA in MySQL: take the follow snippet as a representative Drupal\ncode:\n\n$alias = db_result(db_query(\"SELECT dst FROM {url_alias} WHERE src =\n'%s' AND language IN('%s', '') ORDER BY language DESC\", $path,\n$path_language));\n\nThat's one sprintf() and a number of string replace operations to\nreplace \"{url_alias}\" with \"url_alias\", as well as a number of regexp\nto sanitize the query string.\n\nNote this comment:\n/*\n * Queries sent to Drupal should wrap all table names in curly brackets. This\n * function searches for this syntax and adds Drupal's table prefix to all\n * tables, allowing Drupal to coexist with other systems in the same database if\n * necessary.\n*/\nThat's an MySQL-ism for working around legacy hosting sites offering\nonly a single MySQL db bogging postgresql down...\n\nAlso betraying MyISAM heritage (in Drupal pgsql driver):\n/**\n * Lock a table.\n * This function automatically starts a transaction.\n */\nfunction db_lock_table($table) {\n db_query('BEGIN; LOCK TABLE {'. db_escape_table($table) .'} IN\nEXCLUSIVE MODE');\n}\n\n/**\n * Unlock all locked tables.\n * This function automatically commits a transaction.\n */\nfunction db_unlock_tables() {\n db_query('COMMIT');\n}\n\n> MG>From what i've been reading the author doesnt work on drupal anymore\n\nNot when Dries founded a commercial venture selling Drupal hosting and services.\nhttp://acquia.com/products-services/acquia-frequently-asked-questions#driesrole\n\nDoesn't bode too well though when they don't support PostgreSQL directly:\nhttp://acquia.com/products-services/acquia-drupal-supported-platforms\n\n> MG>If we could get access to the php source maybe we could fix this..?\n\nEh? Feel free: http://drupal.org/ It's an excellent CMS, with lots of\ncool features, but comes with it's own set of wtf surprises.\nTo me, there're two ways to proceed: dive straight into the\ndevelopment Drupal release and make the next version work better from\nthe ground up, or hack the existing Drupal postgresql driver and\nfrequently used SQLs and optimize them. We're working on the latter\nbecause of some legacy code we have to support, but the former would\nbe the long term plan.\n\nOptimizing PostgreSQL to make Drupal faster is not the correct way as\nPostgreSQL is fast, scalable and robust enough. Just need to be used\nmore correctly.\n", "msg_date": "Tue, 14 Oct 2008 23:28:03 +0800", "msg_from": "\"Ang Chin Han\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" 
}, { "msg_contents": "On Tue, 14 Oct 2008 06:56:02 -0600\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> >> This is a useful question, but there are reasonable answers to\n> >> it. The key underlying principle is that it's impossible to\n> >> know what will work well in a given situation until that\n> >> situation is tested. That's why benchmarks from someone else's\n> >> box are often mostly useless on your box, except for predicting\n> >> generalities and then only when they agree with other people's\n> >> benchmarks. PostgreSQL ships with a very conservative default\n> >> configuration because (among other things, perhaps) 1) it's a\n> >> configuration that's very unlikely to fail miserably for most\n> >> situations, and 2)\n\n> > So your target are potential skilled DBA that have a coffe pot as\n> > testing machine?\n\n> Actually a lot has been done to better tune pgsql out of the box,\n> but since it uses shared memory and many oses still come with\n> incredibly low shared mem settings we're stuck.\n\nFrom my naive understanding the parameters you can tweak for major\nimprovements can be counted on one hand's finger:\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nAre you going to say that expert hands could squeeze other 20%\nperformance more after the usual, pretty simple tweaks an automatic\ntool could achieve on a general workload for a cms?\nFrom the specific test Mikkel did?\n\nWouldn't it be a much better starting point to discuss if PostgreSQL\nis a suitable tool for a CMS if those pretty basic tweaks were done\nautomatically or included as example config in PostgreSQL\ndistribution?\n\n> > Still you've another DB that kick your ass in most common\n> > hardware configuration and workload. Something has to be done\n> > about the tuning. Again... a not tuned Ferrari can't win a F1 GP\n> > competing with a tuned McLaren but it can stay close. A Skoda\n> > Fabia can't.\n\n> Except the current benchmark is how fast you can change the tires.\n\nIt is not and anyway you won't reply: you've to tune PostgreSQL so\nyou can change tires very fast.\nThe test was a very low load of Drupal on pg and mysql.\nAre you expecting that on a higher load pg will outperform mysql?\nThen say so. Don't say you've to tune PostgreSQL.\nDo you think that PostgreSQL can outperform mysql on very low load\nwith tuning?\n\n> > When people come here and ask why PostgreSQL is slow as a Skoda\n> > compared to a Ferrari in some tasks and you reply they have to\n> > tune... a) they will think you're trying to sell them a Skoda b)\n> > they will think you're selling a Ferrari in a mounting kit.\n\n> Actually the most common answer is to ask them if they've actually\n> used a realistic benchmark. Then tune.\n\nrealistic? What was not realistic about Mikkel's test?\nI'd say it is not the kind of workload PostgreSQL was built for.\nBTW I don't buy the idea that even with correct tuning Drupal is\ngoing to be any faster with PostgreSQL on a mostly \"read-only\"\nbenchmark.\nThis makes an even more painful experience and undermine the trust\nof the people coming here and asking why PostgreSQL is slow compared\nto mySQL.\nThe best replies I've read were from Ang Chin Han.\n\n> > Remember we are talking about PostgreSQL vs. 
MySQL performance\n> > running Drupal.\n\n> Yes, and the very first consideration should be, \"Will the db I'm\n> choosing be likely to eat my data?\" If you're not sure on that one\n> all the benchmarketing in the world won't make a difference.\n\nMaybe a DB eating data is a sustainable solution to your problem.\nBut I think the web is mature enough so that very few web apps worth\nto be used could consider their data integrity a cheap asset.\nThis could be a good selling point even for people that grew up with\nMySQL.\n\n> > But still people point at benchmark where PostgreSQL outperform\n> > MySQL.\n> > People get puzzled.\n\n> Because they don't understand what databases are and what they do\n> maybe?\n\nThen? I doubt that pointing them at tuning docs will make them\nunderstand if PostgreSQL is the right tool.\n\n> > I don't have direct experience on corrupted DB... but I'd say it\n> > is easier to program PostgreSQL than MySQL once your project is\n> > over 30 lines of code because it is less sloppy.\n> > This is easier to prove: point at the docs and to SQL standard.\n\n> Lots of people feel MySQL's tutorial style docs are easier to\n> comprehend. Especially those unfamiliar with dbs. I prefer\n> PostgreSQL's docs, as they are more thorough better suited for a\n> semi-knowledgable DBA.\n\nWhat I meant was... I don't like a DB that silently turn my strings\ninto int or trim strings to make them fit into a varchar etc...\n\n> >> it's assumed that if server performance matters, someone will\n> >> spend time tuning things. The fact that database X performs\n> >> better than PostgreSQL out of the box is fairly irrelevant; if\n> >> performance matters, you won't use the defaults, you'll find\n> >> better ones that work for you.\n\n> > The fact that out of the box on common hardware PostgreSQL\n> > under-perform MySQL with default config would matter if few\n> > paragraph below you wouldn't say that integrity has a *big*\n> > performance cost even on read-only operation.\n> > When people come back crying that PostgreSQL under-perform with\n> > Drupal they generally show a considerable gap between the 2.\n\n> Again, this is almost always for 1 to 5 users. Real world DBs have\n> dozens to hundreds to even thousands of simultaneous users. My\n> PostgreSQL servers at work routinely have 10 or 20 queries running\n> at the same time, and peak at 100 or more.\n\nThat's more in the right direction to help people comparing MySQL\nand PostgreSQL. Still you don't point them at tuning, because it is\nnot going to make PostgreSQL shine anyway.\n\n> > But generally the performance gap is astonishing on default\n> > configuration.\n\n> Only for unrealistic benchmarks. Seriously, for any benchmark with\n> large concurrency and / or high write percentage, postgreSQL wins.\n\nThen DON'T point at the default config as the culprit.\na) it is true that config plays a BIG role in MySQL\n*reasonable* benchmark comparisons: offer a reasonable config!\nb) it is not true: stop pointing at tuning. It is to say the least\npuzzling.\n\n> >It is hard to win the myth surrounding PostgreSQL...\n> > but again... if you've to trade integrity for speed... at least\n> > you should have numbers to show what are you talking about. Then\n> > people may decide.\n> > You're using a X% slower, Y% more reliable DB.\n> > You're using a X% slower, Y% more scalable DB. etc...\n> \n> It's not just integrity for speed! 
IT's the fact that MySQL has\n> serious issues with large concurrency, especially when there's a\n> fair bit of writes going on. This is especially true for myisam,\n> but not completely solved in the Oracle-owned innodb table handler.\n\nBut no one is going to believe you if the answer is:\nMySQL is going to eat your data. That's sound like FUD.\nI think that even after 1 year of:\nsiege -H \"Cookie: drupalsessid\" -c 5 \"http://drupal-site.local/\" -b\n-t30s\nMySQL is not going to eat your data.\nIf it smells like FUD people may think it is.\n\n> > Well horror stories about PostgreSQL being doggy slow are quite\n> > common among MySQL users.\n\n> Users who run single thread benchmarks. Let them pit their MySQL\n> servers against my production PostgreSQL servers with a realistic\n> load.\n\nThis doesn't make the default 2 lines of promoting PostgreSQL any\nbetter:\n- you've to tune\n- mySQL will eat your data\n\n> > If I see a performance gap of 50% I'm going to think that's not\n> > going to be that easy to fill it with \"tuning\".\n> > That means:\n> > - I may think that with a *reasonable* effort I could come close\n> > and then I'll have to find other good reasons other than\n> > performances to chose A in spite of B\n\n> Then you are putting your cart before your horse. Choosing a db\n> based on a single synthetic benchmark is like buying a car based\n> on the color of the shift knob. Quality is far more important.\n> And so is your data: \"MySQL mangling your data faster than any\n> other db!\" is not a good selling point..\n\nBut the reply: you've to tune and mysql will eat your data doesn't\nget nearer to the kernel of the problem.\n\n> > Now... you've to tune is not the kind of answer that will help\n> > me to take a decision in favour of PostgreSQL.\n\n> Then please use MySQL. I've got a db that works well for me. When\n> MySQL proves incapable of handling the load, then come back and ask\n> for help migrating.\n\nOK... then you admit that suggesting to tune and scaring people is\njust delaying the migration ;)\n\n> > Why do comparisons between PostgreSQL and MySQL come up so\n> > frequently?\n> >\n> > Because MySQL \"is the DB of the Web\".\n> > Many web apps are (were) mainly \"read-only\" and their data\n> > integrity is (was) not so important.\n> > Many Web apps are (were) simple.\n> >\n> > Web apps and CMS are a reasonably large slice of what's moving on\n> > the net. Do these applications need the features PostgreSQL has?\n> \n> Is their data important? Is downtime a bad thing for them?\n> \n> > Is there any trade off? Is it worth to pay that trade off?\n> >\n> > Is it worth to conquer this audience even if they are not skilled\n> > DBA?\n> \n> Only if they're willing to learn. I can't spend all day tuning\n\nI bet some are willing to learn. 
Just pointing them at tuning and\nusing FUD-like arguments is not a very good advertising for\nPostgreSQL Kung-Fu school.\n\nBTW I hope someone may find good use of this:\n\n2xXeon HT CPU 3.20GHz (not dual core), 4Gb RAM, RAID 5 SCSI\n* absolutely not tuned Apache\n* absolutely not tuned Drupal with little content, some blocks and\nsome google adds\n(just CSS aggregation and caching enabled)\n* lightly tuned PostgreSQL 8.1\nshared_buffers = 3500\nwork_mem = 32768\ncheckpoint_segments = 10\neffective_cache_size = 15000\nrandom_page_cost = 3\ndefault_statistics_target = 30\n\nsiege -H \"Cookie: drupalsessid\" -c1 \"localhost/d1\"\n-b -t30s\n\n-c 1\nTransactions: 485 hits\nAvailability: 100.00 %\nElapsed time: 29.95 secs\nData transferred: 5.33 MB\nResponse time: 0.06 secs\nTransaction rate: 16.19 trans/sec\nThroughput: 0.18 MB/sec\nConcurrency: 1.00\nSuccessful transactions: 485\nFailed transactions: 0\nLongest transaction: 0.13\nShortest transaction: 0.06\n\n-c 5\nTransactions: 1017 hits\nAvailability: 100.00 %\nElapsed time: 29.61 secs\nData transferred: 11.29 MB\nResponse time: 0.15 secs\nTransaction rate: 34.35 trans/sec\nThroughput: 0.38 MB/sec\nConcurrency: 4.98\nSuccessful transactions: 1017\nFailed transactions: 0\nLongest transaction: 0.24\nShortest transaction: 0.08\n\n-c 20\nTransactions: 999 hits\nAvailability: 100.00 %\nElapsed time: 30.11 secs\nData transferred: 11.08 MB\nResponse time: 0.60 secs\nTransaction rate: 33.18 trans/sec\nThroughput: 0.37 MB/sec\nConcurrency: 19.75\nSuccessful transactions: 999\nFailed transactions: 0\nLongest transaction: 1.21\nShortest transaction: 0.10\n\n-c 100\nTransactions: 1085 hits\nAvailability: 100.00 %\nElapsed time: 29.97 secs\nData transferred: 9.61 MB\nResponse time: 2.54 secs\nTransaction rate: 36.20 trans/sec\nThroughput: 0.32 MB/sec\nConcurrency: 91.97\nSuccessful transactions: 911\nFailed transactions: 0\nLongest transaction: 12.41\nShortest transaction: 0.07\n\n-c 200\nTransactions: 1116 hits\nAvailability: 100.00 %\nElapsed time: 30.02 secs\nData transferred: 9.10 MB\nResponse time: 4.85 secs\nTransaction rate: 37.18 trans/sec\nThroughput: 0.30 MB/sec\nConcurrency: 180.43\nSuccessful transactions: 852\nFailed transactions: 0\nLongest transaction: 15.85\nShortest transaction: 0.25\n\n-c 400\nTransactions: 1133 hits\nAvailability: 100.00 %\nElapsed time: 29.76 secs\nData transferred: 8.51 MB\nResponse time: 6.98 secs\nTransaction rate: 38.07 trans/sec\nThroughput: 0.29 MB/sec\nConcurrency: 265.85\nSuccessful transactions: 736\nFailed transactions: 0\nLongest transaction: 28.55\nShortest transaction: 0.00\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Tue, 14 Oct 2008 18:44:56 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Hmm, those are interesting numbers. Did you use a real, logged in, \ndrupal session ID (anonymous users also get one, which still gives \nthem cached pages).\n\nThey are in the form of \n\"SESS6df8919ff2bffc5de8bcf0ad65f9dc0f=59f68e60a120de47c2cb5c98b010ffff\"\n\nNote how the thoughput stays in the 30-ish range from 100 to 400, \nalthough the response time climbs steeply. 
That might indicate that \nyour Apache configuration is the main roadblock here, since that \nindicates that clients are waiting for a free Apache process to handle \ntheir request (I suppose you're using MPM_PREFORK)�\n--\nKind regards,\n\nMikkel H�gh <[email protected]>\n\nOn 14/10/2008, at 18.44, Ivan Sergio Borgonovo wrote:\n\n> BTW I hope someone may find good use of this:\n>\n> 2xXeon HT CPU 3.20GHz (not dual core), 4Gb RAM, RAID 5 SCSI\n> * absolutely not tuned Apache\n> * absolutely not tuned Drupal with little content, some blocks and\n> some google adds\n> (just CSS aggregation and caching enabled)\n> * lightly tuned PostgreSQL 8.1\n> shared_buffers = 3500\n> work_mem = 32768\n> checkpoint_segments = 10\n> effective_cache_size = 15000\n> random_page_cost = 3\n> default_statistics_target = 30\n>\n> siege -H \"Cookie: drupalsessid\" -c1 \"localhost/d1\"\n> -b -t30s\n>\n> -c 1\n> Transactions: 485 hits\n> Availability: 100.00 %\n> Elapsed time: 29.95 secs\n> Data transferred: 5.33 MB\n> Response time: 0.06 secs\n> Transaction rate: 16.19 trans/sec\n> Throughput: 0.18 MB/sec\n> Concurrency: 1.00\n> Successful transactions: 485\n> Failed transactions: 0\n> Longest transaction: 0.13\n> Shortest transaction: 0.06\n>\n> -c 5\n> Transactions: 1017 hits\n> Availability: 100.00 %\n> Elapsed time: 29.61 secs\n> Data transferred: 11.29 MB\n> Response time: 0.15 secs\n> Transaction rate: 34.35 trans/sec\n> Throughput: 0.38 MB/sec\n> Concurrency: 4.98\n> Successful transactions: 1017\n> Failed transactions: 0\n> Longest transaction: 0.24\n> Shortest transaction: 0.08\n>\n> -c 20\n> Transactions: 999 hits\n> Availability: 100.00 %\n> Elapsed time: 30.11 secs\n> Data transferred: 11.08 MB\n> Response time: 0.60 secs\n> Transaction rate: 33.18 trans/sec\n> Throughput: 0.37 MB/sec\n> Concurrency: 19.75\n> Successful transactions: 999\n> Failed transactions: 0\n> Longest transaction: 1.21\n> Shortest transaction: 0.10\n>\n> -c 100\n> Transactions: 1085 hits\n> Availability: 100.00 %\n> Elapsed time: 29.97 secs\n> Data transferred: 9.61 MB\n> Response time: 2.54 secs\n> Transaction rate: 36.20 trans/sec\n> Throughput: 0.32 MB/sec\n> Concurrency: 91.97\n> Successful transactions: 911\n> Failed transactions: 0\n> Longest transaction: 12.41\n> Shortest transaction: 0.07\n>\n> -c 200\n> Transactions: 1116 hits\n> Availability: 100.00 %\n> Elapsed time: 30.02 secs\n> Data transferred: 9.10 MB\n> Response time: 4.85 secs\n> Transaction rate: 37.18 trans/sec\n> Throughput: 0.30 MB/sec\n> Concurrency: 180.43\n> Successful transactions: 852\n> Failed transactions: 0\n> Longest transaction: 15.85\n> Shortest transaction: 0.25\n>\n> -c 400\n> Transactions: 1133 hits\n> Availability: 100.00 %\n> Elapsed time: 29.76 secs\n> Data transferred: 8.51 MB\n> Response time: 6.98 secs\n> Transaction rate: 38.07 trans/sec\n> Throughput: 0.29 MB/sec\n> Concurrency: 265.85\n> Successful transactions: 736\n> Failed transactions: 0\n> Longest transaction: 28.55\n> Shortest transaction: 0.00", "msg_date": "Tue, 14 Oct 2008 19:05:42 +0200", "msg_from": "=?WINDOWS-1252?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "--\nMed venlig hilsen,\n\nMikkel H�gh <[email protected]>\n\nOn 14/10/2008, at 17.28, Ang Chin Han wrote:\n\n> On Tue, Oct 14, 2008 at 10:05 PM, Martin Gainty \n> <[email protected]> wrote:\n> Yes, it's rather braindead. I'd rather not worry about why, but how'd\n> we make Drupal use the PostgreSQL more effectively. 
In it's current\n> form (Drupal 5 and 6), it even issues a regexp for every query, even\n> before it hits the database because of some design decisions to use\n> user definable table prefix as a workaround to the lack of database\n> SCHEMA in MySQL: take the follow snippet as a representative Drupal\n> code:\n>\n> $alias = db_result(db_query(\"SELECT dst FROM {url_alias} WHERE src =\n> '%s' AND language IN('%s', '') ORDER BY language DESC\", $path,\n> $path_language));\n>\n> That's one sprintf() and a number of string replace operations to\n> replace \"{url_alias}\" with \"url_alias\", as well as a number of regexp\n> to sanitize the query string.\n\nYeah, good thing this is all going away in Drupal 6 (or much of it, at \nany rate), where we are converting to PHP's PDO abstraction layer.\n\n> Note this comment:\n> /*\n> * Queries sent to Drupal should wrap all table names in curly \n> brackets. This\n> * function searches for this syntax and adds Drupal's table prefix \n> to all\n> * tables, allowing Drupal to coexist with other systems in the same \n> database if\n> * necessary.\n> */\n> That's an MySQL-ism for working around legacy hosting sites offering\n> only a single MySQL db bogging postgresql down...\n\nYeah, this is one thing I wouldn't mind if they removed. I have never \nused that feature, and will probably never do so. Having multiple \napplications in the same database is a mess anyways.\n\n> Also betraying MyISAM heritage (in Drupal pgsql driver):\n> /**\n> * Lock a table.\n> * This function automatically starts a transaction.\n> */\n> function db_lock_table($table) {\n> db_query('BEGIN; LOCK TABLE {'. db_escape_table($table) .'} IN\n> EXCLUSIVE MODE');\n> }\n>\n> /**\n> * Unlock all locked tables.\n> * This function automatically commits a transaction.\n> */\n> function db_unlock_tables() {\n> db_query('COMMIT');\n> }\n\nYeah, sadly, our PostgreSQL driver has not historically been \nmaintained by someone who \"knows what's he's doing\". Our current \ndatabase maintainer, Larry Garfield, has some knowledge of PostgreSQL, \nbut he's offered pleas for input a couple of times recently. I hope to \ngain some more knowledge to help his efforts.\n\n>\n>> MG>From what i've been reading the author doesnt work on drupal \n>> anymore\n>\n> Not when Dries founded a commercial venture selling Drupal hosting \n> and services.\n> http://acquia.com/products-services/acquia-frequently-asked-questions#driesrole\n\nWell, Dries is still the project lead of Drupal, and he's betting his \ncompanys future on it, so I'm quite sure he's doing the best he can :)\n>\n>> MG>If we could get access to the php source maybe we could fix \n>> this..?\n\nYeah, there's cvs.drupal.org if you're interested :)\n>", "msg_date": "Tue, 14 Oct 2008 19:14:54 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\tMikkel Høgh wrote:\n\n> In any case, if anyone has any tips, input, etc. 
on how best to \n> configure PostgreSQL for Drupal, or can find a way to poke holes in \nmy \n> analysis, I would love to hear your insights :)\n\nI'm a recent Drupal user with postgres.\nWhat I've noticed on drupal-6.4 with Ubuntu 8.04 is that the default \npostgresql.conf has:\n ssl=true\nand since drupal doesn't allow connecting to pgsql with unix socket \npaths [1], what you get by default is probably TCP + SSL encryption.\nA crude test that just connects and disconnect to a local pg server \nappears to me to be 18 times faster when SSL is off.\nSo you might want to check if setting ssl to false makes a difference \nfor your test.\n\n[1] A patch has been posted here: http://drupal.org/node/26836 , but it \nseems to have gotten nowhere. The comments about pg_connect() are \ndepressingly lame, apparently nobody had a clue how unix socket files \nshould be specified, including the contributor of the patch!\n\n Best regards,\n-- \n Daniel\n PostgreSQL-powered mail user agent and storage: \nhttp://www.manitou-mail.org\n", "msg_date": "Tue, 14 Oct 2008 20:23:47 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 14/10/2008, at 20.23, Daniel Verite wrote:\n> What I've noticed on drupal-6.4 with Ubuntu 8.04 is that the default \n> postgresql.conf has:\n> ssl=true\n> and since drupal doesn't allow connecting to pgsql with unix socket \n> paths [1], what you get by default is probably TCP + SSL encryption.\n> A crude test that just connects and disconnect to a local pg server \n> appears to me to be 18 times faster when SSL is off.\n> So you might want to check if setting ssl to false makes a \n> difference for your test.\n\nOuch, there's a gotcha. So enabling SSL gives you SSL connections by \ndefault, even for localhost? That�s� unexpected.\n\n> [1] A patch has been posted here: http://drupal.org/node/26836 , but \n> it seems to have gotten nowhere. The comments about pg_connect() are \n> depressingly lame, apparently nobody had a clue how unix socket \n> files should be specified, including the contributor of the patch!\n\nWell, I suppose no one thought about looking at a specific path \ninstead of just the default location. That's what we do for MySQL as \nwell. I suppose that is a bit silly, but that, too, is going away in \nDrupal 7 (and I won't miss it). It will do the Drupal project a lot of \ngood not having to maintain its own database abstraction system.\n\nI'm going to run the test again without SSL to see how much difference \nit does. Thanks for the tip.", "msg_date": "Tue, 14 Oct 2008 20:39:33 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 12:39 PM, Mikkel Høgh <[email protected]> wrote:\n> On 14/10/2008, at 20.23, Daniel Verite wrote:\n>>\n>> What I've noticed on drupal-6.4 with Ubuntu 8.04 is that the default\n>> postgresql.conf has:\n>> ssl=true\n>> and since drupal doesn't allow connecting to pgsql with unix socket paths\n>> [1], what you get by default is probably TCP + SSL encryption.\n>> A crude test that just connects and disconnect to a local pg server\n>> appears to me to be 18 times faster when SSL is off.\n>> So you might want to check if setting ssl to false makes a difference for\n>> your test.\n>\n> Ouch, there's a gotcha. 
So enabling SSL gives you SSL connections by\n> default, even for localhost? That's… unexpected.\n>\n>> [1] A patch has been posted here: http://drupal.org/node/26836 , but it\n>> seems to have gotten nowhere. The comments about pg_connect() are\n>> depressingly lame, apparently nobody had a clue how unix socket files should\n>> be specified, including the contributor of the patch!\n>\n> Well, I suppose no one thought about looking at a specific path instead of\n> just the default location. That's what we do for MySQL as well. I suppose\n> that is a bit silly, but that, too, is going away in Drupal 7 (and I won't\n> miss it). It will do the Drupal project a lot of good not having to maintain\n> its own database abstraction system.\n>\n> I'm going to run the test again without SSL to see how much difference it\n> does. Thanks for the tip.\n\nAlso, look at setting up memcached. It makes a world of difference.\n", "msg_date": "Tue, 14 Oct 2008 12:44:43 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 08:23:47PM +0200, Daniel Verite wrote:\n> [1] A patch has been posted here: http://drupal.org/node/26836 , but it \n> seems to have gotten nowhere. The comments about pg_connect() are \n> depressingly lame, apparently nobody had a clue how unix socket files \n> should be specified, including the contributor of the patch!\n\nErr, presumably host=/tmp would do it. It's documented in the PG docs,\nperhaps someone should update the PHP docs.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Please line up in a tree and maintain the heap invariant while \n> boarding. Thank you for flying nlogn airlines.", "msg_date": "Tue, 14 Oct 2008 21:05:48 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Mikkel H�gh wrote:\n> On 14/10/2008, at 20.23, Daniel Verite wrote:\n>> What I've noticed on drupal-6.4 with Ubuntu 8.04 is that the default\n>> postgresql.conf has:\n>> ssl=true\n>> and since drupal doesn't allow connecting to pgsql with unix socket\n>> paths [1], what you get by default is probably TCP + SSL encryption.\n>> A crude test that just connects and disconnect to a local pg server\n>> appears to me to be 18 times faster when SSL is off.\n>> So you might want to check if setting ssl to false makes a difference\n>> for your test.\n> \n> Ouch, there's a gotcha. So enabling SSL gives you SSL connections by\n> default, even for localhost? That�s� unexpected.\n\nYes. To avoid it, specify \"hostnossl\" in the pg_hba.conf file.That will\nrefuse all SSL connections.\n\n\n//Magnus\n", "msg_date": "Tue, 14 Oct 2008 21:17:39 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "forwarding here too cos I just got the copy sent to my personal\naddress and just much later the one from pg list... + adding some\nmore siege run.\n\nOn Tue, 14 Oct 2008 19:05:42 +0200\nMikkel Høgh <[email protected]> wrote:\n\n> Hmm, those are interesting numbers. 
Did you use a real, logged\n> in, drupal session ID (anonymous users also get one, which still\n> gives them cached pages).\n\n> They are in the form of \n> \"SESS6df8919ff2bffc5de8bcf0ad65f9dc0f=59f68e60a120de47c2cb5c98b010ffff\"\n> \n> Note how the thoughput stays in the 30-ish range from 100 to 400, \n> although the response time climbs steeply. That might indicate\n> that your Apache configuration is the main roadblock here, since\n> that indicates that clients are waiting for a free Apache process\n> to handle their request (I suppose you're using MPM_PREFORK)…\n\nright\nivan@wtw:~$ aptitude search apache2 | grep prefork\ni A apache2-mpm-prefork - Traditional model for Apache\nHTTPD 2.1\nBut... well since Apache was not tuned... if I remember right it\ncomes with a default of 100 processes max.\n\nI copied your siege line.\n\nApache config was nearly untouched with the exception of virtualhost\nconfigs.\nEverything was running on a nearly stock Debian etch.\nthx for suggestions about tuning Apache...\nCurrently I'm just installing D7 on my notebook to see how it's\ngoing.\n\nThe notebook is running Lenny... that comes with pg 8.3.\n\nI really hope D7 is going in the right direction for DB support.\nI really would like to help there but I'm overwhelmed by a\nmulti-months, thousands lines of code drupal/postgresql project.\n\nOK these were run on:\nCore(TM)2 CPU T7200 @ 2.00GHz\n2Gb RAM\nnotebook so no raid etc... actually a very slow HD.\nDefault debian lenny install of pg and apache NO tuning at all.\nvserver kernel but not run in a vserver.\nSo pg is 8.3 stock debian install + minor tweak to pg_hba just to\nsetup all the stuff.\nD7 with most modules on and basic cache systems on.\n\n-c20\nTransactions: 1446 hits\nAvailability: 100.00 %\nElapsed time: 30.30 secs\nData transferred: 2.87 MB\nResponse time: 0.42 secs\nTransaction rate: 47.72 trans/sec\nThroughput: 0.09 MB/sec\nConcurrency: 19.87\nSuccessful transactions: 1446\nFailed transactions: 0\nLongest transaction: 0.60\nShortest transaction: 0.09\n\n-c100\nTransactions: 1396 hits\nAvailability: 100.00 %\nElapsed time: 30.13 secs\nData transferred: 2.77 MB\nResponse time: 2.08 secs\nTransaction rate: 46.33 trans/sec\nThroughput: 0.09 MB/sec\nConcurrency: 96.46\nSuccessful transactions: 1396\nFailed transactions: 0\nLongest transaction: 2.67\nShortest transaction: 0.09\n\nPretty impressive improvement.\nHard to say if the improvement was due to D7 update or PG 8.3 update.\nIf I've to chse I'd say the improvement comes from drupal new\ncaching system.\nI'll try to find some spare time and do some more extensive\nbenchmark.\n\nIt would be interesting to see which are the slowest queries.\n\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Tue, 14 Oct 2008 22:18:31 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "benchmark on D7 + PG 8.3 Re: Drupal and PostgreSQL -\n\tperformance issues?" 
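On the closing question of which queries are the slow ones: a minimal way to have PostgreSQL report them itself, assuming superuser access to the test instance. log_min_duration_statement is a real parameter (milliseconds); the 100 ms threshold and the sample query values are only illustrative, the query shape being the one quoted earlier in the thread.

  -- postgresql.conf equivalent: log_min_duration_statement = 100
  -- as a superuser, for the current session only:
  SET log_min_duration_statement = 100;   -- milliseconds; -1 disables, 0 logs all

  -- then take any statement the log flags and inspect its plan by hand:
  EXPLAIN ANALYZE
  SELECT dst FROM url_alias WHERE src = 'node/1' AND language IN ('en', '');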
}, { "msg_contents": "After disabling SSL for localhost, I ran the tests again, and the \nperformance gap is reduced to about 30%.\n--\nKind regards,\n\nMikkel H�gh <[email protected]>\n\nOn 14/10/2008, at 21.17, Magnus Hagander wrote:\n\n> Mikkel H�gh wrote:\n>> On 14/10/2008, at 20.23, Daniel Verite wrote:\n>>> What I've noticed on drupal-6.4 with Ubuntu 8.04 is that the default\n>>> postgresql.conf has:\n>>> ssl=true\n>>> and since drupal doesn't allow connecting to pgsql with unix socket\n>>> paths [1], what you get by default is probably TCP + SSL encryption.\n>>> A crude test that just connects and disconnect to a local pg server\n>>> appears to me to be 18 times faster when SSL is off.\n>>> So you might want to check if setting ssl to false makes a \n>>> difference\n>>> for your test.\n>>\n>> Ouch, there's a gotcha. So enabling SSL gives you SSL connections by\n>> default, even for localhost? That�s� unexpected.\n>\n> Yes. To avoid it, specify \"hostnossl\" in the pg_hba.conf file.That \n> will\n> refuse all SSL connections.\n>\n>\n> //Magnus\n>\n> -- \n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general", "msg_date": "Tue, 14 Oct 2008 23:04:36 +0200", "msg_from": "=?WINDOWS-1252?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 5:04 PM, Mikkel Høgh <[email protected]> wrote:\n> After disabling SSL for localhost, I ran the tests again, and the\n> performance gap is reduced to about 30%.\n\nok, now consider you are on a read only load, with lots (if I read the\nthread correctly) repetitive queries. mysql has a feature called the\nquery cache which optimizes sending the same query multiple times, but\ninvalidates when a table changes.\n\npostgresql has a very efficient locking engine, so it tends to beat\nmysql up when you have lots of writing going on from different users.\nmyisam has an edge on mainly read only data in some cases...but 30% is\na small price to pay for all the extra power you get (and that 30%\nwill flip quickly if you have to do any real work).\n\nmerlin\n", "msg_date": "Tue, 14 Oct 2008 17:22:09 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 14/10/2008, at 23.22, Merlin Moncure wrote:\n\n> On Tue, Oct 14, 2008 at 5:04 PM, Mikkel H�gh <[email protected]> wrote:\n>> After disabling SSL for localhost, I ran the tests again, and the\n>> performance gap is reduced to about 30%.\n>\n> ok, now consider you are on a read only load, with lots (if I read the\n> thread correctly) repetitive queries. mysql has a feature called the\n> query cache which optimizes sending the same query multiple times, but\n> invalidates when a table changes.\n>\n> postgresql has a very efficient locking engine, so it tends to beat\n> mysql up when you have lots of writing going on from different users.\n> myisam has an edge on mainly read only data in some cases...but 30% is\n> a small price to pay for all the extra power you get (and that 30%\n> will flip quickly if you have to do any real work).\n>\n> merlin\n\n\nWell more or less. In this case, there are only two repeating queries, \nat the measly cost of 0.68ms. 
We do however have 31 lookups in to the \nsame table, where one is the dreaded \"SELECT COUNT(pid) FROM \nurl_alias\" which takes PostgreSQL a whopping 70.65ms out of the \n115.74ms total for 87 queries.\nAlso, we're not comparing to MyISAM. The 30% is against MySQLs InnoDB- \nengine, which is a lot closer to PostgreSQL feature-wise than MyISAM, \nbut there is of course still the query cache (which is quite small in \nthis case, given the low default settings, but still have a hit rate \naround 80%).\n\nIn any case, I like PostgreSQL better, and MySQLs future is a bit \nuncertain, with Innobase taken over by Oracle and MySQL AB taken over \nby Sun, so I'm going to continue to play with PostgreSQL, also to take \nadvantage of stuff like vsearch, which I wouldn't be able to in like a \nmillion years with MySQL - And thank you, Magnus, for coming to \nCopenhagen to tell us about it :)\n--\nKind regards,\n\nMikkel H�gh <[email protected]>", "msg_date": "Tue, 14 Oct 2008 23:57:36 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, 14 Oct 2008, Kevin Murphy wrote:\n\n> One thing that might help people swallow the off-putting default \"toy \n> mode\" performance of PostgreSQL would be an explanation of why \n> PostgreSQL uses its shared memory architecture in the first place.\n\nI doubt popularizing the boring technical details behind UNIX memory \nallocation sematics would help do anything but reinforce PostgreSQL's \nreputation for being complicated to setup. This same problem exists with \nmost serious databases, as everyone who has seen ORA-27123 can tell you. \nWhat we need is a tool to help people manage those details if asked. \nhttp://www.ibm.com/developerworks/db2/library/techarticle/dm-0509wright/ \nshows a good example; DB2's \"probe\" tool does this little bit of tuning \nfor you:\n\n \tDB2 has automatically updated the \"shmmax\" kernel\n parameter from \"33554432\" to the recommended value \"268435456\".\n\nAnd you're off--it bumped that from the default 32MB to 256MB. The \nproblem for PostgreSQL is that nobody who is motivated enough to write \nsuch magic for a large chunk of the supported platforms has had the time \nto do it yet. I'll get to that myself eventually even if nobody else \ndoes, as this is a recurring problem I'd like to make go away.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 14 Oct 2008 18:23:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\n> Note this comment:\n> /*\n> * Queries sent to Drupal should wrap all table names in curly brackets. This\n> * function searches for this syntax and adds Drupal's table prefix to all\n> * tables, allowing Drupal to coexist with other systems in the same database if\n> * necessary.\n> */\n> That's an MySQL-ism for working around legacy hosting sites offering\n> only a single MySQL db bogging postgresql down...\n\nNo it's not. It's about being able to use a single db for multiple \napp's. Either I do something like that, or I have to [hardcode] change \nschemas after each connection because I only have a single db & a single \ndb user.. 
which postgres/oracle[I'm sure others] support but not mysql.\n\nShared hosts don't give you a lot of resources, so apps build stuff like \nthat in to make it easier.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Wed, 15 Oct 2008 09:35:32 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 4:35 PM, Chris <[email protected]> wrote:\n>\n>> Note this comment:\n>> /*\n>> * Queries sent to Drupal should wrap all table names in curly brackets.\n>> This\n>> * function searches for this syntax and adds Drupal's table prefix to all\n>> * tables, allowing Drupal to coexist with other systems in the same\n>> database if\n>> * necessary.\n>> */\n>> That's an MySQL-ism for working around legacy hosting sites offering\n>> only a single MySQL db bogging postgresql down...\n>\n> No it's not. It's about being able to use a single db for multiple app's.\n> Either I do something like that, or I have to [hardcode] change schemas\n> after each connection because I only have a single db & a single db user..\n> which postgres/oracle[I'm sure others] support but not mysql.\n\nAre you saying you have to reconnect to change schemas? In Oracle and\nPostgreSQL both you can change the current schema (or schemas for\npostgresql) with a single inline command.\n\nAlso, Oracle and PostgreSQL support differing default schemas for\nindividual users, so you can have connections pooled by user to go to\na certain schema.\n\n> Shared hosts don't give you a lot of resources, so apps build stuff like\n> that in to make it easier.\n\nSchemas cost virtually nothing.\n", "msg_date": "Tue, 14 Oct 2008 18:00:23 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\n> Are you saying you have to reconnect to change schemas? In Oracle and\n> PostgreSQL both you can change the current schema (or schemas for\n> postgresql) with a single inline command.\n\nNo I meant you have to change the schema after connecting.\n\nSome hosts only give you one db & one user. Yeh it sucks but that's all \nthey give you.\n\nBefore anyone says \"get a new host\".. from the end user POV.. you don't \nknow and/or don't care about the technical details, you just want \nsomething \"that works\" with what you have. It's not an ideal situation \nto be in but it definitely does happen.\n\n>> Shared hosts don't give you a lot of resources, so apps build stuff like\n>> that in to make it easier.\n> \n> Schemas cost virtually nothing.\n\nNeither does building a smart(er) app which lets you set a \"prefix\" for \nall of your tables so they are grouped together.\n\nYou'll get a tiny performance hit from doing the replacement of the \nprefix, but it's not going to be significant compared to everything else.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Wed, 15 Oct 2008 11:15:34 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Tue, Oct 14, 2008 at 6:15 PM, Chris <[email protected]> wrote:\n>\n>> Are you saying you have to reconnect to change schemas? 
In Oracle and\n>> PostgreSQL both you can change the current schema (or schemas for\n>> postgresql) with a single inline command.\n>\n> No I meant you have to change the schema after connecting.\n>\n> Some hosts only give you one db & one user. Yeh it sucks but that's all they\n> give you.\n\nYeah, if you could at least have multiple users with pgsql it would be\nideal. connect via user1, inherit his search_path, connect via user2,\ninherit the default search path, etc...\n\n> Before anyone says \"get a new host\".. from the end user POV.. you don't know\n> and/or don't care about the technical details, you just want something \"that\n> works\" with what you have. It's not an ideal situation to be in but it\n> definitely does happen.\n\nUnderstood. We aren't all working on large corporate in house servers\nfor our stuff that we can configure how we want.\n\n>>> Shared hosts don't give you a lot of resources, so apps build stuff like\n>>> that in to make it easier.\n>>\n>> Schemas cost virtually nothing.\n>\n> Neither does building a smart(er) app which lets you set a \"prefix\" for all\n> of your tables so they are grouped together.\n\nTrue. Given that MySQL treats the first of three identifiers in dot\nnotation as a db, and pg treats them as a schema, it would be a better\nsolution to have pg use schemas and mysql use dbs. Except for the\nhosting providers.\n\n> You'll get a tiny performance hit from doing the replacement of the prefix,\n> but it's not going to be significant compared to everything else.\n\nYeah, it's likely lost in the noise. I'd be more worried about ugly queries.\n", "msg_date": "Tue, 14 Oct 2008 18:21:06 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\nMartin Gainty <[email protected]> writes:\n\n> MG>comments prefixed with MG>\n\nIncidentally that's a good way to make sure people don't see your comments.\nThere are a few variations but the common denominator is that things prefixed\nwith \"foo>\" are quotations from earlier messages. Many mailers hide such\ncomments or de-emphasize them to help the user concentrate on the new\nmaterial.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Wed, 15 Oct 2008 02:03:06 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\nGreg Smith <[email protected]> writes:\n\n> \tDB2 has automatically updated the \"shmmax\" kernel\n> parameter from \"33554432\" to the recommended value \"268435456\".\n\nThis seems like a bogus thing for an application to do though. The Redhat\npeople seem happy with the idea but I'm pretty sure it would violate several\nDebian packaging rules. Generally it seems crazy for a distribution to ship\nconfigured one way by default but have packages change that behind the user's\nback. What if the admin set SHMMAX that way because he wanted it? What happens\nwhen a new distribution package has a new default but doesn't adjust it\nbecause it sees the \"admin\" has changed it -- even though it was actually\nPostgres which made the change?\n\n> And you're off--it bumped that from the default 32MB to 256MB. The problem for\n> PostgreSQL is that nobody who is motivated enough to write such magic for a\n> large chunk of the supported platforms has had the time to do it yet. 
I'll get\n> to that myself eventually even if nobody else does, as this is a recurring\n> problem I'd like to make go away.\n\nISTM the right way to make it go away is to allocate temporary files and mmap\nthem instead of using sysv shared memory. Then we can mmap as much as we want.\nBefore we lose root privileges we can even mlock as much as we want.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Wed, 15 Oct 2008 02:38:40 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Greg Smith <[email protected]> writes:\n>> DB2 has automatically updated the \"shmmax\" kernel\n>> parameter from \"33554432\" to the recommended value \"268435456\".\n\n> This seems like a bogus thing for an application to do though. The Redhat\n> people seem happy with the idea but I'm pretty sure it would violate several\n> Debian packaging rules.\n\nFWIW, I don't think Red Hat would accept it either.\n\n> ISTM the right way to make it go away is to allocate temporary files and mmap\n> them instead of using sysv shared memory.\n\nThat's a non-starter unless you can point to a different kernel API that\n(a) is as portable as SysV and (b) offers the same amount of assistance\ntowards preventing multiple postmasters in the same data directory.\nSee many, many previous discussions in -hackers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Oct 2008 23:07:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues? " }, { "msg_contents": "On Wed, 15 Oct 2008, Gregory Stark wrote:\n\n> Greg Smith <[email protected]> writes:\n>> \tDB2 has automatically updated the \"shmmax\" kernel\n>> parameter from \"33554432\" to the recommended value \"268435456\".\n>\n> This seems like a bogus thing for an application to do though.\n\nIt happens when you run a utility designed to figure out if the \napplication is compatible with your system and make corrections as it can \nto make it work properly. If you want something like that to be easy to \nuse, the optimal approach to achieve that is to just barrel ahead and \nnotify the admin what you did.\n\n> I'm pretty sure it would violate several Debian packaging rules.\n\nYou can wander to http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=67481 \nto see one of the many times someone has tried to get the SHMMAX situation \nstraightened out at the OS level. Quoth Martin Pitt: \"Debian \npackages...must not alter kernel parameters at will; if they did, they \nwould destroy each others settings.\", followed by the standard wontfix for \nactually changing anything. They did at least improve the error reporting \nwhen the server won't start because of this problem there.\n\n> Generally it seems crazy for a distribution to ship configured one way \n> by default but have packages change that behind the user's back. What if \n> the admin set SHMMAX that way because he wanted it? 
What happens when a \n> new distribution package has a new default but doesn't adjust it because \n> it sees the \"admin\" has changed it -- even though it was actually \n> Postgres which made the change?\n\nIf there were ever any Linux distributions that increased this value from \nthe tiny default, you might have a defensible position here (maybe \nOracle's RHEL fork does, they might do something here). I've certainly \nnever seen anything besides Solaris ship with a sensible SHMMAX setting \nfor database use on 2008 hardware out of the box. It's really quite odd, \nbut as numerous probes in this area (from the above in 2000 to Peter's \nrecent Linux bugzilla jaunt) show the resistance to making the OS default \nto any higher is considerable.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 15 Oct 2008 02:20:13 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> If there were ever any Linux distributions that increased this value from \n> the tiny default, you might have a defensible position here (maybe \n> Oracle's RHEL fork does, they might do something here). I've certainly \n> never seen anything besides Solaris ship with a sensible SHMMAX setting \n> for database use on 2008 hardware out of the box. It's really quite odd, \n> but as numerous probes in this area (from the above in 2000 to Peter's \n> recent Linux bugzilla jaunt) show the resistance to making the OS default \n> to any higher is considerable.\n\nI think the subtext there is that the Linux kernel hackers hate the SysV\nIPC APIs and wish they'd go away. They are presently constrained from\nremoving 'em by their desire for POSIX compliance, but you won't get\nthem to make any changes that might result in those APIs becoming more\nwidely used :-(\n\nMind you, I find the SysV APIs uselessly baroque too, but there is one\nfeature that we have to have that is not in mmap(): the ability to\ndetect other processes attached to a shmem block.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Oct 2008 12:13:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues? " }, { "msg_contents": "Tom Lane wrote:\n> I think the subtext there is that the Linux kernel hackers hate the SysV\n> IPC APIs and wish they'd go away. They are presently constrained from\n> removing 'em by their desire for POSIX compliance, but you won't get\n> them to make any changes that might result in those APIs becoming more\n> widely used :-(\n>\n> Mind you, I find the SysV APIs uselessly baroque too, but there is one\n> feature that we have to have that is not in mmap(): the ability to\n> detect other processes attached to a shmem block.\n\nDidn't we solve this problem on Windows? Can we do a similar thing in \nUnix and get ride of the SysV stuff?\n\n", "msg_date": "Wed, 15 Oct 2008 14:10:56 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "\"Matthew T. 
O'Connor\" <[email protected]> writes:\n> Tom Lane wrote:\n>> Mind you, I find the SysV APIs uselessly baroque too, but there is one\n>> feature that we have to have that is not in mmap(): the ability to\n>> detect other processes attached to a shmem block.\n\n> Didn't we solve this problem on Windows?\n\nNot terribly well --- see active thread on -hackers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Oct 2008 14:15:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues? " }, { "msg_contents": "Tom Lane wrote:\n> \"Matthew T. O'Connor\" <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Mind you, I find the SysV APIs uselessly baroque too, but there is one\n> >> feature that we have to have that is not in mmap(): the ability to\n> >> detect other processes attached to a shmem block.\n> \n> > Didn't we solve this problem on Windows?\n> \n> Not terribly well --- see active thread on -hackers.\n\nWe could allocate a small shared memory area to solve this and use\nmmap() for the other shared memory usage.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 15 Oct 2008 15:28:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 2008-10-14 23:57, Mikkel Hogh wrote:\n\n> one is the dreaded \"SELECT COUNT(pid) FROM \n> url_alias\" which takes PostgreSQL a whopping 70.65ms out of the \n> 115.74ms total for 87 queries.\n\nThis is stupid.\n\nThe Drupal code looks like this:\n\n// Use $count to avoid looking up paths in subsequent calls\n// if there simply are no aliases\nif (!isset($count)) {\n $count = db_result(db_query('SELECT COUNT(pid) FROM {url_alias}'));\n}\n/* ... */\nif ($count > 0 /* */) {\n /* one simple query */\n}\n\n\nIt is doing count(*) type query (which requires a full table scan in\nPostgres) to avoid one simple, indexable query, which is also often\ncached. It has to be slower in any database, but it is much, much slower\nin Postgres.\n\nTry attached patch for drupal-5.11, and rerun your benchmarks.\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh", "msg_date": "Thu, 16 Oct 2008 09:34:44 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "It's not only to avoid one query, but to avoid one query every time \ndrupal_lookup_path() is called (which is every time the system builds \na link, which can be dozens of time on a page).\n\nSo, I think it's probably a worthwhile tradeoff on MyISAM, because \nsuch queries are fast there, and you potentially save a bunch of \nqueries, if you're not using URL aliases.\n\nIs there a better way to check if a table contains anything in \nPostgreSQL? 
Perhaps just selecting one row?\n--\nKind regards,\n\nMikkel Hřgh <[email protected]>\n\nOn 16/10/2008, at 09.34, Tomasz Ostrowski wrote:\n\n> On 2008-10-14 23:57, Mikkel Hogh wrote:\n>\n>> one is the dreaded \"SELECT COUNT(pid) FROM\n>> url_alias\" which takes PostgreSQL a whopping 70.65ms out of the\n>> 115.74ms total for 87 queries.\n>\n> This is stupid.\n>\n> The Drupal code looks like this:\n>\n> // Use $count to avoid looking up paths in subsequent calls\n> // if there simply are no aliases\n> if (!isset($count)) {\n> $count = db_result(db_query('SELECT COUNT(pid) FROM {url_alias}'));\n> }\n> /* ... */\n> if ($count > 0 /* */) {\n> /* one simple query */\n> }\n>\n>\n> It is doing count(*) type query (which requires a full table scan in\n> Postgres) to avoid one simple, indexable query, which is also often\n> cached. It has to be slower in any database, but it is much, much \n> slower\n> in Postgres.\n>\n> Try attached patch for drupal-5.11, and rerun your benchmarks.\n>\n> Regards\n> Tometzky\n> -- \n> ...although Eating Honey was a very good thing to do, there was a\n> moment just before you began to eat it which was better than when you\n> were...\n> Winnie the Pooh\n> diff -urNP drupal-5.11.orig/includes/path.inc drupal-5.11/includes/ \n> path.inc\n> --- drupal-5.11.orig/includes/path.inc\t2006-12-23 23:04:52.000000000 \n> +0100\n> +++ drupal-5.11/includes/path.inc\t2008-10-16 09:26:48.000000000 +0200\n> @@ -42,18 +42,12 @@\n> function drupal_lookup_path($action, $path = '') {\n> // $map keys are Drupal paths and the values are the corresponding \n> aliases\n> static $map = array(), $no_src = array();\n> - static $count;\n> -\n> - // Use $count to avoid looking up paths in subsequent calls if \n> there simply are no aliases\n> - if (!isset($count)) {\n> - $count = db_result(db_query('SELECT COUNT(pid) FROM \n> {url_alias}'));\n> - }\n>\n> if ($action == 'wipe') {\n> $map = array();\n> $no_src = array();\n> }\n> - elseif ($count > 0 && $path != '') {\n> + elseif ($path != '') {\n> if ($action == 'alias') {\n> if (isset($map[$path])) {\n> return $map[$path];\n>\n> -- \n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general", "msg_date": "Thu, 16 Oct 2008 10:34:07 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 2008-10-16 10:34, Mikkel Hogh wrote:\n\n> It's not only to avoid one query, but to avoid one query every time \n> drupal_lookup_path() is called (which is every time the system builds \n> a link, which can be dozens of time on a page).\n\nOh, $count is static. My bad. Using count for testing for empty table is\nstupid nonetheless.\n\nThere is an issue report with lengthy discussion on drupal.org:\nhttp://drupal.org/node/196862\nAnd a proposed patch:\nhttp://drupal.org/files/issues/drupal_lookup_path-5.x.patch.txt\nwhich uses \"limit 1\". This patch is not applied though. I don't know why.\n\nPlease retest with this patch. And keep the CC to the list in your messages.\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh\n", "msg_date": "Thu, 16 Oct 2008 10:54:47 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" 
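A short editorial aside for readers following the query-cost argument in the messages above, sketched in plain SQL. The table and column names (url_alias, pid) are taken from the Drupal schema quoted in this thread; the statements themselves are illustrative and are not lifted from any of the drupal.org patches.

-- Drupal 5's original test: an aggregate over the whole table.
-- PostgreSQL of this era has no index-only counts, so every row is
-- visited and the cost grows with the number of aliases.
SELECT COUNT(pid) FROM url_alias;

-- The alternative discussed here: stop after the first row found.
-- This is effectively constant time whether the table is empty or huge.
SELECT pid FROM url_alias LIMIT 1;

-- An equivalent existence test that returns a single boolean.
SELECT EXISTS (SELECT 1 FROM url_alias);

With either of the last two forms, the application checks whether any row came back at all instead of comparing a count against zero.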
}, { "msg_contents": "On Tue, 14 Oct 2008 18:44:56 +0200\nIvan Sergio Borgonovo <[email protected]> wrote:\n\n> BTW I hope someone may find good use of this:\n\n> 2xXeon HT CPU 3.20GHz (not dual core), 4Gb RAM, RAID 5 SCSI\n> * absolutely not tuned Apache\n> * absolutely not tuned Drupal with little content, some blocks and\n> some google adds\n> (just CSS aggregation and caching enabled)\n> * lightly tuned PostgreSQL 8.1\n> shared_buffers = 3500\n> work_mem = 32768\n> checkpoint_segments = 10\n> effective_cache_size = 15000\n> random_page_cost = 3\n> default_statistics_target = 30\n\ncomparison between 8.1 and 8.3, similar tuning, (D5)\n8.3 uses sockets, can't easily go back to check if 8.1 used tcp/ip.\nBTW I had to hack D5 core to use sockets. (commented out the host\nline in include/database.pgsql.inc).\n\n> siege -H \"Cookie: drupalsessid\" -c1 \"localhost/d1\"\n> -b -t30s\n\n> -c 1\n> Transactions: 485 hits\n> Availability: 100.00 %\n> Elapsed time: 29.95 secs\n> Data transferred: 5.33 MB\n> Response time: 0.06 secs\n> Transaction rate: 16.19 trans/sec\n> Throughput: 0.18 MB/sec\n> Concurrency: 1.00\n> Successful transactions: 485\n> Failed transactions: 0\n> Longest transaction: 0.13\n> Shortest transaction: 0.06\n\nTransactions: 842 hits\nAvailability: 100.00 %\nElapsed time: 29.87 secs\nData transferred: 9.26 MB\nResponse time: 0.04 secs\nTransaction rate: 28.19 trans/sec\nThroughput: 0.31 MB/sec\nConcurrency: 1.00\nSuccessful transactions: 842\nFailed transactions: 0\nLongest transaction: 0.09\nShortest transaction: 0.03\n\n> -c 5\n> Transactions: 1017 hits\n> Availability: 100.00 %\n> Elapsed time: 29.61 secs\n> Data transferred: 11.29 MB\n> Response time: 0.15 secs\n> Transaction rate: 34.35 trans/sec\n> Throughput: 0.38 MB/sec\n> Concurrency: 4.98\n> Successful transactions: 1017\n> Failed transactions: 0\n> Longest transaction: 0.24\n> Shortest transaction: 0.08\n\nTransactions: 1674 hits\nAvailability: 100.00 %\nElapsed time: 29.93 secs\nData transferred: 18.80 MB\nResponse time: 0.09 secs\nTransaction rate: 55.93 trans/sec\nThroughput: 0.63 MB/sec\nConcurrency: 4.98\nSuccessful transactions: 1674\nFailed transactions: 0\nLongest transaction: 0.20\nShortest transaction: 0.05\n\n> -c 20\n> Transactions: 999 hits\n> Availability: 100.00 %\n> Elapsed time: 30.11 secs\n> Data transferred: 11.08 MB\n> Response time: 0.60 secs\n> Transaction rate: 33.18 trans/sec\n> Throughput: 0.37 MB/sec\n> Concurrency: 19.75\n> Successful transactions: 999\n> Failed transactions: 0\n> Longest transaction: 1.21\n> Shortest transaction: 0.10\n\nTransactions: 1677 hits\nAvailability: 100.00 %\nElapsed time: 29.68 secs\nData transferred: 18.86 MB\nResponse time: 0.35 secs\nTransaction rate: 56.50 trans/sec\nThroughput: 0.64 MB/sec\nConcurrency: 19.89\nSuccessful transactions: 1677\nFailed transactions: 0\nLongest transaction: 0.74\nShortest transaction: 0.09\n\n> -c 100\n> Transactions: 1085 hits\n> Availability: 100.00 %\n> Elapsed time: 29.97 secs\n> Data transferred: 9.61 MB\n> Response time: 2.54 secs\n> Transaction rate: 36.20 trans/sec\n> Throughput: 0.32 MB/sec\n> Concurrency: 91.97\n> Successful transactions: 911\n> Failed transactions: 0\n> Longest transaction: 12.41\n> Shortest transaction: 0.07\n\nTransactions: 1651 hits\nAvailability: 100.00 %\nElapsed time: 29.78 secs\nData transferred: 17.41 MB\nResponse time: 1.73 secs\nTransaction rate: 55.44 trans/sec\nThroughput: 0.58 MB/sec\nConcurrency: 95.68\nSuccessful transactions: 1563\nFailed transactions: 0\nLongest transaction: 
3.97\nShortest transaction: 0.06\n\n> -c 200\n> Transactions: 1116 hits\n> Availability: 100.00 %\n> Elapsed time: 30.02 secs\n> Data transferred: 9.10 MB\n> Response time: 4.85 secs\n> Transaction rate: 37.18 trans/sec\n> Throughput: 0.30 MB/sec\n> Concurrency: 180.43\n> Successful transactions: 852\n> Failed transactions: 0\n> Longest transaction: 15.85\n> Shortest transaction: 0.25\n\nTransactions: 1689 hits\nAvailability: 100.00 %\nElapsed time: 30.00 secs\nData transferred: 16.52 MB\nResponse time: 3.30 secs\nTransaction rate: 56.30 trans/sec\nThroughput: 0.55 MB/sec\nConcurrency: 185.76\nSuccessful transactions: 1483\nFailed transactions: 0\nLongest transaction: 8.37\nShortest transaction: 0.08\n\nSorry to introduce so many factors to make a reasonable comparison.\nNow I'm having problems with 8.3 and ssl connections from outside...\n\nI wonder if pg is using ssl even on sockets.\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Thu, 16 Oct 2008 16:07:56 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "further tests with 8.3 was: Re: Drupal and PostgreSQL -\n\tperformance issues?" }, { "msg_contents": "Ivan Sergio Borgonovo <[email protected]> writes:\n> I wonder if pg is using ssl even on sockets.\n\nNo, it won't do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Oct 2008 10:29:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: further tests with 8.3 was: Re: Drupal and PostgreSQL -\n\tperformance issues?" }, { "msg_contents": "* Tomasz Ostrowski ([email protected]) wrote:\n> There is an issue report with lengthy discussion on drupal.org:\n> http://drupal.org/node/196862\n> And a proposed patch:\n> http://drupal.org/files/issues/drupal_lookup_path-5.x.patch.txt\n> which uses \"limit 1\". This patch is not applied though. I don't know why.\n\nI don't see 'limit 1' anywhere in that patch.. And you don't want to\nuse 'limit 1' *and* count(*), that doesn't do what you're expecting\n(since count(*) is an aggregate and limit 1 is applied after). You\nreally want to do something more like:\n\nselect true from tab1 limit 1;\n\nAnd then test if you got back any rows or not (which might be what\n*your* patch does, hadn't looked at it yet).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 16 Oct 2008 10:40:47 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Thu, Oct 16, 2008 at 4:40 PM, Stephen Frost <[email protected]> wrote:\n> * Tomasz Ostrowski ([email protected]) wrote:\n> I don't see 'limit 1' anywhere in that patch.. And you don't want to\n> use 'limit 1' *and* count(*), that doesn't do what you're expecting\n> (since count(*) is an aggregate and limit 1 is applied after). You\n> really want to do something more like:\n>\n> select true from tab1 limit 1;\n>\n> And then test if you got back any rows or not (which might be what\n> *your* patch does, hadn't looked at it yet).\n>\n> Thanks,\n>\n> Stephen\n\nSeems Tomasz linked to the wrong patch. The patch he meant was:\nhttp://drupal.org/files/issues/drupal_lookup_path-6.x.patch.txt\n\nOr better... 
he linked to the correct patch (for drupal 5.x) but the\nuser Earnie made a mistake or something in the 5.x patch, seeing the\ncomment[1] that the patches should be the same...\n\nAlso nice to see people \"benchmark\" differences by just executing a\nquery once[2][3]....\n\nWessel\n\n[1] http://drupal.org/node/196862#comment-648511\n[2] http://drupal.org/node/196862#comment-649791\n[3] http://drupal.org/node/196862#comment-649928\n", "msg_date": "Thu, 16 Oct 2008 17:09:44 +0200", "msg_from": "DelGurth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 2008-10-16 16:40, Stephen Frost wrote:\n\n>> There is an issue report with lengthy discussion on drupal.org:\n>> http://drupal.org/node/196862\n>> And a proposed patch:\n> \n> I don't see 'limit 1' anywhere in that patch..\n\nSorry - haven't checked it - I have checked only a 6.x version\nhttp://drupal.org/files/issues/drupal_lookup_path-6.x.patch.txt\nwhich is sane. This Earnie guy, the author, made a mistake in 5.x version.\n\nI've corrected 5.x based on 6.x. I've attached it. Maybe this time I'll \nget this right.\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh", "msg_date": "Thu, 16 Oct 2008 17:26:24 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "* DelGurth ([email protected]) wrote:\n> Seems Tomasz linked to the wrong patch. The patch he meant was:\n> http://drupal.org/files/issues/drupal_lookup_path-6.x.patch.txt\n\nThat's much better.\n\n> Also nice to see people \"benchmark\" differences by just executing a\n> query once[2][3]....\n\nThis thread is insane.. Is every change to drupal run through this kind\nof design-by-committee? And the paranoia of a cost difference of 0.09ms\nfor something which ends up getting cached for one particular storage\nengine under one particular database?!\n\nMakes me worried about drupal's future, heh.\n\n\tStephen", "msg_date": "Thu, 16 Oct 2008 11:26:47 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Thu, Oct 16, 2008 at 9:26 AM, Stephen Frost <[email protected]> wrote:\n> * DelGurth ([email protected]) wrote:\n>> Seems Tomasz linked to the wrong patch. The patch he meant was:\n>> http://drupal.org/files/issues/drupal_lookup_path-6.x.patch.txt\n>\n> That's much better.\n>\n>> Also nice to see people \"benchmark\" differences by just executing a\n>> query once[2][3]....\n>\n> This thread is insane.. Is every change to drupal run through this kind\n> of design-by-committee? And the paranoia of a cost difference of 0.09ms\n> for something which ends up getting cached for one particular storage\n> engine under one particular database?!\n>\n> Makes me worried about drupal's future, heh.\n\nIn all fairness, pgsql goes through the same kind of design by\ncommittee process. We just have a committee of very smart people\nhashing things out.\n", "msg_date": "Thu, 16 Oct 2008 10:27:16 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" 
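Another brief editorial sketch, to make Stephen Frost's caution above concrete. The url_alias table name comes from this thread; the statements are examples under that assumption, not the literal contents of the corrected drupal.org patch.

-- Does NOT avoid the scan: the aggregate collapses the result to one
-- row before LIMIT is applied, so LIMIT 1 changes nothing here.
SELECT COUNT(pid) FROM url_alias LIMIT 1;

-- The shape Frost suggests: return at most one row and let the caller
-- test whether any row was returned.
SELECT true FROM url_alias LIMIT 1;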
}, { "msg_contents": "On 16/10/2008, at 18.27, Scott Marlowe wrote:\n\n> On Thu, Oct 16, 2008 at 9:26 AM, Stephen Frost <[email protected]> \n> wrote:\n>> * DelGurth ([email protected]) wrote:\n>>> Seems Tomasz linked to the wrong patch. The patch he meant was:\n>>> http://drupal.org/files/issues/drupal_lookup_path-6.x.patch.txt\n>>\n>> That's much better.\n>>\n>>> Also nice to see people \"benchmark\" differences by just executing a\n>>> query once[2][3]....\n>>\n>> This thread is insane.. Is every change to drupal run through this \n>> kind\n>> of design-by-committee? And the paranoia of a cost difference of \n>> 0.09ms\n>> for something which ends up getting cached for one particular storage\n>> engine under one particular database?!\n>>\n>> Makes me worried about drupal's future, heh.\n>\n> In all fairness, pgsql goes through the same kind of design by\n> committee process. We just have a committee of very smart people\n> hashing things out.\n\nWell, I think it's a general weak spot of almost all CMS projects. \nThey are all run by programmers, not DBAs. In our case, it's PHP \npeople excel at, (sadly) not *SQL. That's a trend I've seen a lot with \nweb developers. They like to program, and the database is usually \nabstracted away as much as possible, to avoid having to write your own \nqueries.\n\nThat is, at least, the picture I'm seeing. PHP PDO, ActiveRecord, \nSQLAlchemy, Django DB, Hibernate. Not all of these are equally bad, \nbut the main reason these are so popular is that then you don't have \nto write SQL.\n\nSo, we have a lot of otherwise proficient programmers who couldn't SQL \ntheir way out a phone booth. Sigh.\n\nP.S.: Why are e-mails from this list not sent with a Reply-To: header \nof the lists e-mail-address?", "msg_date": "Thu, 16 Oct 2008 23:17:35 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 2008-10-16 23:17, Mikkel Høgh wrote:\n\n> P.S.: Why are e-mails from this list not sent with a Reply-To: header \n> of the lists e-mail-address?\n\nBecause it is dangerous - too easy to send to the list, when you really\nmean to send to one. Most e-mail programs have two buttons for replying:\nordinary \"Reply\" and \"Reply to all\". You're supposed to use \"Reply to\nall\" if you want to reply to the list.\n\nYou are using BCC (blind carbon copy) instead of CC (carbon copy) to the\nlist address in your e-mails and you break this functionality.\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh\n", "msg_date": "Fri, 17 Oct 2008 11:37:49 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 17/10/2008, at 11.37, Tomasz Ostrowski wrote:\n\n> On 2008-10-16 23:17, Mikkel H�gh wrote:\n>\n>> P.S.: Why are e-mails from this list not sent with a Reply-To: header\n>> of the lists e-mail-address?\n>\n> Because it is dangerous - too easy to send to the list, when you \n> really\n> mean to send to one. Most e-mail programs have two buttons for \n> replying:\n> ordinary \"Reply\" and \"Reply to all\". 
You're supposed to use \"Reply to\n> all\" if you want to reply to the list.\n\nWell, I think the most common use case for a mailing list is to reply \nback to the list, isn't that the whole point?\n\nPersonally I find it annoying that I get two copies of each reply to \none of my posts, one that is filtered into the mailinglist folder \nbecause it has the correct X-Mailing-List header and the other just \nsits there in my inbox, wasting both bandwidth and disk space in the \nprocess.\n\nBesides, the if the Reply-To thing is so dangerous, why do most other \nmailing lists do it?", "msg_date": "Fri, 17 Oct 2008 12:13:00 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 2008-10-17 12:13, Mikkel Høgh wrote:\n\n>> You're supposed to use \"Reply to all\" if you want to reply to the\n>> list.\n> \n> Well, I think the most common use case for a mailing list is to reply \n> back to the list, isn't that the whole point?\n\nIt is a point of having \"Reply to all\" button. With \"reply-to\" is it\nhard to reply to one person, easy to reply to the list. Without it it is\nboth easy.\n\n> Personally I find it annoying that I get two copies of each reply to \n> one of my posts, one that is filtered into the mailinglist folder \n> because it has the correct X-Mailing-List header and the other just \n> sits there in my inbox, wasting both bandwidth and disk space in the \n> process.\n\nSo set reply-to in messages you send by yourself - it will be honored.\n\n> Besides, the if the Reply-To thing is so dangerous, why do most other \n> mailing lists do it?\n\nfor i in Windows MySQL IE Sweets Alcohol etc.; do\n\techo \"If using $i is so dangerous, why do most do it?\"\ndone\n\nRegards\nTometzky\n-- \n...although Eating Honey was a very good thing to do, there was a\nmoment just before you began to eat it which was better than when you\nwere...\n Winnie the Pooh\n", "msg_date": "Fri, 17 Oct 2008 12:24:46 +0200", "msg_from": "Tomasz Ostrowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On 17/10/2008, at 12.24, Tomasz Ostrowski wrote:\n\n> On 2008-10-17 12:13, Mikkel H�gh wrote:\n>\n>>> You're supposed to use \"Reply to all\" if you want to reply to the\n>>> list.\n>>\n>> Well, I think the most common use case for a mailing list is to reply\n>> back to the list, isn't that the whole point?\n>\n> It is a point of having \"Reply to all\" button. With \"reply-to\" is it\n> hard to reply to one person, easy to reply to the list. Without it \n> it is\n> both easy.\n\nBut again, how often do you want to give a personal reply only? That \nis a valid use-case, but I'd say amongst the hundreds of mailing-list \nreplies I've written over the years, only two or three were not sent \nback to the mailing list.\n\n>\n>> Personally I find it annoying that I get two copies of each reply to\n>> one of my posts, one that is filtered into the mailinglist folder\n>> because it has the correct X-Mailing-List header and the other just\n>> sits there in my inbox, wasting both bandwidth and disk space in the\n>> process.\n>\n> So set reply-to in messages you send by yourself - it will be honored.\n\nYay, even more manual labour instead of having the computers doing the \nwork for us. 
What's your next suggestion, go back to pen and paper?\n\n>\n>\n>> Besides, the if the Reply-To thing is so dangerous, why do most other\n>> mailing lists do it?\n>\n> for i in Windows MySQL IE Sweets Alcohol etc.; do\n> \techo \"If using $i is so dangerous, why do most do it?\"\n> done\n\n\nWell, my point is that Reply-To: is only dangerous if you're not \ncareful. Not so with the other examples you mention :)\n\nIf you're writing something important, private and/or confidential, \ndon't you always check before you send? You'd better, because a small \ntypo when you selected the recipient might mean that you're sending \nlove-letters to your boss or something like that :)", "msg_date": "Fri, 17 Oct 2008 13:02:57 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Annoying Reply-To" }, { "msg_contents": "In response to \"Mikkel Høgh\" <[email protected]>:\n\n> \n> On 17/10/2008, at 12.24, Tomasz Ostrowski wrote:\n> \n> > On 2008-10-17 12:13, Mikkel Høgh wrote:\n> >\n> >>> You're supposed to use \"Reply to all\" if you want to reply to the\n> >>> list.\n> >>\n> >> Well, I think the most common use case for a mailing list is to reply\n> >> back to the list, isn't that the whole point?\n> >\n> > It is a point of having \"Reply to all\" button. With \"reply-to\" is it\n> > hard to reply to one person, easy to reply to the list. Without it \n> > it is\n> > both easy.\n> \n> But again, how often do you want to give a personal reply only? That \n> is a valid use-case, but I'd say amongst the hundreds of mailing-list \n> replies I've written over the years, only two or three were not sent \n> back to the mailing list.\n\nYou're forgetting the cost of a mistake in that case.\n\nAs it stands, if you hit reply when you meant reply-to, oops, resend.\n\nIf it's changed and you hit reply when you want to send a private message\nto the poster, you just broadcast your private message to the world.\n\n> Yay, even more manual labour instead of having the computers doing the \n> work for us. What's your next suggestion, go back to pen and paper?\n\nDon't be an asshole. There's no need for that kind of cynicism.\n\n> Well, my point is that Reply-To: is only dangerous if you're not \n> careful. Not so with the other examples you mention :)\n\nBut as it is now, it's not dangerous at all.\n\n> If you're writing something important, private and/or confidential, \n> don't you always check before you send? You'd better, because a small \n> typo when you selected the recipient might mean that you're sending \n> love-letters to your boss or something like that :)\n\nI'd rather know that the computer had my back in the case of an error,\ninstead of it helping me mindlessly even when I'm doing the wrong thing.\nTo me, that's also the difference between MySQL and PostgreSQL.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 17 Oct 2008 07:20:17 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On 17/10/2008, at 13.20, Bill Moran wrote:\n\n> In response to \"Mikkel H�gh\" <[email protected]>:\n>\n>>\n>> On 17/10/2008, at 12.24, Tomasz Ostrowski wrote:\n>>\n>> But again, how often do you want to give a personal reply only? 
That\n>> is a valid use-case, but I'd say amongst the hundreds of mailing-list\n>> replies I've written over the years, only two or three were not sent\n>> back to the mailing list.\n>\n> You're forgetting the cost of a mistake in that case.\n>\n> As it stands, if you hit reply when you meant reply-to, oops, resend.\n>\n> If it's changed and you hit reply when you want to send a private \n> message\n> to the poster, you just broadcast your private message to the world.\n\nAnd again, how often does this happen? How often do people write \nreally sensitive e-mails based on messages on pgsql-general.\n\nBecause if we wanted to be really safe, we should not even send the \nmailing-list address along, so even if someone used the reply-all \nbutton, he could not accidentally post his private e-mail on the web.\n\nIn true McDonalds-style, we could change the mailing-list-address to \nbe pgsql-general-if-you-send-to-this-your-private-information-will-be-posted-on-the-internet@postgresql.org\n\nHow far are you willing to go to protect people against themselves?\n\n>> Yay, even more manual labour instead of having the computers doing \n>> the\n>> work for us. What's your next suggestion, go back to pen and paper?\n>\n> Don't be an asshole. There's no need for that kind of cynicism.\n\nIn my opinion, asking for sane defaults is neither cynicism or being \nan asshole.\nI may have put it on an edge, but having to manually add a Reply-To \nheader to each message I send to pgsql-general is not my idea of fun.\n\n>\n>\n>> Well, my point is that Reply-To: is only dangerous if you're not\n>> careful. Not so with the other examples you mention :)\n>\n> But as it is now, it's not dangerous at all.\n\nNo, just annoying and a waste of time, energy, bandwidth and \nultimately, money.\n\n>\n>\n>> If you're writing something important, private and/or confidential,\n>> don't you always check before you send? You'd better, because a small\n>> typo when you selected the recipient might mean that you're sending\n>> love-letters to your boss or something like that :)\n>\n> I'd rather know that the computer had my back in the case of an error,\n> instead of it helping me mindlessly even when I'm doing the wrong \n> thing.\n> To me, that's also the difference between MySQL and PostgreSQL.\n\n\nWell, in the above case, the computer doesn't have your back. If you \ntold it to send the e-mail to Marty Boss instead of Maggie Blond, \nthat's exactly what it'll do.\n\nCurrently, when I tell my computer to reply to a message on the pgsql \nmailing list, it'll do something else, because who ever set it up \ndecided to cater to the 0.1% edge-case instead of just having the \ndefault action be what it should be 99.5% of the time.\n\nYou may not care about usability or user experience, but remember that \nwhat seems to be correct from a technical perpective is not always the \n\"right\" thing to do.", "msg_date": "Fri, 17 Oct 2008 13:48:11 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "In response to \"Mikkel Høgh\" <[email protected]>:\n> \n> On 17/10/2008, at 13.20, Bill Moran wrote:\n> \n> > In response to \"Mikkel Høgh\" <[email protected]>:\n> >\n> >> On 17/10/2008, at 12.24, Tomasz Ostrowski wrote:\n> >>\n> >> But again, how often do you want to give a personal reply only? 
That\n> >> is a valid use-case, but I'd say amongst the hundreds of mailing-list\n> >> replies I've written over the years, only two or three were not sent\n> >> back to the mailing list.\n> >\n> > You're forgetting the cost of a mistake in that case.\n> >\n> > As it stands, if you hit reply when you meant reply-to, oops, resend.\n> >\n> > If it's changed and you hit reply when you want to send a private \n> > message\n> > to the poster, you just broadcast your private message to the world.\n> \n> And again, how often does this happen? How often do people write \n> really sensitive e-mails based on messages on pgsql-general.\n\nIt happens very infrequently. You're ignoring me and constantly trying\nto refocus away from my real argument. The frequency is not the\njustification, it's the severity that justifies it.\n\nIf we save one overworked DBA per year from endangering their job online,\nI say it's worth it.\n\n> How far are you willing to go to protect people against themselves?\n\nPersonally, I'm willing to go so far as to expect the person to think\nabout whether to hit reply or reply-to before sending the mail. I don't\nsee that as unreasonable.\n\n> but having to manually add a Reply-To \n> header to each message I send to pgsql-general is not my idea of fun.\n\nI was not aware that Apple Mail was such a primitive email client. You\nshould consider switching to something that has a reply-to button. I'm\nvery disappointed in Apple.\n\n> You may not care about usability or user experience, but remember that \n> what seems to be correct from a technical perpective is not always the \n> \"right\" thing to do.\n\nAs I said, consider getting a real email client if this is wasting so\nmuch of your time. It doesn't cause me any undue effort.\n\nOr, you could just be lonely. I think you've spent enough man hours\ncomplaining about this to manually work around the problem for several\nyears, which blows your \"this wastes too much of my valuable time\"\nargument out of the water.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 17 Oct 2008 08:01:03 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]> writes:\n> Yay, even more manual labour instead of having the computers doing the \n> work for us. What's your next suggestion, go back to pen and paper?\n\nPlease stop wasting everyone's time with this. The list policy has\nbeen debated adequately in the past. You are not going to change it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Oct 2008 08:06:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To " }, { "msg_contents": "\nOn Oct 17, 2008, at 8:01 , Bill Moran wrote:\n\n> In response to \"Mikkel H�gh\" <[email protected]>:\n>>\n>> but having to manually add a Reply-To\n>> header to each message I send to pgsql-general is not my idea of fun.\n>\n> I was not aware that Apple Mail was such a primitive email client. \n> You\n> should consider switching to something that has a reply-to button. \n> I'm\n> very disappointed in Apple.\n\nIt's not. 
It has Reply and Reply All buttons for the clicky-clicky \nfolk, and keyboard shortcuts for each as well.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Fri, 17 Oct 2008 08:10:16 -0400", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On 17/10/2008, at 14.01, Bill Moran wrote:\n\n> In response to \"Mikkel H�gh\" <[email protected]>:\n>>\n>> On 17/10/2008, at 13.20, Bill Moran wrote:\n>>\n>>> In response to \"Mikkel H�gh\" <[email protected]>:\n>>>\n>>>> On 17/10/2008, at 12.24, Tomasz Ostrowski wrote:\n>>>>\n>>>> But again, how often do you want to give a personal reply only? \n>>>> That\n>>>> is a valid use-case, but I'd say amongst the hundreds of mailing- \n>>>> list\n>>>> replies I've written over the years, only two or three were not \n>>>> sent\n>>>> back to the mailing list.\n>>>\n>>> You're forgetting the cost of a mistake in that case.\n>>>\n>>> As it stands, if you hit reply when you meant reply-to, oops, \n>>> resend.\n>>>\n>>> If it's changed and you hit reply when you want to send a private\n>>> message\n>>> to the poster, you just broadcast your private message to the world.\n>>\n>> And again, how often does this happen? How often do people write\n>> really sensitive e-mails based on messages on pgsql-general.\n>\n> It happens very infrequently. You're ignoring me and constantly \n> trying\n> to refocus away from my real argument. The frequency is not the\n> justification, it's the severity that justifies it.\n>\n> If we save one overworked DBA per year from endangering their job \n> online,\n> I say it's worth it.\n>\n>> How far are you willing to go to protect people against themselves?\n>\n> Personally, I'm willing to go so far as to expect the person to think\n> about whether to hit reply or reply-to before sending the mail. I \n> don't\n> see that as unreasonable.\n\nWell, neither is checking whether you're sending it the right place \nunreasonable. The difference here being if you have to do it each time \nyou're posting to the mailing list or that once in a blue moon where \nthere's something that should remain private.\nSo I respect\n\n>\n>\n>> but having to manually add a Reply-To\n>> header to each message I send to pgsql-general is not my idea of fun.\n>\n> I was not aware that Apple Mail was such a primitive email client. \n> You\n> should consider switching to something that has a reply-to button. \n> I'm\n> very disappointed in Apple.\n\nYou should read the original post. Thomas suggested \"So set reply-to \nin messages you send by yourself - it will be honored.\". That's what \nI'm talking about here, not \"Reply All\"-buttons (which it has, with \nreasonable keyboard shortcuts, even). I can also add a \"Reply-To:\" \nfield on my composer window and even have it pre-filled for all my \noutgoing email, but the features of my MUA are not point here :)\n\n>\n>\n>> You may not care about usability or user experience, but remember \n>> that\n>> what seems to be correct from a technical perpective is not always \n>> the\n>> \"right\" thing to do.\n>\n> As I said, consider getting a real email client if this is wasting so\n> much of your time. 
It doesn't cause me any undue effort.\n\"It's good for me, so it's good for everyone\"\n\n> Or, you could just be lonely.\n\nI resent that you're trying to make this a personal thing.\n\n> I think you've spent enough man hours\n> complaining about this to manually work around the problem for several\n> years, which blows your \"this wastes too much of my valuable time\"\n> argument out of the water.\n\n\nAnd it's not as much time as it is energy. I've only used this mailing- \nlist for a few days, and I've already had to manually resend mails to \nthe mailing-list several times. If I manage to save myself from that \nonly 5 times, i figure talking this debate has been worth it :)", "msg_date": "Fri, 17 Oct 2008 14:25:00 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On 17/10/2008, at 14.06, Tom Lane wrote:\n\n> =?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]> writes:\n>> Yay, even more manual labour instead of having the computers doing \n>> the\n>> work for us. What's your next suggestion, go back to pen and paper?\n>\n> Please stop wasting everyone's time with this. The list policy has\n> been debated adequately in the past. You are not going to change it.\n\n It is probably going to come up again every once in a while, as long \nas it's not changed, but I will respect your wishes.\n\n--\nKind regards,\n\nMikkel H�gh <[email protected]>", "msg_date": "Fri, 17 Oct 2008 14:33:04 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying Reply-To " }, { "msg_contents": "On Fri, Oct 17, 2008 at 01:02:57PM +0200, Mikkel H�gh wrote:\n> Yay, even more manual labour instead of having the computers doing the work \n> for us. What's your next suggestion, go back to pen and paper?\n\nMy suggestion would be to use a mail user agent that knows how to read\nthe list headers, which were standardized many years ago. Then you\n\"reply to list\". Mutt has done this for at least a few years now. I\ndon't know about other MUAs.\n\nA\n\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Fri, 17 Oct 2008 08:42:46 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "In response to \"Mikkel Høgh\" <[email protected]>:\n> \n> On 17/10/2008, at 14.01, Bill Moran wrote:\n> \n> > Or, you could just be lonely.\n> \n> I resent that you're trying to make this a personal thing.\n\nI was going to answer the rest of this email, then I realized that the\nreal problem was right here, and discussing anything else was dancing\naround the issue and wasting time.\n\nYou can resent it or not, but this _is_ a personal thing. It's personal\nbecause you are the only one complaining about it. Despite the large\nnumber of people on this list, I don't see anyone jumping in to defend\nyou.\n\nI'm not saying your problems aren't real, I'm just saying you're apparently\nthe only person in this community that has enough trouble with them to\ntake the time to start a discussion. 
To that degree, the problem is\nvery personal to you.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 17 Oct 2008 08:49:17 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "* Bill Moran ([email protected]) wrote:\n> You can resent it or not, but this _is_ a personal thing. It's personal\n> because you are the only one complaining about it. Despite the large\n> number of people on this list, I don't see anyone jumping in to defend\n> you.\n\nUgh. No one else is jumping in simply because we've already been\nthrough all of this and it hasn't and isn't going to change. The PG\nlists are the odd ones out here, not the other way around, I assure you.\nOne might compare it to our continued use of CVS. It's wrong and\nbackwards and all that, but most of the PG community is used to it and\nchanging is a pain. shrug.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Fri, 17 Oct 2008 08:56:34 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, 17 Oct 2008, Bill Moran wrote:\n\n> You can resent it or not, but this _is_ a personal thing. It's personal\n> because you are the only one complaining about it. Despite the large\n> number of people on this list, I don't see anyone jumping in to defend\n> you.\n\nMikkel is right, every other well-organized mailing list I've ever been on \nhandles things the sensible way he suggests, but everybody on his side \nwho's been on lists here for a while already knows this issue is a dead \nhorse. Since I use the most advanced e-mail client on the market I just \nwork around that the settings here are weird, it does annoy me a bit \nanytime I stop to think about it though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 17 Oct 2008 09:27:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "\n>> I resent that you're trying to make this a personal thing.\n>> \n>\n> I was going to answer the rest of this email, then I realized that the\n> real problem was right here, and discussing anything else was dancing\n> around the issue and wasting time.\n>\n> You can resent it or not, but this _is_ a personal thing. It's personal\n> because you are the only one complaining about it. Despite the large\n> number of people on this list, I don't see anyone jumping in to defend\n> you.\n>\n> I'm not saying your problems aren't real, I'm just saying you're apparently\n> the only person in this community that has enough trouble with them to\n> take the time to start a discussion. To that degree, the problem is\n> very personal to you.\n>\n> \n\nI was going to stay out of this but I'll jump in and defend him. The \npeople on this list are so pedantic, so sure that their way is the only \nway that they absolutely rain nuclear fire down on anyone who dares to \ndisagree. 
And you wonder why no one sprang to his defense???\n\nAnd, I do agree with him on this issue.\n", "msg_date": "Fri, 17 Oct 2008 09:39:18 -0400", "msg_from": "Collin Kidder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, Oct 17, 2008 at 09:27:46AM -0400, Greg Smith wrote:\n\n> Mikkel is right, every other well-organized mailing list I've ever been on \n> handles things the sensible way he suggests, but everybody on his side \n\nThey may be well-organized, but they're doing bad things to the mail\nheaders. RFC 5322 (which just obsoleted 2822) says this:\n\n When the \"Reply-To:\" field is present, it\n indicates the address(es) to which the author of the message suggests\n that replies be sent.\n\nThe mailing list is not the author of the message. Therefore it\nshould not alter that header.\n\nMoreover, since you are allowed at most one Reply-To header, if the\noriginal author needs individual responses to go to some other\naddress, then that Reply-To: header will be lost if the list munges\nthem.\n\nThere is therefore a mail standards reason not to munge the headers,\nand it rests in the rules about origin fields and in the potential for\nlost functionality. Given the project's goal of SQL conformance, why\nwould we blow off SMTP standards?\n\n(Anyway, I agree with Tom, so I'm saying nothing more in this thread.)\n\nA\n \n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Fri, 17 Oct 2008 09:43:35 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, Oct 17, 2008 at 7:39 AM, Collin Kidder <[email protected]> wrote:\n> I was going to stay out of this but I'll jump in and defend him. The people\n> on this list are so pedantic, so sure that their way is the only way that\n> they absolutely rain nuclear fire down on anyone who dares to disagree. And\n> you wonder why no one sprang to his defense???\n>\n> And, I do agree with him on this issue.\n\nI prefer the list the way it is. And so do a very large, very silent\nmajority of users.\n", "msg_date": "Fri, 17 Oct 2008 08:12:00 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On 17/10/2008, at 16.12, Scott Marlowe wrote:\n\n> On Fri, Oct 17, 2008 at 7:39 AM, Collin Kidder <[email protected]> \n> wrote:\n>> I was going to stay out of this but I'll jump in and defend him. \n>> The people\n>> on this list are so pedantic, so sure that their way is the only \n>> way that\n>> they absolutely rain nuclear fire down on anyone who dares to \n>> disagree. And\n>> you wonder why no one sprang to his defense???\n>>\n>> And, I do agree with him on this issue.\n>\n> I prefer the list the way it is. And so do a very large, very silent\n> majority of users.\n\nAnd no one messes with the silent majority and their spokesmen ;)", "msg_date": "Fri, 17 Oct 2008 16:15:20 +0200", "msg_from": "=?ISO-8859-1?Q?Mikkel_H=F8gh?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, Oct 17, 2008 at 8:15 AM, Mikkel Høgh <[email protected]> wrote:\n> On 17/10/2008, at 16.12, Scott Marlowe wrote:\n>\n>> On Fri, Oct 17, 2008 at 7:39 AM, Collin Kidder <[email protected]> wrote:\n>>>\n>>> I was going to stay out of this but I'll jump in and defend him. 
The\n>>> people\n>>> on this list are so pedantic, so sure that their way is the only way that\n>>> they absolutely rain nuclear fire down on anyone who dares to disagree.\n>>> And\n>>> you wonder why no one sprang to his defense???\n>>>\n>>> And, I do agree with him on this issue.\n>>\n>> I prefer the list the way it is. And so do a very large, very silent\n>> majority of users.\n>\n> And no one messes with the silent majority and their spokesmen ;)\n\nNo, no one makes idiotic arguments that go against the RFCs for how to\nrun a mailing list and then makes the entire pgsql community change\nfrom the rather sensible way it's done to a non-sensical way to\naccomodate broken email clients. But I thought that was rather\nobvious.\n", "msg_date": "Fri, 17 Oct 2008 08:23:21 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "free unfettered and open discussion without interference from ANY entity is a requirement of a democracy\nthe REAL question is ..is this a democracy???\n\nThanks Scott\nMartin \nPlease vote November 4\n______________________________________________ \nDisclaimer and confidentiality note \nEverything in this e-mail and any attachments relates to the official business of Sender. This transmission is of a confidential nature and Sender does not endorse distribution to any party other than intended recipient. Sender does not necessarily endorse content contained within this transmission. \n\n\n> CC: [email protected]\n> From: [email protected]\n> To: [email protected]\n> Subject: Re: [GENERAL] Annoying Reply-To\n> Date: Fri, 17 Oct 2008 16:15:20 +0200\n> \n> On 17/10/2008, at 16.12, Scott Marlowe wrote:\n> \n> > On Fri, Oct 17, 2008 at 7:39 AM, Collin Kidder <[email protected]> \n> > wrote:\n> >> I was going to stay out of this but I'll jump in and defend him. \n> >> The people\n> >> on this list are so pedantic, so sure that their way is the only \n> >> way that\n> >> they absolutely rain nuclear fire down on anyone who dares to \n> >> disagree. And\n> >> you wonder why no one sprang to his defense???\n> >>\n> >> And, I do agree with him on this issue.\n> >\n> > I prefer the list the way it is. And so do a very large, very silent\n> > majority of users.\n> \n> And no one messes with the silent majority and their spokesmen ;)
", "msg_date": "Fri, 17 Oct 2008 10:24:44 -0400", "msg_from": "Martin Gainty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Sigh...\nwasting another chunk of bandwidth with this bike shed discussion....\n\nGranted, replying here is more annoying and less convenient compared to other lists -\nas long as your MUA still does not provide decent support for mailing lists.\n\nDating back to 1998 is RFC 2369, which defined additional headers for mailing lists to indicate important information (e.g. how to post\nmessages). Thus, since then there is no excuse for hijacking Reply-To headers for use of simplifying mailing list replies.\n\nEven if for the last ten years most MUAs did not care adding support, there is no real reason for blaming mailing list maintainers\nthat follow current standards for inconveniences caused by dumb MUAs. Especially if there is a reasonable workaround (using Reply All\n- not ideal as any workaround but manageable).\n\nThus, *please* complain to the maintainer of your MUA to get the annoyance alleviated - or change your MUA.\n\nAs long as there are \"friendly\" list maintainers that abuse mail headers for overcoming deficiencies of some MUAs,\nthere will be no change for the better with MUAs.\n\nSorry I'm a bit fussy here,\nbut nowadays there is so much effort wasted with solving the wrong problems in making bad things bearable instead of fixing the\nunderlying reasons in the first place....\n\nRainer\n\nCollin Kidder wrote\n> \n>>> I resent that you're trying to make this a personal thing.\n>>> \n>>\n>> I was going to answer the rest of this email, then I realized that the\n>> real problem was right here, and discussing anything else was dancing\n>> around the issue and wasting time.\n>>\n>> You can resent it or not, but this _is_ a personal thing. It's personal\n>> because you are the only one complaining about it. Despite the large\n>> number of people on this list, I don't see anyone jumping in to defend\n>> you.\n>>\n>> I'm not saying your problems aren't real, I'm just saying you're\n>> apparently\n>> the only person in this community that has enough trouble with them to\n>> take the time to start a discussion. To that degree, the problem is\n>> very personal to you.\n>>\n>> \n> \n> I was going to stay out of this but I'll jump in and defend him. The\n> people on this list are so pedantic, so sure that their way is the only\n> way that they absolutely rain nuclear fire down on anyone who dares to\n> disagree. 
And you wonder why no one sprang to his defense???\n> \n> And, I do agree with him on this issue.\n> \n", "msg_date": "Fri, 17 Oct 2008 16:26:59 +0200", "msg_from": "Rainer Pruy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Martin Gainty escribi�:\n> \n> free unfettered and open discussion without interference from ANY\n> entity is a requirement of a democracy the REAL question is ..is this\n> a democracy???\n\n_Of course_ it isn't ... (thankfully!)\n\n-- \nAlvaro Herrera\n", "msg_date": "Fri, 17 Oct 2008 11:28:05 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, Oct 17, 2008 at 8:24 AM, Martin Gainty <[email protected]> wrote:\n> free unfettered and open discussion without interference from ANY entity is\n> a requirement of a democracy\n> the REAL question is ..is this a democracy???\n\nNo, it's a well mostly well behaved meritocracy. And I prefer that.\nI believe that free and unfettered discussion is important to many\nquality forms of governance, not just democracies.\n\nI really do prefer the way this list works because when I hit reply\nall to a discussion with \"Bob Smith and Postgresql-general\" I know\nthat Bob gets a direct answer from me, now, when he needs it at 2am\nwhen his servers are puking their data out their gigE ports, and the\nrest of the list gets it whenever the mailing list server can get\naround to it.\n", "msg_date": "Fri, 17 Oct 2008 08:28:57 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Scott Marlowe escribi�:\n\n> I really do prefer the way this list works because when I hit reply\n> all to a discussion with \"Bob Smith and Postgresql-general\" I know\n> that Bob gets a direct answer from me, now, when he needs it at 2am\n> when his servers are puking their data out their gigE ports, and the\n> rest of the list gets it whenever the mailing list server can get\n> around to it.\n\nRight. It also works in the following situations:\n\n1. Bob Smith posted without being subscribed; when the moderator\napproves and somebody replies to Bob, whenever Bob responds the person\nthat he is responding to will receive his response right away without\nhaving to wait for the moderator.\n\n1a. Note that this means that crossposting to lists from other projects\nworks too (for example when there are discussions between here and\nFreeBSD)\n\n2. In the example (1) above, Bob is sure to receive the response,\nwhereas if the list was the Reply-To-set kind, Bob would never get it;\nhe'd have to troll the archives\n\n3. Discussions continue to work even when the list servers die (happens,\neven if rare) or they are very slow (relatively frequent)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 17 Oct 2008 11:41:22 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, 17 Oct 2008 08:56:34 -0400\nStephen Frost <[email protected]> wrote:\n\n> * Bill Moran ([email protected]) wrote:\n> > You can resent it or not, but this _is_ a personal thing. It's\n> > personal because you are the only one complaining about it.\n> > Despite the large number of people on this list, I don't see\n> > anyone jumping in to defend you.\n> \n> Ugh. 
No one else is jumping in simply because we've already been\n> through all of this and it hasn't and isn't going to change. The\n> PG lists are the odd ones out here, not the other way around, I\n> assure you. One might compare it to our continued use of CVS.\n> It's wrong and backwards and all that, but most of the PG\n> community is used to it and changing is a pain. shrug.\n\nI'd say because postgresql list has been used to it by a longer time\nthan most of the new comers doing the other way around did. But it\nseems that the new comers are the most vocal.\nMaybe because what's in the email headers have been abstracted to\nthem long ago they never had the need to look what they are for and\nuse them properly getting all the functionality their way has and\nsome additional bits too.\n\nIt is surprising how many people think to have enough knowledge in\nemail distribution systems to discuss an RFC that has been\nalready rewritten 1 time.\nIt would be nice if all those people spent their time rewriting in a\ncoherent way that RFC so that Reply-To works as they think is best\nfor the overall Internet without breaking any already existent\nfunctionality before challenging this list consolidated habits.\n\nSettings of ml are generally a mirror of their community.\n\nDecent email clients have a switch for reckless people. That's\nfreedom.\nMine is called: \"Reply button invokes mailing list reply\".\nDecent lists generally have quite helpful headers for filtering and\nchoosing to reply to the right address.\n\nIf you're for freedom... then let the recipients choose. Not the\nlist. If people insist in badly configuring/choosing their email\nclients how far are you willing to go to protect them against\nthemselves and imposing a toll on the rest of the others?\n\nI think the overall amount of the time I spent choosing the right\nbutton in my life is lower than the time a single person has spent\nwriting 1 post on this topic and much much much lower than the time\nthey will have to spend in excuses (if anything worse) the first\ntime they will send to the wrong address.\nBut still it is much more than the time the people are complaining\nhave spent reading RFC 2822 and considering its implications.\n\nBut maybe this will give everyone a chance to consider all the small\ncoherent technical details that good engineers placed deciding about\nemail headers and email clients and reconsider what RFC are there\nfor.\nIf headers are properly set the action taken once you press your\nchosen button is unambiguous and you conserve as much information as\npossible.\n\nIf you think the majority is right since most of the people that\narrived to the Internet late got used to mangled Reply-To I think\nthat mistakes are educating. But till people will ignore what's\navailable and why I bet they will just learn to wait 20 minutes\nbefore sending any email after their first expensive error, rather\nthan considering other ways to operate.\n\nBTW consider this even from a HCI point of view... 
you still need 2\nfunctions: one to send to \"list\" one to \"author\".\nSaying you just want one button since 99% of times you'll reply to\nthe list is making the wrong expensive choice even more probable.\nOnce you've \"2 buttons\" why should you mangle the headers and give\nthem meaning they don't have?\nBecause most of the people aren't able to properly chose and\nconfigure their email client?\nWhat's wrong between making the difference in the header, in the\nclient and in your mind between Reply-to and List-Post?\nOr is it better that the client thinks that Reply-To is replying to\nthe Sender (that's not true), the mailing list thinks that the\nReply-To is the List-Post (that's not true) and you think that today\nis a good day to play lottery (In Italy it may be true)?\n\nIf you're among the reckless people you could get used to invoke the\n\"send to list\" button and let your client send it to the Reply-To in\ncase there is no List-Post... or whatever bad habit you enjoy more.\nBut I see no reason to harass an RFC.\n\nWhat about non standard web sites that have to cope with not\nstandard browsers and then being forced to adapt browsers that were\nalready standard to cope with non standard web sites?\nDoes it sound familiar?\n\nI beg everyone pardon, especially to Tom Lane whose replies always\nshine here, but I couldn't resist to reply to people thinking I'm not\n\"sensible\", I took it personally ;) Evidently I'm old enough to\nknow the existence of RFCs but not mature enough ;)\n\n\n-- \nIvan Sergio Borgonovo\nhttp://www.webthatworks.it\n\n", "msg_date": "Fri, 17 Oct 2008 17:19:31 +0200", "msg_from": "Ivan Sergio Borgonovo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "* Ivan Sergio Borgonovo ([email protected]) wrote:\n> I'd say because postgresql list has been used to it by a longer time\n> than most of the new comers doing the other way around did. But it\n> seems that the new comers are the most vocal.\n\nsigh. First people complain that poor Mikkel is the only one\ncomplaining, now people are bitching that us 'new comers' are the most\nvocal when we point out he's not alone. Yes, the PG lists have been\naround a long time. So have the Debian lists. In the end, it's not a\ncontest. This discussion obviously isn't going anywhere tho, and it's\nnot going to change the list policy, so let's just drop it, please. I'm\nsure we'll all have an opportunity to revisit it (again) in another 6\nmonths.\n\n\tStephen", "msg_date": "Fri, 17 Oct 2008 11:29:28 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "At 09:42 PM 10/14/2008, you wrote:\n>Mikkel Høgh wrote:\n>>On 14/10/2008, at 11.40, Ivan Sergio Borgonovo wrote:\n>\n>>That might be true, if the only demographic you \n>>are looking for are professional DBAs, but if \n>>you're looking to attract more developers, not \n>>having sensible defaults is not really a good thing.\n>>While I'll probably take the time to learn more \n>>about how to tune PostgreSQL, the common \n>>Drupal-developer developer will probably just \n>>say \"Ah, this is slow, I'll just go back to MySQL…\".\n>\n>Developers should be familiar with the platforms \n>they develop for. 
If they are not and they are \n>not willing to learn them they shouldn't use it.\n\nIf they did that there'll be a lot fewer programs out there.\n\nThere have been lots of popular stuff written by incompetent/ignorant people.\n\nIf there were such a rule, it'll just be one more \nimportant thing they weren't aware of (or choose to ignore).\n\nAnyway, I personally think that having \"small, \nmedium, large\" configs would be useful, at least as examples.\n\nOn a related note: is it possible to have a \nconfig where you can be certain that postgresql \nwill not use more than X MB of memory (and still work OK)?\n\nLink.\n\n\n", "msg_date": "Sat, 18 Oct 2008 00:12:02 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "On Fri, 17 Oct 2008, Andrew Sullivan wrote:\n\n> There is therefore a mail standards reason not to munge the headers, and \n> it rests in the rules about origin fields and in the potential for lost \n> functionality.\n\nI should have included the standard links to both sides of this \ndiscussion:\n\nhttp://www.unicom.com/pw/reply-to-harmful.html\nhttp://www.metasystema.net/essays/reply-to.mhtml\n\nI find the \"Principle of Minimal Bandwidth\" and \"Principle of Least Total \nWork\" arguments in the latter match my personal preferences here better \n(particularly as someone who only cares about on-list replies even more \nthan the 90% of the time given in that example), while respecting that \ntrue RFC-compliance is also a reasonable perspective.\n\nIt's also clear to me you'll never change the mind of anyone who had \nadopted a firm stance on either side here. My spirit for e-mail pedantry \narguments was broken recently anyway, when I had someone I'm compelled to \ncommunicate with regularly complain that they couldn't follow my \ntop-posted messages and requested me to reply \"like everybody else\" to \ntheir mail in the future.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 17 Oct 2008 14:15:51 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Bill Moran wrote:\n> You can resent it or not, but this _is_ a personal thing. It's personal\n> because you are the only one complaining about it. Despite the large\n> number of people on this list, I don't see anyone jumping in to defend\n> you.\n\nI'm another in the crowd that had this same discussion when I joined \nyears ago. I had the same point of view as Mikkel, but I've adapted to \nthe community way of doing things.\n\nWhen I use \"Reply All\" in Thunderbird, it adds a \"To:\" to each of the \nindividuals in the discussion, and a \"CC:\" to the list. Since I \npersonally don't like receiving multiple copies of emails from this \nlist, I delete all of the \"To:\" addressees and change the list from \n\"CC:\" to \"To:\". Would be nice if everyone did the same.\n\n-- \nGuy Rouillier\n", "msg_date": "Fri, 17 Oct 2008 15:01:33 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Guy Rouillier wrote:\n\n> When I use \"Reply All\" in Thunderbird, it adds a \"To:\" to each of the \n> individuals in the discussion, and a \"CC:\" to the list. 
Since I 
> personally don't like receiving multiple copies of emails from this 
> list, I delete all of the "To:" addressees and change the list from 
> "CC:" to "To:". Would be nice if everyone did the same.

I don't know about Thunderbird, but my email client has a keystroke that
does exactly that. I disable that feature though, because I don't like
it; I very much prefer the two copies because I get the fastest one
first (of course, I only receive one -- the system makes sure that the
second one is not delivered to me but discarded silently).

I would be really annoyed if my mail system inflicted so much pain on my
as Thunderbird seems to inflict on you.

-- 
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
", "msg_date": "Fri, 17 Oct 2008 16:15:24 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, Oct 17, 2008 at 9:26 PM, Serge Fonville <[email protected]>wrote:

> Altough I am not sure what the real issue is,I do know that on (for
> example) the tomcat mailing list, when I choose reply (in gmail) the to:
> field contains the address of the mailing list.
> Based on what I know, this should be relatively easy to set up in the
> mailing list manager.
>
> just my 2ct
>
> Serge Fonvilee
>
> On Fri, Oct 17, 2008 at 9:15 PM, Alvaro Herrera <
> [email protected]> wrote:
>
>> Guy Rouillier wrote:
>>
>> > When I use "Reply All" in Thunderbird, it adds a "To:" to each of the
>> > individuals in the discussion, and a "CC:" to the list. Since I
>> > personally don't like receiving multiple copies of emails from this
>> > list, I delete all of the "To:" addressees and change the list from
>> > "CC:" to "To:". Would be nice if everyone did the same.
>>
>> I don't know about Thunderbird, but my email client has a keystroke that
>> does exactly that. I disable that feature though, because I don't like
>> it; I very much prefer the two copies because I get the fastest one
>> first (of course, I only receive one -- the system makes sure that the
>> second one is not delivered to me but discarded silently).
>>
>> I would be really annoyed if my mail system inflicted so much pain on my
>> as Thunderbird seems to inflict on you.
>>
>> --
>> Alvaro Herrera
>> http://www.CommandPrompt.com/
>> The PostgreSQL Company - Command Prompt, Inc.
>>
>> --
>> Sent via pgsql-general mailing list ([email protected])
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
", "msg_date": "Fri, 17 Oct 2008 21:26:27 +0200", "msg_from": "\"Serge Fonville\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "* Guy Rouillier <[email protected]> [081001 00:00]:
> Bill Moran wrote:
> >You can resent it or not, but this _is_ a personal thing. It's personal
> >because you are the only one complaining about it. Despite the large
> >number of people on this list, I don't see anyone jumping in to defend
> >you.
> 
> I'm another in the crowd that had this same discussion when I joined 
> years ago. I had the same point of view as Mikkel, but I've adapted to 
> the community way of doing things.
> 
> When I use "Reply All" in Thunderbird, it adds a "To:" to each of the 
> individuals in the discussion, and a "CC:" to the list. Since I 
> personally don't like receiving multiple copies of emails from this 
> list, I delete all of the "To:" addressees and change the list from 
> "CC:" to "To:". Would be nice if everyone did the same.

Since you asked, I did. 

But now, if the list munged my reply-to, how would you get back to me?

"Clicking on the author", as the 2nd link Greg posted suggested *won't*
work. In fact, my MUA explicitly told you how to get back to me (by
setting a reply-to), but if the MLM munged that...
 
/me is glad that PostgreSQL doesn't "just insert NULL" when I give it an
empty string, just because "NULL is pretty much the same thing" ;-)

-- 
Aidan Van Dyk Create like a god,
[email protected] command like a king,
http://www.highrise.ca/ work like a slave.", "msg_date": "Fri, 17 Oct 2008 15:43:08 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "I am a member of a number of lists, some of which exhibit this
'reply-to' behaviour and I have also managed to adapt... to a point.

Sometimes, however, I do end up replying directly to the poster rather
than through the list. 
Tellingly, I very nearly sent this post\ndirectly to Serge Fonvilee.\n\nWithout wanting to be too controversial, I have generally found that\nthe lists which have the default reply configured like this do tend to\nbe those that are dominated by members who are, shall we say, pedantic\nabout protocol and 'netiquette'.\n\nPersonally I would prefer the default reply-to to go to the list, but\nI'm not really bothered about it.\n\nMy 2 cents.\n\nDisclaimer: If we are not talking about the default 'reply-to'\nbehaviour of this list, please ignore this post; I came upon the\nthread late and it is possible that I am at cross purposes.\n\nKind reagards,\n\nDave Coventry\n\n\n2008/10/17 Serge Fonville <[email protected]>:\n> Altough I am not sure what the real issue is,\n> I do know that on (for example) the tomcat mailing list, when I choose\n> reply (in gmail) the to: field contains the address of the mailing list.\n> Based on what I know, this should be relatively easy to set up in the\n> mailing list manager.\n> just my 2ct\n> Serge Fonvilee\n", "msg_date": "Fri, 17 Oct 2008 22:02:14 +0200", "msg_from": "\"Dave Coventry\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "I am not fond of this approach either. I never find myself replying \ndirectly to the poster.\n\nI actually greatly prefer forums which email me a copy of every post \nwith a nice link to the original thread. 95% of the time I do not even \nneed to use the link. The latest posting is enough.\n\nThis makes things more organized as accessible.\n\nDave Coventry wrote:\n> I am a member of a number of lists, some of which exhibit this\n> 'reply-to' behaviour and I have also managed to adapt... to a point.\n>\n> Sometimes, however, I do end up replying directly to the poster rather\n> than through the list. Tellingly, I very nearly sent this post\n> directly to Serge Fonvilee.\n>\n> Without wanting to be too controversial, I have generally found that\n> the lists which have the default reply configured like this do tend to\n> be those that are dominated by members who are, shall we say, pedantic\n> about protocol and 'netiquette'.\n>\n> Personally I would prefer the default reply-to to go to the list, but\n> I'm not really bothered about it.\n>\n> My 2 cents.\n>\n> Disclaimer: If we are not talking about the default 'reply-to'\n> behaviour of this list, please ignore this post; I came upon the\n> thread late and it is possible that I am at cross purposes.\n>\n> Kind reagards,\n>\n> Dave Coventry\n>\n>\n> 2008/10/17 Serge Fonville <[email protected]>:\n> \n>> Altough I am not sure what the real issue is,\n>> I do know that on (for example) the tomcat mailing list, when I choose\n>> reply (in gmail) the to: field contains the address of the mailing list.\n>> Based on what I know, this should be relatively easy to set up in the\n>> mailing list manager.\n>> just my 2ct\n>> Serge Fonvilee\n>> \n>\n> \n\n\n\n\n\n\n\n\nI am not fond of this approach either.  I never find myself replying\ndirectly to the poster.\n\nI actually greatly prefer forums which email me a copy of every post\nwith a nice link to the original thread.  95% of the time I do not even\nneed to use the link.  The latest posting is enough.\n\nThis makes things more organized as accessible.\n\nDave Coventry wrote:\n\nI am a member of a number of lists, some of which exhibit this\n'reply-to' behaviour and I have also managed to adapt... 
to a point.\n\nSometimes, however, I do end up replying directly to the poster rather\nthan through the list. Tellingly, I very nearly sent this post\ndirectly to Serge Fonvilee.\n\nWithout wanting to be too controversial, I have generally found that\nthe lists which have the default reply configured like this do tend to\nbe those that are dominated by members who are, shall we say, pedantic\nabout protocol and 'netiquette'.\n\nPersonally I would prefer the default reply-to to go to the list, but\nI'm not really bothered about it.\n\nMy 2 cents.\n\nDisclaimer: If we are not talking about the default 'reply-to'\nbehaviour of this list, please ignore this post; I came upon the\nthread late and it is possible that I am at cross purposes.\n\nKind reagards,\n\nDave Coventry\n\n\n2008/10/17 Serge Fonville <[email protected]>:\n \n\nAltough I am not sure what the real issue is,\nI do know that on (for example) the tomcat mailing list, when I choose\nreply (in gmail) the to: field contains the address of the mailing list.\nBased on what I know, this should be relatively easy to set up in the\nmailing list manager.\njust my 2ct\nSerge Fonvilee", "msg_date": "Fri, 17 Oct 2008 15:17:06 -0500", "msg_from": "Jason Long <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Am 2008-10-16 23:17:35, schrieb Mikkel Høgh:\n> P.S.: Why are e-mails from this list not sent with a Reply-To: header \n> of the lists e-mail-address?\n\nBecause if I hit <Reply-To> I want to send a private message and if I\nhit <Reply-To-List> it goes to the list and <Reply-To-All> the all\npeople get bulk-mail from me?\n\nIf you have problems with Reply-To: get a REAL MUA, which support\n<Reply-To-List>\n\nThanks, Greetings and nice Day/Evening\n Michelle Konzack\n Systemadministrator\n 24V Electronic Engineer\n Tamay Dogan Network\n Debian GNU/Linux Consultant\n\n\n-- \nLinux-User #280138 with the Linux Counter, http://counter.li.org/\n##################### Debian GNU/Linux Consultant #####################\nMichelle Konzack Apt. 917 ICQ #328449886\n+49/177/9351947 50, rue de Soultz MSN LinuxMichi\n+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)", "msg_date": "Sat, 18 Oct 2008 03:32:28 +0200", "msg_from": "Michelle Konzack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Am 2008-10-17 12:13:00, schrieb Mikkel Høgh:\n> Besides, the if the Reply-To thing is so dangerous, why do most other \n> mailing lists do it?\n\nCurently I am on 117 Mailinglists and ONLY 2 Winsuck lists do this crap.\nSo, from what are you talking about?\n\nThanks, Greetings and nice Day/Evening\n Michelle Konzack\n Systemadministrator\n 24V Electronic Engineer\n Tamay Dogan Network\n Debian GNU/Linux Consultant\n\n\n-- \nLinux-User #280138 with the Linux Counter, http://counter.li.org/\n##################### Debian GNU/Linux Consultant #####################\nMichelle Konzack Apt. 917 ICQ #328449886\n+49/177/9351947 50, rue de Soultz MSN LinuxMichi\n+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)", "msg_date": "Sat, 18 Oct 2008 03:34:32 +0200", "msg_from": "Michelle Konzack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drupal and PostgreSQL - performance issues?" }, { "msg_contents": "Am 2008-10-17 08:12:00, schrieb Scott Marlowe:\n> I prefer the list the way it is. 
And so do a very large, very silent
> majority of users.

<pip> I agree with you.

I am on Mailinglist since I use the Internet (1995) and there are not
very much mailinglists which manipulate the "Reply-To:" Header...

So, I prefer, HOW this list is.

Of course, I reply with <l> or <Reply-To-List>.

Thanks, Greetings and nice Day/Evening
 Michelle Konzack
 Systemadministrator
 24V Electronic Engineer
 Tamay Dogan Network
 Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
+49/177/9351947 50, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)", "msg_date": "Sat, 18 Oct 2008 03:47:41 +0200", "msg_from": "Michelle Konzack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Hi Martinn, here the great Dictator Michelle!

Am 2008-10-17 10:24:44, schrieb Martin Gainty:
> 
> free unfettered and open discussion without interference from ANY entity is a requirement of a democracy
> the REAL question is ..is this a democracy???

Shut-Up or I will install you Micr0$of SQL Server... LOL ;-)

Thanks, Greetings and nice Day/Evening
 Michelle Konzack
 Systemadministrator
 24V Electronic Engineer
 Tamay Dogan Network
 Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
+49/177/9351947 50, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)", "msg_date": "Sat, 18 Oct 2008 03:50:07 +0200", "msg_from": "Michelle Konzack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Am 2008-10-17 08:42:46, schrieb Andrew Sullivan:
> My suggestion would be to use a mail user agent that knows how to read
> the list headers, which were standardized many years ago. Then you
> "reply to list". Mutt has done this for at least a few years now. I
> don't know about other MUAs.

N.C. ;-)

Thanks, Greetings and nice Day/Evening
 Michelle Konzack
 Systemadministrator
 24V Electronic Engineer
 Tamay Dogan Network
 Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
+49/177/9351947 50, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)", "msg_date": "Sat, 18 Oct 2008 03:55:46 +0200", "msg_from": "Michelle Konzack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "since you are not an advocate of democracy I bid you adieu
Martin 
______________________________________________ 
Disclaimer and confidentiality note 
Everything in this e-mail and any attachments relates to the official business of Sender. This transmission is of a confidential nature and Sender does not endorse distribution to any party other than intended recipient. Sender does not necessarily endorse content contained within this transmission. 


> Date: Sat, 18 Oct 2008 03:50:07 +0200
> From: [email protected]
> To: [email protected]
> Subject: Re: [GENERAL] Annoying Reply-To
> 
> Hi Martinn, here the great Dictator Michelle!
> 
> Am 2008-10-17 10:24:44, schrieb Martin Gainty:
> > 
> > free unfettered and open discussion without interference from ANY entity is a requirement of a democracy
> > the REAL question is ..is this a democracy???
> 
> Shut-Up or I will install you Micr0$of SQL Server... LOL ;-)
> 
> Thanks, Greetings and nice Day/Evening
> Michelle Konzack
> Systemadministrator
> 24V Electronic Engineer
> Tamay Dogan Network
> Debian GNU/Linux Consultant
> 
> 
> -- 
> Linux-User #280138 with the Linux Counter, http://counter.li.org/
> ##################### Debian GNU/Linux Consultant #####################
> Michelle Konzack Apt. 917 ICQ #328449886
> +49/177/9351947 50, rue de Soultz MSN LinuxMichi
> +33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)
", "msg_date": "Mon, 20 Oct 2008 10:56:10 -0400", "msg_from": "Martin Gainty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Friday 17 October 2008 22:01:33 Guy Rouillier wrote:
> When I use "Reply All" in Thunderbird, it adds a "To:" to each of the
> individuals in the discussion, and a "CC:" to the list.  Since I
> personally don't like receiving multiple copies of emails from this
> list, I delete all of the "To:" addressees and change the list from
> "CC:" to "To:".  
Would be nice if everyone did the same.\n\nSet the eliminatecc option in your majordomo configuration to avoid getting \nduplicate mail.\n", "msg_date": "Tue, 21 Oct 2008 18:39:46 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Aidan Van Dyk wrote:\n> But now, if the list munged my reply-to, how would you get back to me?\n\nI wouldn't ;). The whole point of a mailing list is to have discussions \nwith the list. If I wanted to correspond with you directly, I wouldn't \nuse the list for that.\n\n-- \nGuy Rouillier\n", "msg_date": "Tue, 21 Oct 2008 19:32:09 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Fri, 17 Oct 2008, Aidan Van Dyk wrote:\n\n> But now, if the list munged my reply-to, how would you get back to me?\n\nWhy'd you have to interrupt a perfectly good, unwinnable idealogical \nargument with a technical question? While there is only one reply-to \nallowed for a message, you can put multiple addresses in there. It is not \nnecessarily the case that a list that munges the header must be lossy \n(although majordomo isn't a good example here[1]). As most incoming list \nmessages around only have a from, not a reply-to, you can usefully add \nreply-to for regular messages to redirect them to the list (the goal \npeople who are pro list-based reply to want) and append the list address \nto any existing reply-to for the occasional odd message that specifies it \ndirectly, like yours I'm replying to.\n\nAs for an actual implementation of good behavior here, see the end of \nhttp://www.gnu.org/software/mailman/mailman-admin/node11.html for one \nexample of list software that supports adding a reply-to without stripping \nany already there off in the process.\n\n From a RFC 5322 standards-based perspective, I see the crux of the \nargument like this: the reply-to is supposed to be set to the \n\"address(es) to which the author of the message suggests that replies be \nsent\". The RFC says the author is \"the mailbox(es) of the person(s) or \nsystem(s) responsible for the writing of the message\". I don't think it's \ncompletely unreasonable to say the system running the mailing list \noriginating the actual message into my account could be considered a \nco-author of it by that definition. It's a system with a mailbox that's \nresponsible for me receiving the message, and the fact that it does touch \nsome headers says it's writing part of the message (if you consider the \nheader part of the message, which I do).\n\nI'm OK that such interpretation is not considered correct, but it does bug \nme a bit that most of the arguments I see against it are either strawman \nor appeal to authority based rather than focusing on the practical. \nYou'd be better focusing on real-world issues like \"oh, if reply-to were \nset to the list, every idiot subscribed with an auto-reply that doesn't \nrespect the bulk precedence would hit the whole list, thereby introducing \nthe potential for an endless mail-loop\"; now that is much easier to \nswallow as a problem for a large list than RFC trivia.\n\n[1] majordomo doesn't handle this very well out of the box as far as I \nknow. I believe its only behavior is still to only replace the reply-to \nwith the list address. You can insert something in between where messages \nare approved as list-worthy (attachments aren't too big, etc.) 
and when \n\"resend\" is called to implement the same feature: add or extend the \nreply-to with the list address while not losing any explicit reply-to in \nthe original. But I think you still have to hack it in there yourself, as \nI did once long ago.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 21 Oct 2008 23:47:29 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Greg Smith wrote:\n> On Fri, 17 Oct 2008, Bill Moran wrote:\n> \n> > You can resent it or not, but this _is_ a personal thing. It's personal\n> > because you are the only one complaining about it. Despite the large\n> > number of people on this list, I don't see anyone jumping in to defend\n> > you.\n> \n> Mikkel is right, every other well-organized mailing list I've ever been on \n> handles things the sensible way he suggests, but everybody on his side \n> who's been on lists here for a while already knows this issue is a dead \n> horse. Since I use the most advanced e-mail client on the market I just \n> work around that the settings here are weird, it does annoy me a bit \n> anytime I stop to think about it though.\n\nI think this is the crux of the problem --- if I subscribed to multiple\nemail lists, and some have \"rely\" going to the list and some have\n\"reply\" going to the author, I would have to think about the right reply\noption every time I send email.\n\nFortunately, every email list I subscribe to and manage behaves like the\nPostgres lists.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 23 Oct 2008 13:01:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I think this is the crux of the problem --- if I subscribed to multiple\n> email lists, and some have \"rely\" going to the list and some have\n> \"reply\" going to the author, I would have to think about the right reply\n> option every time I send email.\n\nThat's not really the case. I always use \"reply to all\" in mutt (the\n\"g\" key) and it always work; in all the lists I subscribe (including\nthose which set reply-to) and in personal email too.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 23 Oct 2008 14:15:37 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Bruce Momjian wrote:\n>>\n>> Mikkel is right, every other well-organized mailing list I've ever been on \n>> handles things the sensible way he suggests, but everybody on his side \n>> who's been on lists here for a while already knows this issue is a dead \n>> horse. 
Since I use the most advanced e-mail client on the market I just \n>> work around that the settings here are weird, it does annoy me a bit \n>> anytime I stop to think about it though.\n>> \n>\n> I think this is the crux of the problem --- if I subscribed to multiple\n> email lists, and some have \"rely\" going to the list and some have\n> \"reply\" going to the author, I would have to think about the right reply\n> option every time I send email.\n>\n> Fortunately, every email list I subscribe to and manage behaves like the\n> Postgres lists.\n>\n> \n\nI find it difficult to believe that every list you subscribe to behaves \nas the Postgres list does. Not that I'm doubting you, just that it's \ndifficult given that the PG list is the ONLY list I've ever been on to \nuse Reply as just replying to the author. Every other list I've ever \nseen has reply as the list address and requires Reply All to reply to \nthe original poster. Thus, I would fall into the category of people who \nhave to think hard in order to do the correct thing when posting to this \nlist.\n\nI've checked and I can't even find an option to make Thunderbird (the \nclient I use in windows) reply to the list properly with the reply \nbutton (it just cannot be set that way.) You must use Reply All. You \nmight say that that makes Thunderbird crippled but I see it more as a \nsign that nobody outside of a few fussy RFC worshipping types would ever \nwant the behavior of the Postgre list. Yes, I'll have to live with the \ncurrent behavior. Yes, it's an RFC standard. But, even after having \nheard the arguments I'm not convinced that this list's behavior is \ndesirable. YMMV.\n", "msg_date": "Thu, 23 Oct 2008 13:25:47 -0400", "msg_from": "Collin Kidder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "\nOn Oct 23, 2008, at 12:25 PM, Collin Kidder wrote:\n\n> Bruce Momjian wrote:\n>>>\n>>> Mikkel is right, every other well-organized mailing list I've ever \n>>> been on handles things the sensible way he suggests, but everybody \n>>> on his side who's been on lists here for a while already knows \n>>> this issue is a dead horse. Since I use the most advanced e-mail \n>>> client on the market I just work around that the settings here are \n>>> weird, it does annoy me a bit anytime I stop to think about it \n>>> though.\n>>>\n>>\n>> I think this is the crux of the problem --- if I subscribed to \n>> multiple\n>> email lists, and some have \"rely\" going to the list and some have\n>> \"reply\" going to the author, I would have to think about the right \n>> reply\n>> option every time I send email.\n>>\n>> Fortunately, every email list I subscribe to and manage behaves \n>> like the\n>> Postgres lists.\n>>\n>>\n>\n> I find it difficult to believe that every list you subscribe to \n> behaves as the Postgres list does. Not that I'm doubting you, just \n> that it's difficult given that the PG list is the ONLY list I've \n> ever been on to use Reply as just replying to the author. Every \n> other list I've ever seen has reply as the list address and requires \n> Reply All to reply to the original poster. 
Thus, I would fall into \n> the category of people who have to think hard in order to do the \n> correct thing when posting to this list.\n\nI have the same experience, only PG list seems to behave different.\n\nIn my humble opinion I feel that I am subscribed to the list (It also \nsays on the bottom Sent via pgsql-general mailing list ([email protected] \n)), so a reply (not reply all --- remove original author) should go \nback to the list where I am subscribed at, in in my opinion the source \nis the list aswell (that's why I am getting it in the first place).\n\n>\n>\n> I've checked and I can't even find an option to make Thunderbird \n> (the client I use in windows) reply to the list properly with the \n> reply button (it just cannot be set that way.) You must use Reply \n> All. You might say that that makes Thunderbird crippled but I see it \n> more as a sign that nobody outside of a few fussy RFC worshipping \n> types would ever want the behavior of the Postgre list. Yes, I'll \n> have to live with the current behavior. Yes, it's an RFC standard. \n> But, even after having heard the arguments I'm not convinced that \n> this list's behavior is desirable. YMMV.\n\nmail.App is crippled aswell.. I think I will install Mutt again for \nconvenience --- just kidding...\n\nRies\n\n\n\n\n\n\n\n", "msg_date": "Thu, 23 Oct 2008 12:42:29 -0500", "msg_from": "ries van Twisk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "El Jueves 23 Octubre 2008 Collin Kidder escribió:\n\n> >> horse.  Since I use the most advanced e-mail client on the market I just \n> >> work around that the settings here are weird, it does annoy me a bit \n> >> anytime I stop to think about it though.\n\nWhat's such most advanced mail reader??\n\nNo one, ive seen, seems to be perfect nor thunderbird.\nBy the way kmail has 4 options (reply, reply to all, reply to author, reply to list) \nin addition to be able to use list headers included in the message. \n\nin fact many other mail-list dont have such extended features, as not all of the \nprevious 4 options works as expected. For me this makes postgres lists the most\ncomplete about the RFC.\n\nSo is about, thunderbird to move forward one step, not to cripple standars back.\nIn fact this remembers me the M$ way of doing things..\n\nRegards, Angel\n\n\n-- \n------------------------------------------------\nClist UAH\n------------------------------------------------\n", "msg_date": "Thu, 23 Oct 2008 20:09:04 +0200", "msg_from": "Angel Alvarez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Thursday 23 October 2008, Collin Kidder <[email protected]> wrote:\n> You must use Reply All. You\n> might say that that makes Thunderbird crippled but I see it more as a\n> sign that nobody outside of a few fussy RFC worshipping types would ever\n> want the behavior of the Postgre list. Yes, I'll have to live with the\n> current behavior. Yes, it's an RFC standard. But, even after having\n> heard the arguments I'm not convinced that this list's behavior is\n> desirable. YMMV.\n\nIf it bugs you that much, just fix it for yourself.\n\n:0 fr\n* ^(To:|Cc:).*[email protected].*\n| /usr/bin/formail -I \"Reply-To: [email protected]\"\n\n.. 
and eliminate any dupes:\n\n:0 Whc: /home/$USER/.msgid.lock\n|/usr/bin/formail -D 1024 /home/$USER/.msgid.cache\n\nIf your MUA doesn't work right, procmail is your friend.\n\n-- \nAlan\n", "msg_date": "Thu, 23 Oct 2008 11:12:51 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On 23/10/2008 19:09, Angel Alvarez wrote:\n\n> No one, ive seen, seems to be perfect nor thunderbird.\n> By the way kmail has 4 options (reply, reply to all, reply to author, reply to list) \n> in addition to be able to use list headers included in the message. \n\nHere's a \"reply to list\" add-on for ThunderBird - it's marked\nexperimental, but may be worth a try:\n\n https://addons.mozilla.org/en-US/thunderbird/addon/4455\n\nRay.\n\n------------------------------------------------------------------\nRaymond O'Donnell, Director of Music, Galway Cathedral, Ireland\[email protected]\nGalway Cathedral Recitals: http://www.galwaycathedral.org/recitals\n------------------------------------------------------------------\n", "msg_date": "Thu, 23 Oct 2008 19:31:22 +0100", "msg_from": "Raymond O'Donnell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Angel Alvarez wrote:\n> What's such most advanced mail reader??\n>\n> No one, ive seen, seems to be perfect nor thunderbird.\n> By the way kmail has 4 options (reply, reply to all, reply to author, reply to list) \n> in addition to be able to use list headers included in the message. \n>\n> in fact many other mail-list dont have such extended features, as not all of the \n> previous 4 options works as expected. For me this makes postgres lists the most\n> complete about the RFC.\n>\n> So is about, thunderbird to move forward one step, not to cripple standars back.\n> In fact this remembers me the M$ way of doing things..\n>\n> Regards, Angel\n>\n>\n> \n\nOne could argue that a standard which is respected by nobody but a few \npeople from this list is NOT a standard but rather a botched attempt at \ncreating a standard which no one wanted.\n", "msg_date": "Thu, 23 Oct 2008 14:31:38 -0400", "msg_from": "Collin Kidder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Well\n\nbut the RFC's were in fact prior to thunderbird\nSo for he most of its life, when few people was using it, \nThiunderbird was a sad example of your botched attempt of creating a\nstandar of NOT FOLLOWING THE RFC's...\n\nWell, also M$ thought they invented internet so its a common mistake.\n\nMay be you can push Thunderbird guys a bit to include a little more funcionailty\nother than complaining others that try to follow what to seemed to be the right way.\n\nAre we going to try be standars compliant or we keep trying to reinvent the wheel?\n\nPoor RFC people, what a waste of time...\n\nRegards, Angel\n\nEl Jueves 23 Octubre 2008 Collin Kidder escribió:\n> Angel Alvarez wrote:\n> > What's such most advanced mail reader??\n> >\n> > No one, ive seen, seems to be perfect nor thunderbird.\n> > By the way kmail has 4 options (reply, reply to all, reply to author, reply to list) \n> > in addition to be able to use list headers included in the message. \n> >\n> > in fact many other mail-list dont have such extended features, as not all of the \n> > previous 4 options works as expected. 
For me this makes postgres lists the most\n> > complete about the RFC.\n> >\n> > So is about, thunderbird to move forward one step, not to cripple standars back.\n> > In fact this remembers me the M$ way of doing things..\n> >\n> > Regards, Angel\n> >\n> >\n> > \n> \n> One could argue that a standard which is respected by nobody but a few \n> people from this list is NOT a standard but rather a botched attempt at \n> creating a standard which no one wanted.\n> \n\n\n\n-- \nNingún personajillo ha sido vilipendiado si no es necesario. El 'buen rollo' está en nuestras manos.\n->>-----------------------------------------------\n Clist UAH a.k.a Angel\n---------------------------------[www.uah.es]-<<--\n\"BIG BANG: átomo primigenio que, debido a una incorrecta manipulación por parte de Dios, explotó y provocó una tremenda algarada en el vacío, razón por la que, aún hoy, las empresas aseguradoras no quieren ni oír hablar de Dios.\"\n", "msg_date": "Thu, 23 Oct 2008 20:46:54 +0200", "msg_from": "Angel Alvarez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Angel Alvarez wrote:\n> Well\n>\n> but the RFC's were in fact prior to thunderbird\n> So for he most of its life, when few people was using it, \n> Thiunderbird was a sad example of your botched attempt of creating a\n> standar of NOT FOLLOWING THE RFC's...\n> \nBut, as I mentioned, nobody cares about this particular standard. In my \nopinion a standard which is totally ignored by almost everyone is \neffectively dead and worthless.\n\n> Well, also M$ thought they invented internet so its a common mistake.\n> \n\nI thought that was Al Gore that invented the internet. ;-)\n\n> May be you can push Thunderbird guys a bit to include a little more funcionailty\n> other than complaining others that try to follow what to seemed to be the right way.\n>\n> Are we going to try be standars compliant or we keep trying to reinvent the wheel?\n>\n> Poor RFC people, what a waste of time...\n> \nStandards are all well and good but anything should be evaluated for \nit's utility. If a standard is undesirable then it's undesirable. Most \nmailing lists do not exhibit the same behavior as this list not because \nthey are all ignorant of the standard but because they feel that \nfollowing the standard is not desirable. I'm perfectly fine to follow \nthe convention of this list. Some lists like top posting. Not this one. \nThat's ok, I'll bottom or interleaved post. All I'm saying is that one \ncannot look to standards as gospel and disengage their brains. It's \nperfectly acceptable to say 'this standard sucks!' And so, I along with \na couple of other people from this list, say 'this standard sucks.'\n", "msg_date": "Thu, 23 Oct 2008 15:01:59 -0400", "msg_from": "Collin Kidder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Thu, Oct 23, 2008 at 01:25:47PM -0400, Collin Kidder wrote:\n\n> that that makes Thunderbird crippled but I see it more as a sign that \n> nobody outside of a few fussy RFC worshipping types would ever want the \n> behavior of the Postgre list. \n\nIndeed. 
And PostgreSQL not interpreting '' as NULL, or '2008-02-31'\nas a date, or other such silly strictness is just the imposition on\nyou of the personal views of a few fussy ANSI worshipping types.\nNobody would ever want such behaviour.\n\nOf course, you could consider that the behaviour as defined in the\nstandards, which are there to ensure good interoperability, were\nwritten over many years by painstaking standards weenie types who\nspent a great amount of time thinking about the advantages and\ndisadvantages of these various options.[1] Or perhaps you think that\nyou're the only person to whom it ever occurred that some different\nbehaviour might be desirable?\n\nSorry, but pointing and laughing at people with whom you disagree\ndoesn't constitute an argument. Relying on an appeal to popularity\nis, in fact, a well-known fallacy (sometimes known as _ad populum_).\nIf you want to argue that the standard is wrong, you need something\nbetter than this. Alternatively, please go stand in the corner with\nthe people who think that MySQL version 3.x is the pinnacle of correct\ndatabase behaviour.\n\nA\n\n[1] To pick an example I can think of off the top of my head, you\nwould not believe just how much wrangling has gone on this year alone\nover whether the \"�\" character (LATIN SMALL LETTER SHARP S) should or\nshould not be allowed into internationalized domain names.\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Thu, 23 Oct 2008 15:04:47 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "On standards weenies (was: Annoying Reply-To)" }, { "msg_contents": "\nOn Oct 23, 2008, at 12:01 PM, Collin Kidder wrote:\n\n> Angel Alvarez wrote:\n>> Well\n>>\n>> but the RFC's were in fact prior to thunderbird\n>> So for he most of its life, when few people was using it, \n>> Thiunderbird was a sad example of your botched attempt of creating a\n>> standar of NOT FOLLOWING THE RFC's...\n>>\n> But, as I mentioned, nobody cares about this particular standard. In \n> my opinion a standard which is totally ignored by almost everyone is \n> effectively dead and worthless.\n\nIf you don't like it (and this applies to everyone else arguing about \nit, on either side) please do one of these three things:\n\n1. \"Fix\" it locally at your end, as is trivial to do with procmail, \namongst other approaches, and quit whining about it.\n\nor\n\n2. Quit whining about it.\n\nor\n\n3. Find somewhere else to whine about it and quit whining about it here.\n\nCheers,\n Steve\n\n", "msg_date": "Thu, 23 Oct 2008 12:07:34 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Thu, 23 Oct 2008, Angel Alvarez wrote:\n\n>>>> horse.  Since I use the most advanced e-mail client on the market I just\n>>>> work around that the settings here are weird\n>\n> What's such most advanced mail reader??\n\nThat quoted bit was actually from me, I was hoping to get a laugh out of \nanyone who actually looked at the header of my messages to see what I use. 
\nOr perhaps start a flamewar with those deviant mutt users; that would be \nabout as productive as the continued existence of this thread.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Thu, 23 Oct 2008 16:44:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Greg Smith escribi�:\n> On Thu, 23 Oct 2008, Angel Alvarez wrote:\n>\n>>>>> horse. �Since I use the most advanced e-mail client on the market I just\n>>>>> work around that the settings here are weird\n>>\n>> What's such most advanced mail reader??\n>\n> That quoted bit was actually from me, I was hoping to get a laugh out of \n> anyone who actually looked at the header of my messages to see what I \n> use.\n\nYou did get a laugh from me, one of those deviant mutt users ;-)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 23 Oct 2008 17:51:56 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "\nOn Oct 23, 2008, at 3:44 PM, Greg Smith wrote:\n\n> On Thu, 23 Oct 2008, Angel Alvarez wrote:\n>\n>>>>> horse. Since I use the most advanced e-mail client on the \n>>>>> market I just\n>>>>> work around that the settings here are weird\n>>\n>> What's such most advanced mail reader??\n>\n> That quoted bit was actually from me, I was hoping to get a laugh \n> out of anyone who actually looked at the header of my messages to \n> see what I use. Or perhaps start a flamewar with those deviant mutt \n> users; that would be about as productive as the continued existence \n> of this thread.\n\n\nJust checked... it says : Sender: [email protected]\nDoes that mean a reply should go back to the sender???? :D Just \nkidding....\n\n\nanyways.. I don't care anymore... I will do a reply all.\n\n\n\t\t\tregards, Ries van Twisk\n\n\n-------------------------------------------------------------------------------------------------\nA: Because it messes up the order in which people normally read text.\nQ: Why is top-posting such a bad thing?\nA: Top-posting.\nQ: What is the most annoying thing in e-mail?\n\n", "msg_date": "Thu, 23 Oct 2008 15:52:30 -0500", "msg_from": "ries van Twisk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "2008/10/23 Steve Atkins <[email protected]>:\n> If you don't like it (and this applies to everyone else arguing about it, on\n> either side) please do one of these three things:\n>\n> 1. \"Fix\" it locally at your end, as is trivial to do with procmail, amongst\n> other approaches, and quit whining about it.\n>\n> or\n>\n> 2. Quit whining about it.\n>\n> or\n>\n> 3. Find somewhere else to whine about it and quit whining about it here.\n>\n> Cheers,\n> Steve\n\n> On Fri, 17 Oct 2008, Bill Moran wrote:\n>\n> > You can resent it or not, but this _is_ a personal thing. It's personal\n> > because you are the only one complaining about it. 
Despite the large\n> > number of people on this list, I don't see anyone jumping in to defend\n> > you.\n\nPersonally I am of the view that, as I am on this list principally to\nget support from it, and I am quite prepared to submit to the vagaries\nand oddities of the list behaviour in pursuit of the answers I might\nseek.\n\nAs such, I am quite prepared to 1) Fix it my end, 2) Quit whining\nabout it and 3) Find something else to whine about.\n\nHowever, the point is made by Bill that 'only one person' might feel\nthat the reply-to configuration could be improved, and I feel\ncompelled to say that, while I might not be driven to complain about\nthe list behaviour myself, I do feel that the OP does have a point.\n\nAnd that's all that I'm going to say on the matter.\n\n- Dave Coventry\n", "msg_date": "Thu, 23 Oct 2008 22:54:41 +0200", "msg_from": "\"Dave Coventry\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "Hulou hjuvat folkenbergers und goody good peoples ute po daer in allas e \noceanos / terranos,\n\nI chose to 'Reply to Mailing-List' in my preferred mail app. (Kmail on Ubuntu \n8.10) I really don't know why ... but shore hope, mean sure, as in truly, this \nis hopefully not an annoying postage broadcastung. wink, lol, capatcha, \ntsajajaja, manolito!\n\nFurthermore, I am pop to(a)sting. Phonetically reversed, top posting.\n\nPuff the embers every once and a while. To have [a] flame[s] --help?.\n\nKeep on keepin' on.\n\nSamiTjahkasTjohkasny, äeeäee kenen poro, Vincentin Genen? Simmons? Gruppa? \nGene? Ootsiäee ihan varma? Oks sähöstarttii, ahe! Läks, where's mi head \n(laughing it off)?\n\nFocus. \"Remember, there's a big difference between kneeling down and bending \nover.\"\n-- great late FZ --\n\nWha'ever dad means, I think we need some standards.\n\n1. I wish\n\n2. I had a button that kept me from sending silly messages.\n\n3. 
Whoops, still not do.\n\nRock,\n\nAarni\n\nReceived: from mx2.hub.org ([200.46.204.254]:26819 \"EHLO mx2.hub.org\")\n\tby northstar.iwn.fi with ESMTP id S10843AbYJWU4D (ORCPT\n\t<rfc822;[email protected]>); Thu, 23 Oct 2008 23:56:03 +0300\nReceived: from postgresql.org (unknown [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id 2DEDA1E8229B;\n\tThu, 23 Oct 2008 17:55:49 -0300 (ADT)\nReceived: from localhost (unknown [200.46.204.183])\n\tby postgresql.org (Postfix) with ESMTP id B161764FD71\n\tfor <[email protected]>; Thu, 23 Oct 2008 17:54:47 \n-0300 (ADT)\nReceived: from postgresql.org ([200.46.204.86])\n by localhost (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)\n with ESMTP id 73727-01-4 for <[email protected]>;\n Thu, 23 Oct 2008 17:54:44 -0300 (ADT)\nX-Greylist: domain auto-whitelisted by SQLgrey-1.7.6\nReceived: from rv-out-0506.google.com (rv-out-0506.google.com \n[209.85.198.231])\n\tby postgresql.org (Postfix) with ESMTP id 301A164FD93\n\tfor <[email protected]>; Thu, 23 Oct 2008 17:54:42 -0300 (ADT)\nReceived: by rv-out-0506.google.com with SMTP id b25so482917rvf.43\n for <[email protected]>; Thu, 23 Oct 2008 13:54:41 -0700 \n(PDT)\nDKIM-Signature:\tv=1; a=rsa-sha256; c=relaxed/relaxed;\n d=gmail.com; s=gamma;\n h=domainkey-signature:received:received:message-id:date:from:to\n :subject:cc:in-reply-to:mime-version:content-type\n :content-transfer-encoding:content-disposition:references;\n bh=7Asrle/2g7OnvL+HaGS+X0l8PHMMoueNpeFTF88Jfzs=;\n b=cyRbNr15f00Tbu7laTnSM9nSmJhkwnqBa04eGgWN7AV9nfxSEqrCM0EhMiy0ZFPec1\n PzBioeul06+v2CNCD7LDc3KiTHky2DliiA7gYedUtJcW6UJtjbeqo1CxBqEA/sM47FHO\n fHmhUNIJ9nh5tnfI4SejEiDdht1mgmGsdbFDM=\nDomainKey-Signature: a=rsa-sha1; c=nofws;\n d=gmail.com; s=gamma;\n h=message-id:date:from:to:subject:cc:in-reply-to:mime-version\n :content-type:content-transfer-encoding:content-disposition\n :references;\n b=UIdt6g6DfV7h1z9tYcTxlUF90IFoeN6iAAH6GHrzWsp5VpluWUEIVgsZloJkDcIFeh\n ffuxRD9Ucq2bPWcnk2flEeLbyCQwOvDsiU5WR24JGYF5LAPugouNEKW5xeLAOnN5JA8/\n V7/aUnJMkdkdCNxpLyGBxCUsJcbVKgGDdyaN0=\nReceived: by 10.141.162.16 with SMTP id p16mr661554rvo.243.1224795281837;\n Thu, 23 Oct 2008 13:54:41 -0700 (PDT)\nReceived: by 10.141.77.14 with HTTP; Thu, 23 Oct 2008 13:54:41 -0700 (PDT)\nMessage-ID: <[email protected]>\nDate:\tThu, 23 Oct 2008 22:54:41 +0200\nFrom:\t\"Dave Coventry\" <[email protected]>\nTo: \"Steve Atkins\" <[email protected]>\nSubject: Re: [GENERAL] Annoying Reply-To\nCc: \"pgsql-general General\" <[email protected]>\nIn-Reply-To: <[email protected]>\nMIME-Version: 1.0\nContent-Type: text/plain;\n charset=ISO-8859-1\nContent-Transfer-Encoding: 7bit\nContent-Disposition: inline\nReferences: <[email protected]>\n\t <[email protected]> <[email protected]>\n\t <[email protected]> <[email protected]>\n\t <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=0 tagged_above=0 required=5 tests=none\nX-Spam-Level: \nX-Mailing-List:\tpgsql-general\nList-Archive: <http://archives.postgresql.org/pgsql-general>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-general.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:[email protected]>\nList-Subscribe:\t<mailto:[email protected]?body=sub%20pgsql-general>\nList-Unsubscribe: <mailto:[email protected]?body=unsub%20pgsql-general>\nPrecedence: bulk\nSender:\[email protected]\nX-MailScanner-Information: Please contact the ISP for more information\nX-MailScanner: Found to be clean\nReturn-Path: <[email protected]>\nX-Envelope-To: <[email 
protected]> (uid 0)\nX-Orcpt: rfc822;[email protected]\nOriginal-Recipient: rfc822;[email protected]\nStatus: R\nX-Status: N\nX-KMail-EncryptionState: \nX-KMail-SignatureState: \nX-KMail-MDN-Sent: \n\n\nOn Thursday 23 October 2008 23:54:41 Dave Coventry wrote:\n> 2008/10/23 Steve Atkins <[email protected]>:\n> > If you don't like it (and this applies to everyone else arguing about it,\n> > on either side) please do one of these three things:\n> >\n> > 1. \"Fix\" it locally at your end, as is trivial to do with procmail,\n> > amongst other approaches, and quit whining about it.\n> >\n> > or\n> >\n> > 2. Quit whining about it.\n> >\n> > or\n> >\n> > 3. Find somewhere else to whine about it and quit whining about it here.\n> >\n> > Cheers,\n> > Steve\n> >\n> > On Fri, 17 Oct 2008, Bill Moran wrote:\n> > > You can resent it or not, but this _is_ a personal thing. It's\n> > > personal because you are the only one complaining about it. Despite\n> > > the large number of people on this list, I don't see anyone jumping in\n> > > to defend you.\n>\n> Personally I am of the view that, as I am on this list principally to\n> get support from it, and I am quite prepared to submit to the vagaries\n> and oddities of the list behaviour in pursuit of the answers I might\n> seek.\n>\n> As such, I am quite prepared to 1) Fix it my end, 2) Quit whining\n> about it and 3) Find something else to whine about.\n>\n> However, the point is made by Bill that 'only one person' might feel\n> that the reply-to configuration could be improved, and I feel\n> compelled to say that, while I might not be driven to complain about\n> the list behaviour myself, I do feel that the OP does have a point.\n>\n> And that's all that I'm going to say on the matter.\n>\n> - Dave Coventry\n", "msg_date": "Fri, 24 Oct 2008 01:02:37 +0300", "msg_from": "Aarni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" }, { "msg_contents": "On Thu, Oct 23, 2008 at 10:42 AM, ries van Twisk <[email protected]> wrote:\n\n>\n> On Oct 23, 2008, at 12:25 PM, Collin Kidder wrote:\n>\n> Bruce Momjian wrote:\n>>\n>>>\n>>>> Mikkel is right, every other well-organized mailing list I've ever been\n>>>> on handles things the sensible way he suggests, but everybody on his side\n>>>> who's been on lists here for a while already knows this issue is a dead\n>>>> horse. Since I use the most advanced e-mail client on the market I just\n>>>> work around that the settings here are weird, it does annoy me a bit anytime\n>>>> I stop to think about it though.\n>>>>\n>>>>\n>>> I think this is the crux of the problem --- if I subscribed to multiple\n>>> email lists, and some have \"rely\" going to the list and some have\n>>> \"reply\" going to the author, I would have to think about the right reply\n>>> option every time I send email.\n>>>\n>>> Fortunately, every email list I subscribe to and manage behaves like the\n>>> Postgres lists.\n>>>\n>>>\n>>>\n>> I find it difficult to believe that every list you subscribe to behaves as\n>> the Postgres list does. Not that I'm doubting you, just that it's difficult\n>> given that the PG list is the ONLY list I've ever been on to use Reply as\n>> just replying to the author. 
Every other list I've ever seen has reply as\n>> the list address and requires Reply All to reply to the original poster.\n>> Thus, I would fall into the category of people who have to think hard in\n>> order to do the correct thing when posting to this list.\n>>\n>\n> I have the same experience, only PG list seems to behave different.\n>\n> In my humble opinion I feel that I am subscribed to the list (It also says\n> on the bottom Sent via pgsql-general mailing list (\n> [email protected])), so a reply (not reply all --- remove\n> original author) should go back to the list where I am subscribed at, in in\n> my opinion the source is the list aswell (that's why I am getting it in the\n> first place).\n>\n\nI know of at least one other list that is similar: MySQL.\n\nAnd I brought it up a year ago with no eventual change:\nhttp://lists.mysql.com/mysql/209593\n\nAfter a while you just get used to hitting reply all when you mean to reply\nall. I now prefer (though not strongly) this setting.\n\n\n-- \nRob Wultsch\n", "msg_date": "Thu, 23 Oct 2008 15:37:15 -0700", "msg_from": "\"Rob Wultsch\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" },
{ "msg_contents": "Raymond O'Donnell wrote:\n> On 23/10/2008 19:09, Angel Alvarez wrote:\n> \n>> No one, ive seen, seems to be perfect nor thunderbird.\n>> By the way kmail has 4 options (reply, reply to all, reply to author, reply to list) \n>> in addition to be able to use list headers included in the message. \n> \n> Here's a \"reply to list\" add-on for ThunderBird - it's marked\n> experimental, but may be worth a try:\n> \n> https://addons.mozilla.org/en-US/thunderbird/addon/4455\n\nWorks great! Thanks, Ray - no more complaints from me ;). Anyone using \nThunderbird to read this list would benefit from this add-on.\n\n-- \nGuy Rouillier\n", "msg_date": "Thu, 23 Oct 2008 19:18:35 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" },
{ "msg_contents": "Am 2008-10-23 15:52:30, schrieb ries van Twisk:\n> anyways.. I don't care anymore... I will do a reply all.\n\nI do normaly: killall ;-)\n\nThanks, Greetings and nice Day/Evening\n Michelle Konzack\n Systemadministrator\n 24V Electronic Engineer\n Tamay Dogan Network\n Debian GNU/Linux Consultant\n\n\n-- \nLinux-User #280138 with the Linux Counter, http://counter.li.org/\n##################### Debian GNU/Linux Consultant #####################\nMichelle Konzack Apt. 917 ICQ #328449886\n+49/177/9351947 50, rue de Soultz MSN LinuxMichi\n+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)", "msg_date": "Fri, 24 Oct 2008 17:11:06 +0200", "msg_from": "Michelle Konzack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying Reply-To" } ]
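For anyone taking the "fix it locally" route suggested earlier in the thread, a minimal procmail sketch could look like the following (the List-ID pattern and the list address are assumptions, so adapt them per list and test on a spare mailbox first):

    # ~/.procmailrc -- locally add a Reply-To pointing back at the list
    :0 fhw
    * ^List-ID:.*pgsql-general\.postgresql\.org
    | formail -I "Reply-To: [email protected]"

formail -I removes any existing Reply-To header before adding the new one, so a plain reply in the mail client goes back to the list while the list's other headers stay untouched.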
[ { "msg_contents": "I'm running a medium-traffic Web site that has been running for a few \nyears, and which uses about four PostgreSQL databases on a regular \nbasis. I'm currently running 8.2, although I'm planning to upgrade to \n8.3 in the coming week or two, in part because of the problems that I'm \nhaving. The databases consume a combined total of 35 GB. Like a good \nboy, I've been backing the system up overnight, when we have less \ntraffic, since the site began to run. I use pg_dump to back up, saving \nboth schemas and data for a full restore in case of failure. pg_dump \ntypically executes from another machine on a local network; if it would \nhelp to run pg_dump locally, then I'm certainly open to doing that.\n\nOver the last month or two, database performance has become increasingly \nproblematic during the hours that I run pg_dump. Moreover, the size of \nthe database has gotten to the point where it takes a good number of \nhours to dump everything to disk. This ends up interfering with our \nusers on the East Coast of the United States, when they access our site \nearly in the morning.\n\nOne possible solution is for me to backup our main database more \nregularly, and our development database less regularly. But given the \ngrowth in our traffic (about double what it was 12 months ago), I have \nto assume that this isn't a long-term solution. \n\nI'm also considering taking our oldest data and sticking into a separate \ndatabase (sort of a data warehouse), so that the production database \nbecomes smaller, and thus easier to back up.\n\nBut before I do any of these things, I want to hear what others have \ndiscovered in terms of high-performance backups. Is there a way to stop \npg_dump from locking up the database so much? Is there a knob that I \ncan turn to do a low-priority backup while the live site is running? Is \nthere a superior backup strategy than pg_dump every 24 hours?\n\nThanks in advance for any advice you can offer!\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n", "msg_date": "Tue, 14 Oct 2008 19:27:47 +0200", "msg_from": "\"Reuven M. Lerner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Backup strategies" }, { "msg_contents": ">>> \"Reuven M. Lerner\" <[email protected]> wrote: \n> Is there a superior backup strategy than pg_dump\n> every 24 hours?\n \nYou should probably consider:\n \nhttp://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html\n \n-Kevin\n", "msg_date": "Tue, 14 Oct 2008 12:51:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Reuven M. Lerner wrote:\n\n> But before I do any of these things, I want to hear what others have\n> discovered in terms of high-performance backups. Is there a way to stop\n> pg_dump from locking up the database so much? Is there a knob that I\n> can turn to do a low-priority backup while the live site is running? Is\n> there a superior backup strategy than pg_dump every 24 hours?\n\nIf you are sysadmin-minded and your operating system & file system\nsupport snapshots, an easy solution (and the one I use) is to create a\nread-only snapshot of the file system with the (binary) database files\nand back that up. 
The approach has some benefits:\n\n* It won't interfere with \"normal\" database operations (no locking;\nthough I'm not sure that locking is your problem here as pgsql uses MVCC)\n* It works at disk speeds instead of converting data back to SQL for storage\n* Restoring the database works automagically - no need to import the\ndata from SQL back\n* It's convenient to backup snapshots with usual file system backup\nutilities. Tar works fine.\n\nIt also has some significant disadvantages:\n\n* The binary database representation is usually much larger than the SQL\ntext one (because of indexes and internal structures). OTOH you can\neasily use tar with gzip to compress it on the fly.\n* Technically, the snapshot of the database you're taking represents a\ncorrupted database, which is repaired automatically when it's restored.\nIt's similar to as if you pulled the plug on the server while it was\nworking - PostgreSQL will repair itself.\n* You cannot restore the database to a different version of PostgreSQL.\nThe same rules apply as if upgrading - for example you can run data from\n8.3.0 on 8.3.3 but not from 8.2.0 to 8.3.0.\n\nWarning: DO NOT do on-the-fly binary backups without snapshots.\nArchiving the database directory with tar on a regular file system,\nwhile the server is running, will result in an archive that most likely\nwon't work when restored.", "msg_date": "Wed, 15 Oct 2008 11:30:40 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Ivan Voras wrote:\n> Warning: DO NOT do on-the-fly binary backups without snapshots.\n> Archiving the database directory with tar on a regular file system,\n> while the server is running, will result in an archive that most likely\n> won't work when restored.\n\nEven if you do a \"pg_start_backup/pg_stop_backup\" as specified here:\n\nhttp://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html \n=> Making a Base backup.\n\n??\n\nIt worked when I tested it, but I may just have been darn lucky.\n\n-- \nJesper\n", "msg_date": "Wed, 15 Oct 2008 11:46:36 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Ivan Voras wrote:\n\n> Warning: DO NOT do on-the-fly binary backups without snapshots.\n> Archiving the database directory with tar on a regular file system,\n> while the server is running, will result in an archive that most likely\n> won't work when restored.\n\nYou can do non-snapshot-based filesystem level backups with\npg_start_backup() and pg_stop_backup() as part of a PITR setup. See:\n\nhttp://www.postgresql.org/docs/8.3/static/continuous-archiving.html\n\nThat's the setup I use, with a full backup taken weekly and WAL files\narchived from then until the next full backup. 
There is always at least\none full backup at any time in case a backup fails, and I can roll back\nin time for a minimum of a week if anything goes wrong.\n\nI also include plain SQL dumps from pg_dump in the nightly disaster\nrecovery backups.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 15 Oct 2008 18:13:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Jesper Krogh wrote:\n> Ivan Voras wrote:\n>> Warning: DO NOT do on-the-fly binary backups without snapshots.\n>> Archiving the database directory with tar on a regular file system,\n>> while the server is running, will result in an archive that most likely\n>> won't work when restored.\n> \n> Even if you do a \"pg_start_backup/pg_stop_backup\" as specified here:\n> \n> http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n> => Making a Base backup.\n> \n> ??\n> \n> It worked when I tested it, but I may just have been darn lucky.\n\nNo, it should be ok - I just didn't catch up with the times :) At least\nthat's my interpretation of this paragraph in documentation:\n\n\"\"\"Some backup tools that you might wish to use emit warnings or errors\nif the files they are trying to copy change while the copy proceeds.\nThis situation is normal, and not an error, when taking a base backup of\nan active database; so you need to ensure that you can distinguish\ncomplaints of this sort from real errors...\"\"\"\n\nIt looks like PostgreSQL freezes the state of the \"data\" directory in\nthis case (and new data goes only to the transaction log - pg_xlog),\nwhich effectively creates an application-level snapshot. Good to know.", "msg_date": "Wed, 15 Oct 2008 12:31:08 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Ivan Voras a écrit :\n> Jesper Krogh wrote:\n>>[...]\n>> It worked when I tested it, but I may just have been darn lucky.\n> \n> No, it should be ok - I just didn't catch up with the times :) At least\n> that's my interpretation of this paragraph in documentation:\n> \n> \"\"\"Some backup tools that you might wish to use emit warnings or errors\n> if the files they are trying to copy change while the copy proceeds.\n> This situation is normal, and not an error, when taking a base backup of\n> an active database; so you need to ensure that you can distinguish\n> complaints of this sort from real errors...\"\"\"\n> \n> It looks like PostgreSQL freezes the state of the \"data\" directory in\n> this case (and new data goes only to the transaction log - pg_xlog),\n> which effectively creates an application-level snapshot. Good to know.\n> \n\nNope. Even files in data directory change. That's why the documentation\nwarns against tools that emit errors for files that change during the copy.\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n", "msg_date": "Wed, 15 Oct 2008 13:40:30 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Guillaume Lelarge wrote:\n> Ivan Voras a écrit :\n\n>> It looks like PostgreSQL freezes the state of the \"data\" directory in\n>> this case (and new data goes only to the transaction log - pg_xlog),\n>> which effectively creates an application-level snapshot. Good to know.\n> \n> Nope. Even files in data directory change. 
That's why the documentation\n> warns against tools that emit errors for files that change during the copy.\n\nOk, thanks. This is a bit off-topic, but if it's not how I imagine it,\nthen how is it implemented?", "msg_date": "Wed, 15 Oct 2008 15:21:12 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "On Wed, 15 Oct 2008, Ivan Voras wrote:\n>> Nope. Even files in data directory change. That's why the documentation\n>> warns against tools that emit errors for files that change during the copy.\n>\n> Ok, thanks. This is a bit off-topic, but if it's not how I imagine it,\n> then how is it implemented?\n\nThe files may change, but it doesn't matter, because there is enough \ninformation in the xlog to correct it all.\n\nMatthew\n\n-- \nOf course it's your fault. Everything here's your fault - it says so in your\ncontract. - Quark\n", "msg_date": "Wed, 15 Oct 2008 14:37:40 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Wed, 15 Oct 2008, Ivan Voras wrote:\n>>> Nope. Even files in data directory change. That's why the documentation\n>>> warns against tools that emit errors for files that change during the\n>>> copy.\n>>\n>> Ok, thanks. This is a bit off-topic, but if it's not how I imagine it,\n>> then how is it implemented?\n> \n> The files may change, but it doesn't matter, because there is enough\n> information in the xlog to correct it all.\n\nI'm thinking about these paragraphs in the documentation:\n\n\"\"\"\nBe certain that your backup dump includes all of the files underneath\nthe database cluster directory (e.g., /usr/local/pgsql/data). If you are\nusing tablespaces that do not reside underneath this directory, be\ncareful to include them as well (and be sure that your backup dump\narchives symbolic links as links, otherwise the restore will mess up\nyour tablespaces).\n\nYou can, however, omit from the backup dump the files within the\npg_xlog/ subdirectory of the cluster directory. This slight complication\nis worthwhile because it reduces the risk of mistakes when restoring.\nThis is easy to arrange if pg_xlog/ is a symbolic link pointing to\nsomeplace outside the cluster directory, which is a common setup anyway\nfor performance reasons.\n\"\"\"\n\nSo, pg_start_backup() freezes the data at the time it's called but still\ndata and xlog are changed, in a different way that's safe to backup? Why\nnot run with pg_start_backup() always enabled?", "msg_date": "Wed, 15 Oct 2008 16:05:06 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "2008/10/15 Ivan Voras <[email protected]>:\n> Matthew Wakeling wrote:\n>> On Wed, 15 Oct 2008, Ivan Voras wrote:\n>>>> Nope. Even files in data directory change. That's why the documentation\n>>>> warns against tools that emit errors for files that change during the\n>>>> copy.\n>>>\n>>> Ok, thanks. This is a bit off-topic, but if it's not how I imagine it,\n>>> then how is it implemented?\n>>\n>> The files may change, but it doesn't matter, because there is enough\n>> information in the xlog to correct it all.\n>\n> I'm thinking about these paragraphs in the documentation:\n>\n> \"\"\"\n> Be certain that your backup dump includes all of the files underneath\n> the database cluster directory (e.g., /usr/local/pgsql/data). 
If you are\n> using tablespaces that do not reside underneath this directory, be\n> careful to include them as well (and be sure that your backup dump\n> archives symbolic links as links, otherwise the restore will mess up\n> your tablespaces).\n>\n> You can, however, omit from the backup dump the files within the\n> pg_xlog/ subdirectory of the cluster directory. This slight complication\n> is worthwhile because it reduces the risk of mistakes when restoring.\n> This is easy to arrange if pg_xlog/ is a symbolic link pointing to\n> someplace outside the cluster directory, which is a common setup anyway\n> for performance reasons.\n> \"\"\"\n>\n> So, pg_start_backup() freezes the data at the time it's called but still\n> data and xlog are changed, in a different way that's safe to backup? Why\n> not run with pg_start_backup() always enabled?\n>\n\nBecause nothing would get vacuumed and your data would just grow and grow.\n\nYour data is held at the point in time when you typed pg_start_backup\nso when you restore your data is back at that point. If you need to go\nforward you need the xlog. (hence point in time backup....)\n\nThis is all part of the mvcc feature that PostgreSQL has.\n\nPostgreSQL never delete anything until nothing can read it anymore, So\nif you vacuum during a backup it will only delete stuff that was\nfinished with before the backup started.\n\nIf you don't do a pg_start_backup first you don't have this promise\nthat vacuum will not remove somthing you need. (Oh I think checkpoints\nmight come into this as well but I'm not sure how)\n\nOr at least thats my understanding...\n\nSo if your base backup takes a while I would advise running vacuum\nafterwards. But then if your running autovacuum there is probably very\nlittle need to worry.\n\nPeter Childs\n", "msg_date": "Wed, 15 Oct 2008 15:22:39 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "2008/10/15 Aidan Van Dyk <[email protected]>:\n> * Ivan Voras <[email protected]> [081015 10:05]:\n>\n>> So, pg_start_backup() freezes the data at the time it's called but still\n>> data and xlog are changed, in a different way that's safe to backup? Why\n>> not run with pg_start_backup() always enabled?\n>\n> I think your missing the whole point of \"pg_start_backup()\".\n> pg_start_backup()\" is *part* of a full PITR/backup run. i.e. you use it\n> when you have an archive command working as well. 
It's *not* mean tto\n> just allow you to do a filesystem copy inside a running data directory.\n\nPossibly - that's why I'm sticking to this thread :) My context is\ndoing full filesystem-only copies/backups of the database (xlogs &\nall) - is pg_start_backup() applicable?\n", "msg_date": "Wed, 15 Oct 2008 16:58:46 +0200", "msg_from": "\"Ivan Voras\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Ivan Voras <[email protected]> writes:\n> Matthew Wakeling wrote:\n>> The files may change, but it doesn't matter, because there is enough\n>> information in the xlog to correct it all.\n\n> I'm thinking about these paragraphs in the documentation:\n\n>> You can, however, omit from the backup dump the files within the\n>> pg_xlog/ subdirectory of the cluster directory.\n\nThe assumption is that a PITR backup configuration provides a separate\npathway for the xlog files to get to the slave database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Oct 2008 11:51:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies " }, { "msg_contents": "On Wed, Oct 15, 2008 at 8:58 AM, Ivan Voras <[email protected]> wrote:\n> 2008/10/15 Aidan Van Dyk <[email protected]>:\n>> * Ivan Voras <[email protected]> [081015 10:05]:\n>>\n>>> So, pg_start_backup() freezes the data at the time it's called but still\n>>> data and xlog are changed, in a different way that's safe to backup? Why\n>>> not run with pg_start_backup() always enabled?\n>>\n>> I think your missing the whole point of \"pg_start_backup()\".\n>> pg_start_backup()\" is *part* of a full PITR/backup run. i.e. you use it\n>> when you have an archive command working as well. It's *not* mean tto\n>> just allow you to do a filesystem copy inside a running data directory.\n>\n> Possibly - that's why I'm sticking to this thread :) My context is\n> doing full filesystem-only copies/backups of the database (xlogs &\n> all) - is pg_start_backup() applicable?\n\nJust an FYI, there are some issues with using filesystems that support\nsnapshots, depending on the OS / filesystem. For instance, the LVM,\nwhich linux uses that allows snapshots, has issues with write barriers\nand also has a maximum throughput of about 300Meg/second. It's all a\ntrade-off, but I don't run my db files on LVM because of those two\nproblems.\n", "msg_date": "Wed, 15 Oct 2008 11:22:14 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "\nOn Wed, 2008-10-15 at 16:05 +0200, Ivan Voras wrote:\n\n> So, pg_start_backup() freezes the data at the time it's called but\n> still\n> data and xlog are changed, in a different way that's safe to backup?\n\nNo, that's not how it works. The pg_start_backup() records the point\nthat we must rollforward from. There is no freezing.\n\n> Why\n> not run with pg_start_backup() always enabled?\n\nIt's not a mode that can be enabled/disabled. Its a starting point.\n\nYou should run pg_start_backup() each time you run a backup, just like\nthe fine manual describes.\n\nCheck your backups...\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Fri, 17 Oct 2008 20:35:41 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup strategies" }, { "msg_contents": "Reuven M. 
Lerner wrote:\n> I'm running a medium-traffic Web site that has been running for a few \n> years, and which uses about four PostgreSQL databases on a regular \n> basis. I'm currently running 8.2, although I'm planning to upgrade to \n> 8.3 in the coming week or two, in part because of the problems that \n> I'm having. The databases consume a combined total of 35 GB. Like a \n> good boy, I've been backing the system up overnight, when we have less \n> traffic, since the site began to run. I use pg_dump to back up, \n> saving both schemas and data for a full restore in case of failure. \n> pg_dump typically executes from another machine on a local network; if \n> it would help to run pg_dump locally, then I'm certainly open to doing \n> that.\n>\nOne point to note with continuous archiving, if your four databases are \nin the same cluster, then restoring the archive will restore all your \ndatabases to the same point in time. So you cannot, as far as I am \naware, restore just one of the four databases.\n\nHoward.\n", "msg_date": "Tue, 21 Oct 2008 16:28:51 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Backup strategies" } ]
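For readers following the pg_start_backup()/pg_stop_backup() discussion above, a minimal sketch of taking a base backup could look like this (the data directory, database user and backup destination are assumptions, and archive_command must already be set up to copy completed WAL segments somewhere safe):

    psql -U postgres -c "SELECT pg_start_backup('weekly_base');"
    tar -czf /backups/base-$(date +%Y%m%d).tar.gz \
        -C /var/lib/pgsql data --exclude=data/pg_xlog
    psql -U postgres -c "SELECT pg_stop_backup();"

tar may warn that files changed while it was reading them; as the documentation quoted in this thread says, that is expected for a base backup and not an error. Restoring means unpacking the archive, supplying a recovery.conf with a restore_command that fetches the archived WAL, and letting the server replay it forward.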
[ { "msg_contents": "I have an interesting performance improvement need. As part of the automatic\ntest suite we run in our development environment, we re-initialize our test\ndatabase a number of times in order to ensure it is clean before running a\ntest. We currently do this by dropping the public schema and then recreating\nour tables (roughly 30 tables total). After that we do normal inserts, etc,\nbut never with very much data. My question is, what settings can we tweak to\nimprove performance is this scenario? Specifically, if there was a way to\ntell Postgres to keep all operations in memory, that would probably be\nideal.\n\n \n\nWe actually tried running Postgres off of a RAM disk and this did help a\nreasonable amount, but we're running under Windows and setting up the RAM\ndisk is a hassle and all of our developers would need to do it.\n\n \n\n \n\nAny tips would be appreciated.\n\n \n\n \n\n--Rainer\n\n\n\n\n\n\n\n\n\n\n\nI have an interesting performance improvement need. As part\nof the automatic test suite we run in our development environment, we\nre-initialize our test database a number of times in order to ensure it is\nclean before running a test. We currently do this by dropping the public schema\nand then recreating our tables (roughly 30 tables total). After that we do\nnormal inserts, etc, but never with very much data. My question is, what\nsettings can we tweak to improve performance is this scenario? Specifically, if\nthere was a way to tell Postgres to keep all operations in memory, that would\nprobably be ideal.\n \nWe actually tried running Postgres off of a RAM disk and\nthis did help a reasonable amount, but we’re running under Windows and\nsetting up the RAM disk is a hassle and all of our developers would need to do\nit.\n \n \nAny tips would be appreciated.\n \n \n--Rainer", "msg_date": "Wed, 15 Oct 2008 08:08:27 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "speeding up table creation" }, { "msg_contents": "Rainer Mager wrote:\n>\n> I have an interesting performance improvement need. As part of the \n> automatic test suite we run in our development environment, we \n> re-initialize our test database a number of times in order to ensure \n> it is clean before running a test. We currently do this by dropping \n> the public schema and then recreating our tables (roughly 30 tables \n> total). After that we do normal inserts, etc, but never with very much \n> data. My question is, what settings can we tweak to improve \n> performance is this scenario? Specifically, if there was a way to tell \n> Postgres to keep all operations in memory, that would probably be ideal.\n>\n\nWhat is the test part? In other words, do you start with a known initial \ndatabase with all empty tables then run the tests or is part of the test \nitself the creation of those tables? How much data is in the initial \ndatabase if the tables aren't empty. Creating 30 empty tables should \ntake a trivial amount of time. Also, are there other schemas than public?\n\nA couple ideas/comments:\n\nYou cannot keep the data in memory (that is, you can't disable writing \nto the disk). But since you don't care about data loss, you could turn \noff fsync in postgresql.conf. From a test perspective you should be fine \n- it will only be an issue in the event of a crash and then you can just \nrestart with a fresh load. 
Remember, however, that any performance \nbenchmarks won't translate to production use (of course they don't \ntranslate if you are using ramdisk anyway).\n\nNote that the system tables are updated whenever you add/delete/modify \ntables. Make sure they are being vacuumed or your performance will \nslowly degrade.\n\nMy approach is to create a database exactly as you want it to be at the \nstart of your tests (fully vacuumed and all) and then use it as a \ntemplate to be used to create the testdb each time. Then you can just \n(with appropriate connection options) run \"dropdb thetestdb\" followed by \n\"createdb --template thetestdbtemplate thetestdb\" which is substantially \nfaster than deleting and recreating tables - especially if they contain \nmuch data.\n\nCheers,\nSteve\n\n", "msg_date": "Tue, 14 Oct 2008 16:56:41 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up table creation" }, { "msg_contents": "On Tue, Oct 14, 2008 at 5:08 PM, Rainer Mager <[email protected]> wrote:\n> I have an interesting performance improvement need. As part of the automatic\n> test suite we run in our development environment, we re-initialize our test\n> database a number of times in order to ensure it is clean before running a\n> test. We currently do this by dropping the public schema and then recreating\n> our tables (roughly 30 tables total). After that we do normal inserts, etc,\n> but never with very much data. My question is, what settings can we tweak to\n> improve performance is this scenario? Specifically, if there was a way to\n> tell Postgres to keep all operations in memory, that would probably be\n> ideal.\n\nI'm not sure we've identified the problem just yet.\n\nDo you have the autovacuum daemon enabled? Are your system catalogs\nbloated with dead tuples? Could you approach this by using a template\ndatabase that was pre-setup for you and you could just \"create\ndatabase test with template test_template\" or something like that?\n\nAlso, if you're recreating a database you might need to analyze it\nfirst before you start firing queries.\n\nPostgreSQL buffers / caches what it can in shared_buffers. The OS\nalso caches data in kernel cache. Having more memory makes it much\nfaster. But if it writes, it needs to write. If the database server\nwhere this is happening doesn't have any important data in it, you\nmight see a boost from disabling fsync, but keep in mind a server that\nloses power / crashes can corrupt all your databases in the db server.\n\nIf you can fix the problem with any of those suggestions you might\nneed to throw hardware at the problem.\n", "msg_date": "Tue, 14 Oct 2008 18:05:57 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up table creation" }, { "msg_contents": "Steve Crawford wrote:\n\n> You cannot keep the data in memory (that is, you can't disable writing \n> to the disk). But since you don't care about data loss, you could turn \n> off fsync in postgresql.conf. From a test perspective you should be fine \n> - it will only be an issue in the event of a crash and then you can just \n> restart with a fresh load. 
Remember, however, that any performance \n> benchmarks won't translate to production use (of course they don't \n> translate if you are using ramdisk anyway).\n\nAnother thing that may really help, if you're not already doing it, is \nto do all your schema creation inside a single transaction - at least \nassuming you can't use the template database approach.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 15 Oct 2008 15:01:16 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up table creation" } ]
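A sketch of the template-database approach described above, with made-up database and file names: build the clean schema once, then clone it before each test run instead of dropping and re-creating the tables:

    # one-time setup of the template
    createdb testdb_template
    psql testdb_template -f schema.sql
    psql testdb_template -c 'VACUUM ANALYZE;'

    # before each test run
    dropdb testdb
    createdb --template=testdb_template testdb

The clone is a file-level copy, so it is normally much faster than replaying the DDL, but CREATE DATABASE refuses to copy a template that still has open connections, so make sure no test session is left connected to it.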
[ { "msg_contents": "Good day,\n\nSo I've been running 8.3 for a few months now and things seem good.\n\nI also note there are some bug fixes and you are up to 8.3.4 now, but\nreading it I don't see that I'm being affected by anything, but please\ntell me if i should plan upgrades to 8.3.4..\n\nThe real issue is my index growth and my requirement for weekly\nre-indexing (which blocks and therefore is more or less a manual\nprocess in a live production environment (fail over, changing vips\netc).\n\nAfter Reindex\n\nMy data\n3.1G /data/cls\n\nMy Indexes\n1.5G /data/clsindex\n\nAfter 2 weeks\n\nMy data\n3.4G /data/cls\n\nMy indexes\n6.8G /data/clsindex\n\nThis is 2 weeks. and throughout those 2 weeks I can see performance\ndegrading. In fact I just bumped the memory footprint of my server to\n32gigs from 8gigs as my entire PostgreSQL DB is now 15gb, this has\nhelped tremendously, everything ends up in memory and there is almost\nzero read access from the DB (that's great), however as my DB grows I\ncan't expect to manage it by adding Terabytes of memory to the system,\nso something is not being done efficiently.\n\nWe are testing with fill factor, but that just seems to allocate space\nand doesn't really decrease my index size on disk. It does however\nseem to help performance a bit, but not a ton.\n\nWhere should I start looking, what type of information should I\nprovide in order for me to help you, help me?\n\nThis is a transactional DB with over 800K updates happening each day.\ntons of updates and deletes. autovac is on and I do a analyze each\nnight after the dump\n\nThanks\nTory\n", "msg_date": "Fri, 17 Oct 2008 09:05:26 -0700", "msg_from": "\"Tory M Blue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index bloat, reindex weekly, suggestions etc?" }, { "msg_contents": ">>> \"Tory M Blue\" <[email protected]> wrote: \n \n> tell me if i should plan upgrades to 8.3.4..\n \nIt's a good idea. It should be painless -- drop in and restart.\n \n> After Reindex\n> My Indexes\n> 1.5G /data/clsindex\n> \n> After 2 weeks\n> My indexes\n> 6.8G /data/clsindex\n> \n> This is 2 weeks. and throughout those 2 weeks I can see performance\n> degrading. In fact I just bumped the memory footprint of my server\nto\n> 32gigs from 8gigs\n \nMake sure that effective_cache_size reflects this.\n \n> This is a transactional DB with over 800K updates happening each\nday.\n> tons of updates and deletes. autovac is on and I do a analyze each\n> night after the dump\n \nMake sure that you run as VACUUM ANALYZE VERBOSE and look at the last\nfew lines; you might need to increase your free space manager\nallocations.\n \n-Kevin\n", "msg_date": "Fri, 17 Oct 2008 11:30:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" }, { "msg_contents": "On Fri, Oct 17, 2008 at 9:30 AM, Kevin Grittner\n<[email protected]> wrote:\n>>>> \"Tory M Blue\" <[email protected]> wrote:\n>\n>> tell me if i should plan upgrades to 8.3.4..\n>\n> It's a good idea. 
It should be painless -- drop in and restart.\n\n>\n> Make sure that effective_cache_size reflects this.\n>\n\n\nThanks Kevin\n\nDETAIL: A total of 501440 page slots are in use (including overhead).\n501440 page slots are required to track all free space.\nCurrent limits are: 1087500 page slots, 430 relations, using 6401 kB.\nVACUUM\n\nthat looks fine\n\nAnd effective_cache, ya I didn't change that as I wanted to see how\nthe results were with just adding memory and in all honesty it was\nnight and day without changing that param, what will modifying that\nparam accomplish?\n\neffective_cache_size = 7GB (so no not changed and our performance is\n50x better with just the memory upgrade) BTW running Linux..\nshared_buffers = 600MB (have 300 connections specified)\n\nThanks\nTory\n", "msg_date": "Fri, 17 Oct 2008 09:47:49 -0700", "msg_from": "\"Tory M Blue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" }, { "msg_contents": "On Fri, Oct 17, 2008 at 10:47 AM, Tory M Blue <[email protected]> wrote:\n> DETAIL: A total of 501440 page slots are in use (including overhead).\n> 501440 page slots are required to track all free space.\n> Current limits are: 1087500 page slots, 430 relations, using 6401 kB.\n> VACUUM\n>\n> that looks fine\n\nThat's still a lot of dead space. You might do better with more\naggresive autovacuuming settings.\n", "msg_date": "Fri, 17 Oct 2008 11:02:35 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" }, { "msg_contents": "On Fri, Oct 17, 2008 at 10:02 AM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Oct 17, 2008 at 10:47 AM, Tory M Blue <[email protected]> wrote:\n>> DETAIL: A total of 501440 page slots are in use (including overhead).\n>> 501440 page slots are required to track all free space.\n>> Current limits are: 1087500 page slots, 430 relations, using 6401 kB.\n>> VACUUM\n>>\n>> that looks fine\n>\n> That's still a lot of dead space. You might do better with more\n> aggresive autovacuuming settings.\n\n\nHmmm, autovac is on by default and it appears I've left most of the\ndefault settings in play, but since I don't get any fsm warnings,\nthings seem to be running cleanly.\n\nautovacuum_max_workers = 5 # max number of autovacuum subprocesses\n\nautovacuum_vacuum_threshold = 1000 # min number of row updates before\n # vacuum\nautovacuum_analyze_threshold = 2000 # min number of row updates before\n # analyze\n\nin fact there is a vac running right now :)\n\nThanks Scott!\n\nTory\n", "msg_date": "Fri, 17 Oct 2008 10:15:12 -0700", "msg_from": "\"Tory M Blue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" }, { "msg_contents": ">>> \"Tory M Blue\" <[email protected]> wrote: \n> DETAIL: A total of 501440 page slots are in use (including\noverhead).\n> 501440 page slots are required to track all free space.\n> Current limits are: 1087500 page slots, 430 relations, using 6401\nkB.\n \nAs already pointed out, that's a lot of free space. You don't use\nVACUUM FULL on this database, do you? That would keep the data\nrelatively tight but seriously bloat indexes, which is consistent with\nyour symptoms. 
VACUUM FULL should not be used routinely, it is\nbasically for recovery from serious heap bloat when you don't have\nspace for another copy of the data, and it should usually be followed\nby a REINDEX to clean up the index bloat it causes.\n \n> And effective_cache, ya I didn't change that as I wanted to see how\n> the results were with just adding memory and in all honesty it was\n> night and day without changing that param, what will modifying that\n> param accomplish?\n \nIt may allow PostgreSQL to pick more efficient plans for some of your\nqueries. Be honest with it about the available resources and give it\nthe chance. In particular, you may see fewer queries resorting to\nsequential scans of entire tables.\n \n-Kevin\n", "msg_date": "Fri, 17 Oct 2008 12:35:32 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" },
{ "msg_contents": "On Fri, Oct 17, 2008 at 10:35 AM, Kevin Grittner\n<[email protected]> wrote:\n> As already pointed out, that's a lot of free space. You don't use\n> VACUUM FULL on this database, do you? That would keep the data\n> relatively tight but seriously bloat indexes, which is consistent with\n> your symptoms. VACUUM FULL should not be used routinely, it is\n> basically for recovery from serious heap bloat when you don't have\n> space for another copy of the data, and it should usually be followed\n> by a REINDEX to clean up the index bloat it causes.\n\n\nInteresting, I do run:\n\n \"# vacuum and analyze each db before dumping\npsql $DB -c 'vacuum analyze verbose'\"\"\n\n\nevery night before I dump, is that causing some issues that I'm not aware of?\n\nTory\n", "msg_date": "Fri, 17 Oct 2008 10:48:07 -0700", "msg_from": "\"Tory M Blue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" },
{ "msg_contents": ">>> \"Tory M Blue\" <[email protected]> wrote: \n \n> psql $DB -c 'vacuum analyze verbose'\"\"\n \nThat's fine; you're not using the FULL option.\n \n> every night before I dump\n \nAnother thing that could cause bloat is a long-running transaction. \nCheck for that. If your database is being updated during a pg_dump or\npg_dumpall, that would count as a long-running transaction. You might\npossibly want to look at going to the PITR backup technique for your\nregular backups.\n \n-Kevin\n", "msg_date": "Fri, 17 Oct 2008 12:53:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" },
{ "msg_contents": "2008/10/17 Tory M Blue <[email protected]>\n\n>\n> The real issue is my index growth and my requirement for weekly\n> re-indexing (which blocks and therefore is more or less a manual\n> process in a live production environment (fail over, changing vips\n> etc).\n>\n\nBTW: Can't you simply recreate indexes online? Since postgresql accepts\nmultiple indexes of same definition, this may look like:\n1) create index concurrently index_alt\n2) analyze index_alt\n3) drop index_orig\nBoth index_alt and index_orig having same definition\n", "msg_date": "Sat, 18 Oct 2008 10:00:19 +0300", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" },
{ "msg_contents": "On Fri, Oct 17, 2008 at 11:00 PM, Віталій Тимчишин <[email protected]> wrote:\n>\n>\n> 2008/10/17 Tory M Blue <[email protected]>\n>>\n>> The real issue is my index growth and my requirement for weekly\n>> re-indexing (which blocks and therefore is more or less a manual\n>> process in a live production environment (fail over, changing vips\n>> etc).\n>\n> BTW: Can't you simply recreate indexes online? Since postgresql accepts\n> multiple indexes of same definition, this may look like:\n> 1) create index concurrently index_alt\n> 2) analyze index_alt\n> 3) drop index_orig\n> Both index_alt and index_orig having same definition\n\nSorry for the late response. After some testing, this is in fact a\nsolution that will and does work. It's really much simpler than doing\nswitch overs and taking systems down to re index.\n\nThanks for the clue stick over the head, I know we looked into this in\nthe past and it was a non starter, but it seems to be working just\nfine!!\n\nThanks\n\nTory.\n", "msg_date": "Mon, 3 Nov 2008 17:30:51 -0800", "msg_from": "\"Tory M Blue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index bloat, reindex weekly, suggestions etc?" } ]
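Spelled out, the online rebuild that worked for the original poster looks roughly like this (index, table and column names are placeholders):

    CREATE INDEX CONCURRENTLY myindex_new ON mytable (mycol);
    ANALYZE mytable;
    DROP INDEX myindex;                          -- the bloated original
    ALTER INDEX myindex_new RENAME TO myindex;   -- optional: keep the old name

Two caveats for 8.3: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and it cannot directly replace the index behind a PRIMARY KEY or UNIQUE constraint, so those still need a short maintenance window.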
[ { "msg_contents": "Hello friends ...\n\nI'm evaluating the performance of algorithms for optimization of queries.\nI am comparing results between the algorithm of Dynamic Programming and an\nimplementation of Kruskal's algorithm. When submitting a query that makes\nreference to only 2 tables of my base, logically the same \"Query Plan\" is\nshown. But the \"Total runtime\" displayed by the command \"Explain-Analyze\"\npresents a variation of time very high:\n\nDynamic Programming Total runtime: 1204.220 ms\n\nKruskal Total runtime: 3744.879 ms\n\nNo change of data (insert, delete, update) in the tables was made during\nthe tests. The same query was submitted several times (with Kruskal and\nDynamic Programming algorithms) and the variation of results persists.\n\nThe \"explain analyze\" only reports the time to run *execute* the query.\nWith the same \"Query Plan\", does not understand why this variation occurs.\n\nIn annex the Query Plans\n\nIf someone can help me.\n\nThank's for attention.\n\nTarcizio Bini", "msg_date": "Sat, 18 Oct 2008 16:14:09 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Explain Analyze - Total runtime very differentes" }, { "msg_contents": "[email protected] wrote:\n> Hello friends ...\n> \n> I'm evaluating the performance of algorithms for optimization of queries.\n> I am comparing results between the algorithm of Dynamic Programming and an\n> implementation of Kruskal's algorithm. When submitting a query that makes\n> reference to only 2 tables of my base, logically the same \"Query Plan\" is\n> shown. But the \"Total runtime\" displayed by the command \"Explain-Analyze\"\n> presents a variation of time very high:\n> \n> Dynamic Programming Total runtime: 1204.220 ms\n> \n> Kruskal Total runtime: 3744.879 ms\n> \n> No change of data (insert, delete, update) in the tables was made during\n> the tests. The same query was submitted several times (with Kruskal and\n> Dynamic Programming algorithms) and the variation of results persists.\n> \n> The \"explain analyze\" only reports the time to run *execute* the query.\n> With the same \"Query Plan\", does not understand why this variation occurs.\n> \n> In annex the Query Plans\n\nsure it it not something as simple as a caching effect - ie you run the \nslow variant first and pg and/or the OS buffered data and the repeated \nexecution just got a benefit from that ?\n\nTry running all variations a few dozend times both in cached and \nuncached state and you should see the difference getting leveled out.\n\n\nStefan\n", "msg_date": "Sun, 19 Oct 2008 10:55:50 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze - Total runtime very differentes" } ]
[ { "msg_contents": "Hi!!!\n\nI'm evaluating the performance of algorithms for optimization of queries.\nI am comparing results between the algorithm of Dynamic Programming and\nGEQO algorithm.\n\nI use the base and queries provided by the benchmark OSDL DBT3 with scale\nfactor = 1. The queries were submitted randomly, I did the cleaning of\nmemory cache between tests (drop_caches) and restarting the database\npostgres.\n\nStay surprise with the running time of the querie 11 of DBT3. These\nresults are an average of the \"Total Runtime\" after 20 executions:\n\nDynamic Programming Total runtime: 83.122,003 ms\nRun-time optimizer algorithm (Dynamic Programming): 0,453 ms\n\nGEQO Total runtime: 18.266,819 ms\nRun-time optimizer algorithm (GEQO): 0,440 ms\n\nThe GEQO optimizer Run-time is a little faster, not affecting the final\nresult. In theory the dynamic programming algorithm to choose the best\nplan, resulting in less time to run the query.\n\nWho knows what is happening?\n\nIn annex the Query Plans and Query 11 of DBT3.\n\nThank's for attention.\n\nTarcizio Bini.", "msg_date": "Tue, 21 Oct 2008 12:40:19 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "\"Mysterious\" - Dynamic Programming X GEQO" } ]
[ { "msg_contents": "Hello,\n\nI have to choose a dedicated server to host a big 8.3 database.\nThe global size of the database (indexes included) will grow by 40 Go every \nyear (40 millions of lines/year)\nReal data (indexes excluded) will be around 5-7 Go/year.\nI need to store 4 years of activity.\nVery few simultaneous users (~4).\n100000 rows added every day via csv imports.\nThe application will be a reporting application.\nMain statements: aggregation of 10000 to 10millions of line.\nVast majority will hit 300000 lines (90% of the connected users), few will \nhit more than 10 millions (10% of the connected users, there may never be 2 \nsimultaneous users of this category).\n20sec - 30sec for such a statement is acceptable.\n\nIt's quite easy to choose CPU (xeon quad core 2.66, maybe dual xeon), RAM \n(8-12Go) but I still hesitate for hard disks.\n\nthese are options possible with the hosters I usually work with:\n\nOption 0)\nRAID1 750Go SATA2\n\nOption 1)\nRAID1 750Go SATA2 + 500Go USB disk\n\nOption 2)\nRAID1 SAS 15000rpm 147 Go hard disk + 500Go USB\n\nOption 2+)\nRAID1 SATA2 SSD intel X25-M 80Go + 500Go USB\n\nOption 3)\nRAID5 SATA2 5x750Go\n\nOption 4)\nRAID10 SAS 15000rpm 4x146 Go\n\nOption 5)\nRAID10 SATA2 4x250 Go\n\nAny other better option that I could ask for ?\n\nWhat would be the best choice in case of an external USB drive : using it \nfor indexes or x_log ?\n\nAnd what is the best option to backup such a database ?\nNightly dump ?\ndata folder zip ?\n\nthanks for your help. \n\n\n", "msg_date": "Thu, 23 Oct 2008 17:10:08 +0200", "msg_from": "\"Lionel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware HD choice..." }, { "msg_contents": "On Thu, Oct 23, 2008 at 9:10 AM, Lionel <[email protected]> wrote:\n> Hello,\n>\n> I have to choose a dedicated server to host a big 8.3 database.\n> The global size of the database (indexes included) will grow by 40 Go every\n> year (40 millions of lines/year)\n> Real data (indexes excluded) will be around 5-7 Go/year.\n> I need to store 4 years of activity.\n> Very few simultaneous users (~4).\n> 100000 rows added every day via csv imports.\n> The application will be a reporting application.\n> Main statements: aggregation of 10000 to 10millions of line.\n> Vast majority will hit 300000 lines (90% of the connected users), few will\n> hit more than 10 millions (10% of the connected users, there may never be 2\n> simultaneous users of this category).\n> 20sec - 30sec for such a statement is acceptable.\n\nIn that case, throw memory at the problem first, then lots of hard\ndrives on a good RAID controller.\n\n> It's quite easy to choose CPU (xeon quad core 2.66, maybe dual xeon), RAM\n> (8-12Go) but I still hesitate for hard disks.\n\nI've had better luck with opterons than Xeons, but they're both pretty\ngood nowadays. I'd look at at least 16 Gigs ram, if you can afford it\nget 32.\n\n> these are options possible with the hosters I usually work with:\n>\n> Option 0)\n> RAID1 750Go SATA2\n>\n> Option 1)\n> RAID1 750Go SATA2 + 500Go USB disk\n>\n> Option 2)\n> RAID1 SAS 15000rpm 147 Go hard disk + 500Go USB\n>\n> Option 2+)\n> RAID1 SATA2 SSD intel X25-M 80Go + 500Go USB\n>\n> Option 3)\n> RAID5 SATA2 5x750Go\n>\n> Option 4)\n> RAID10 SAS 15000rpm 4x146 Go\n>\n> Option 5)\n> RAID10 SATA2 4x250 Go\n>\n> Any other better option that I could ask for ?\n\nYes, more drives. 4 drives in a RAID10 is a good start. 
If you could\nget 8 or 12 in one that's even better.\n\n>\n> What would be the best choice in case of an external USB drive : using it\n> for indexes or x_log ?\n\nNot to use one. Generally USB transfer speeds and external USB drives\naren't reliable or fast enough for serious server use. I say this\nwith two very nice external USB drives sitting next to me. They store\nmy videos, not customer data.\n\n> And what is the best option to backup such a database ?\n\nPITR\n\n-- When fascism comes to America, it will be draped in a flag and\ncarrying a cross - Sinclair Lewis\n", "msg_date": "Thu, 23 Oct 2008 10:38:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware HD choice..." }, { "msg_contents": "On Thu, Oct 23, 2008 at 10:38 AM, Scott Marlowe <[email protected]> wrote:\n>>\n>> Any other better option that I could ask for ?\n>\n> Yes, more drives. 4 drives in a RAID10 is a good start. If you could\n> get 8 or 12 in one that's even better.\n>\n\nNote that for transactional databases SAS drives are usually\nnoticeably better, but for reporting databases, SATA drives are\ngenerally fine, with 70-80% the sustained transfer rate at less than\nhalf the cost per megabyte. I'd recommend 8 SATA drives over 4 SAS\ndrives for a reporting database. You'll spend about the same on twice\nthe number of drives but you'll get much more storage, which is often\nuseful when you need to work with large datasets.\n", "msg_date": "Thu, 23 Oct 2008 11:16:44 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware HD choice..." }, { "msg_contents": "Lionel wrote:\n> Hello,\n> \n> I have to choose a dedicated server to host a big 8.3 database.\n> The global size of the database (indexes included) will grow by 40 Go every \n> year (40 millions of lines/year)\n> Real data (indexes excluded) will be around 5-7 Go/year.\n> I need to store 4 years of activity.\n> Very few simultaneous users (~4).\n> 100000 rows added every day via csv imports.\n> The application will be a reporting application.\n> Main statements: aggregation of 10000 to 10millions of line.\n> Vast majority will hit 300000 lines (90% of the connected users), few will \n> hit more than 10 millions (10% of the connected users, there may never be 2 \n> simultaneous users of this category).\n> 20sec - 30sec for such a statement is acceptable.\n> \n> It's quite easy to choose CPU (xeon quad core 2.66, maybe dual xeon), RAM \n> (8-12Go) but I still hesitate for hard disks.\n\nIf the number of users is low or your queries are complex, a faster CPU\nwith fewer cores will serve you better because PostgreSQL cannot split a\nsingle query across multiple CPUs/cores. It will also speed up your CSV\nimports (be sure to do them as a single transaction or with COPY). If\nyou can bear the heat (and the electricity bill) you can get 3.4 GHz\nXeons. Be sure to use a 64-bit OS and lots of memory (16 GB+).\n\n> Option 5)\n> RAID10 SATA2 4x250 Go\n\nGood enough.\n\n> Any other better option that I could ask for ?\n\nYes, 8x250 :) You need as much drives as possible - not for capacity or\nreliability but for speed. Use RAID 5 or RAID 6 only if the database\nisn't going to be updated often (for example, if you add records to the\ndatabase only several times a day, it's ok).\n\n> What would be the best choice in case of an external USB drive : using it \n> for indexes or x_log ?\n\nSkip any USB-connected drives for production environments. It's not\nworth it. 
If new data isn't going to be added to the database\ncontinuously, you don't need a separate x_log. Better use the drive as a\n(hot, if possible) spare for RAID in case one of the RAID drives\nmalfunctions.", "msg_date": "Thu, 23 Oct 2008 22:47:45 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware HD choice..." }, { "msg_contents": "If you are doing batch inserts of data, and want to have reporting queries\nconcurrently running, make sure you have the pg_xlogs on a different disk\nthan the data/indexes. 2 drives RAID 1 for OS + xlogs works great (and\nthese can be SAS if you choose, have a separate partition -- ext2 if it is\nlinux -- for the xlogs. Then you can easily go with storage capacity and\nSATA for the main reporting portion. You just don't want the inserts in\nbatches to slow the whole thing to a crawl due to xlog writes on the same\ndrive array as the reporting.\nHowever, if you can only get a few disks, it is a lot harder to choose\nbetween one large array and two of them split without experimenting with\nboth on real data and queries. It is a quick and easy performance win if\nyou have 6+ disks and do enough writes.\n\nAlso, if you intend to have lots of data organized by a time field, and\nexpect to do the reporting/aggregation queries on subsets of that data\nbounded by time, partitioning by time can have huge benefits. Partition by\nmonth, for example, and sequential scans will only flow to the months of\ninterest if the queries have the right lmits on the date in the where\nclause.\n\nPartitioning WILL take more development and tuning time, so don't do it\nunless you know you need it... though if the reporting is mostly restricted\nto time windows, the impact it has on improving runtimes of aggregation\nqueries is immense. However, partitioning won't help at all until you have\nenough data to justify it.\n\nOn Thu, Oct 23, 2008 at 10:16 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Oct 23, 2008 at 10:38 AM, Scott Marlowe <[email protected]>\n> wrote:\n> >>\n> >> Any other better option that I could ask for ?\n> >\n> > Yes, more drives. 4 drives in a RAID10 is a good start. If you could\n> > get 8 or 12 in one that's even better.\n> >\n>\n> Note that for transactional databases SAS drives are usually\n> noticeably better, but for reporting databases, SATA drives are\n> generally fine, with 70-80% the sustained transfer rate at less than\n> half the cost per megabyte. I'd recommend 8 SATA drives over 4 SAS\n> drives for a reporting database. You'll spend about the same on twice\n> the number of drives but you'll get much more storage, which is often\n> useful when you need to work with large datasets.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 23 Oct 2008 19:48:34 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware HD choice..." }, { "msg_contents": "On Thu, Oct 23, 2008 at 8:48 PM, Scott Carey <[email protected]> wrote:\n> If you are doing batch inserts of data, and want to have reporting queries\n> concurrently running, make sure you have the pg_xlogs on a different disk\n> than the data/indexes. 2 drives RAID 1 for OS + xlogs works great (and\n\n From the OPs original post I'd guess that one big RAID 10 would serve\nhim best, but yeah, you need to test to really see.\n\n> Also, if you intend to have lots of data organized by a time field, and\n> expect to do the reporting/aggregation queries on subsets of that data\n> bounded by time, partitioning by time can have huge benefits. Partition by\n> month, for example, and sequential scans will only flow to the months of\n> interest if the queries have the right lmits on the date in the where\n> clause.\n\nI second this. Partitioning in time in past reporting databases\nresulted in huge performance improvements for select queries.\n", "msg_date": "Thu, 23 Oct 2008 23:41:49 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware HD choice..."
}, { "msg_contents": "On Thu, 23 Oct 2008 23:41:49 -0600\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> On Thu, Oct 23, 2008 at 8:48 PM, Scott Carey <[email protected]> wrote:\n> > If you are doing batch inserts of data, and want to have reporting queries\n> > concurrently running, make sure you have the pg_xlogs on a different disk\n> > than the data/indexes. 2 drives RAID 1 for OS + xlogs works great (and\n> \n> From the OPs original post I'd guess that one big RAID 10 would serve\n> him best, but yeah, you need to test to really see.\n\nHas anybody a performance comparison for postgresql between the various RAID\nlevels ?\n\nmany thanks in advance\n\nregards\n\n", "msg_date": "Fri, 24 Oct 2008 09:26:45 +0200", "msg_from": "Lutz Steinborn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware HD choice..." }, { "msg_contents": "\n\"Scott Marlowe\" wrote:\n> I second this. Partitioning in time in past reporting databases\n> resulted in huge performance improvements for select queries.\n\nMost statements will load data from a single year, but multiple monthes.\nI have a integer field containing the year and will use it for \npartitionning.\nIt will also help a lot to remove one year after 4 years of activity.\n\n\nActually this same database is used with 2 millions of lines per year \n(instead of 30). It is loaded with 3 years and runs quite fast \nunpartitionned on a 4 years old single SCSI HD with 2Go of RAM, single core \npentium4.\nIt runs a LOT faster on a quad xeon 2.83GHz with 8Go of ram and SATA HD, \nwhich is quite common now for dedicated servers.\n\nI tried partitionning on it: it showed no performance gain for such a small \nsize, but it is an evidence that it will help with 30 millions of \nlines/year.\n\nOK, thanks to all your recommandations, I will ask hosters for a RAID10 \n4x250go SATA.\n\n\n", "msg_date": "Fri, 24 Oct 2008 12:08:53 +0200", "msg_from": "\"Lionel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware HD choice..." } ]
[ { "msg_contents": "Hello,\n\nI have a question regarding the maintenance_work_mem and the index creation. \n\nI have a dedicated postgresql server with 16GB of RAM. \nThe shared_buffers is 4GB and the maintenance_work_mem is 2GB. \n\nI have a table with 28 mil records and 2 one-column indexes: \n1. First index is for an integer column - size on disk 606MB\n2. The second index is for a varchar column (15 characters usually) - size on disc 851MB.\n\nWhen I create the first index (for the integer column) it fits in the memory and it takes 1 minute to be created and is using around 1.7GB of the maintenance work memory...\n\nThe second index is swapping on pgsql_tmp and it takes 26 minutes to be created so it looks like the 2GB of maintenance work memory is not enough to create a 851MB index...\n\nSo my question is the 2GB of maintenance work memory would be enough only for indexes 600MB or smaller on disk? It looks like for creating an index is required a maintenance work memory 3 times larger than the size of the index on disk or I am missing other parameters?\n\nThanks a lot,\nIoana\n \n\n\n\n\n\n __________________________________________________________________\nYahoo! Canada Toolbar: Search from anywhere on the web, and bookmark your favourite sites. Download it now at\nhttp://ca.toolbar.yahoo.com.\n\n", "msg_date": "Mon, 27 Oct 2008 07:30:07 -0700 (PDT)", "msg_from": "Ioana Danes <[email protected]>", "msg_from_op": true, "msg_subject": "maintenance_work_mem and create index" } ]
[ { "msg_contents": "Hi,\n\nI've got an OLTP application which occasionally suffers from slow\ncommit time. The process in question does something like this:\n\n1. Do work\n2. begin transaction\n3. insert record\n4. commit transaction\n5. Do more work\n6. begin transaction\n7. update record\n8. commit transaction\n9. Do more work\n\nThe vast majority of the time, everything runs very quickly. The\nmedian processing time for the whole thing is 7ms.\n\nHowever, occasionally, processing time will jump up significantly -\nthe average processing time is around 20ms with the maximum processing\ntime taking 2-4 seconds for a small percentage of transactions. Ouch!\n\nTurning on statement logging and analyzing the logs of the application\nitself shows that step #4 is the culprit of the vast majority of the\nslow transactions.\n\nSoftware: CentOS 4.7, PostgreSQL 8.3.4, Slony-I 1.2.15 (the database\nin question is replicated using slony)\n\nHardware: 2x Xeon 5130, 4GB RAM, 6-disk RAID10 15k RPM, BBU on the controller\n\nNotable configuration changes:\nshared_buffers = 800MB\ntemp_buffers = 200MB\nwork_mem = 16M\nmaintenance_work_mem = 800MB\nvacuum_cost_delay = 10\ncheckpoint_segments = 10\neffective_cache_size = 2500MB\n\nI found this post[1] from a while back which was informative:\n\nBoth situations affect me in that I have Slony which I believe\nexecutes triggers upon commit, and looking at the disk IO stats, there\nis an elevated level of IO activity during this time, but it doesn't\nappear to be heavy enough to cause the type of delays I am seeing.\n\nReading this page[2] indicates that I may want to increase my\ncheckpoint_segments, checkpoint_timeout and bgwriter settings, but\nlooking at pg_stat_bgwriter seems to indicate that my current settings\nare probably OK?\n\n# select * from pg_stat_bgwriter;\n checkpoints_timed | checkpoints_req | buffers_checkpoint |\nbuffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n 3834 | 105 | 3091905 |\n25876 | 110 | 2247576 | 2889873\n\nAny suggestions on how to proceed and debug the problem from here?\n\nMy only other guess is that there is some sort of locking issues going\non which is slowing things down and that it may also be slony related,\nas I also see a high number of slony related queries taking longer\nthan 1 second...\n\nThanks\n\n-Dave\n\n[1] http://archives.postgresql.org/pgsql-performance/2008-01/msg00005.php\n[2] http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n", "msg_date": "Mon, 27 Oct 2008 17:23:37 -0700", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": true, "msg_subject": "Occasional Slow Commit" }, { "msg_contents": "On Mon, Oct 27, 2008 at 8:23 PM, David Rees <[email protected]> wrote:\n> Hi,\n>\n> I've got an OLTP application which occasionally suffers from slow\n> commit time. The process in question does something like this:\n>\n> 1. Do work\n> 2. begin transaction\n> 3. insert record\n> 4. commit transaction\n> 5. Do more work\n> 6. begin transaction\n> 7. update record\n> 8. commit transaction\n> 9. Do more work\n>\n> The vast majority of the time, everything runs very quickly. The\n> median processing time for the whole thing is 7ms.\n>\n> However, occasionally, processing time will jump up significantly -\n> the average processing time is around 20ms with the maximum processing\n> time taking 2-4 seconds for a small percentage of transactions. 
Ouch!\n>\n> Turning on statement logging and analyzing the logs of the application\n> itself shows that step #4 is the culprit of the vast majority of the\n> slow transactions.\n>\n> Software: CentOS 4.7, PostgreSQL 8.3.4, Slony-I 1.2.15 (the database\n> in question is replicated using slony)\n>\n> Hardware: 2x Xeon 5130, 4GB RAM, 6-disk RAID10 15k RPM, BBU on the controller\n>\n> Notable configuration changes:\n> shared_buffers = 800MB\n> temp_buffers = 200MB\n> work_mem = 16M\n> maintenance_work_mem = 800MB\n> vacuum_cost_delay = 10\n> checkpoint_segments = 10\n> effective_cache_size = 2500MB\n>\n> I found this post[1] from a while back which was informative:\n>\n> Both situations affect me in that I have Slony which I believe\n> executes triggers upon commit, and looking at the disk IO stats, there\n> is an elevated level of IO activity during this time, but it doesn't\n> appear to be heavy enough to cause the type of delays I am seeing.\n>\n> Reading this page[2] indicates that I may want to increase my\n> checkpoint_segments, checkpoint_timeout and bgwriter settings, but\n> looking at pg_stat_bgwriter seems to indicate that my current settings\n> are probably OK?\n>\n> # select * from pg_stat_bgwriter;\n> checkpoints_timed | checkpoints_req | buffers_checkpoint |\n> buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n> 3834 | 105 | 3091905 |\n> 25876 | 110 | 2247576 | 2889873\n>\n> Any suggestions on how to proceed and debug the problem from here?\n>\n> My only other guess is that there is some sort of locking issues going\n> on which is slowing things down and that it may also be slony related,\n> as I also see a high number of slony related queries taking longer\n> than 1 second...\n\nI bet your problem is disk syncing. Your xlogs are on the data volume\nso any type of burst activity can push back commit times. If this is\nthe case, you have basically three solutions to this problem:\n*) buy more disks (i's start with pushing the xlogs out to dedicated volume)\n*) disable fsync (very unsafe) or synchronous commit (somewhat less unsafe)\n*) checkpoint/bgwriter tuning: can provide incremental gains. This is\nnot magic...at best you can smooth out bursty checkpoints. If your\nproblems are really serious (yours don't seem to be), you have to look\nat the previous options.\n\nHave you temporarily disabling slony to see if the problem goes away?\n(My guess is it's not slony).\n\nmerlin\n", "msg_date": "Tue, 28 Oct 2008 08:45:52 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Mon, Oct 27, 2008 at 05:23:37PM -0700, David Rees wrote:\n\n> However, occasionally, processing time will jump up significantly -\n> the average processing time is around 20ms with the maximum processing\n> time taking 2-4 seconds for a small percentage of transactions. Ouch!\n> \n> Turning on statement logging and analyzing the logs of the application\n> itself shows that step #4 is the culprit of the vast majority of the\n> slow transactions.\n\nMy bet is that you're waiting on checkpoints. 
Given that you're on\n8.3, start fiddling with the checkpoint_completion_target parameter.\n0.7 might help.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Tue, 28 Oct 2008 08:47:16 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Mon, 27 Oct 2008, David Rees wrote:\n\n> Software: CentOS 4.7, PostgreSQL 8.3.4, Slony-I 1.2.15 (the database\n> in question is replicated using slony)\n> Hardware: 2x Xeon 5130, 4GB RAM, 6-disk RAID10 15k RPM, BBU on the controller\n\nThe CentOS 4.7 kernel will happily buffer about 1.6GB of writes with that \nmuch RAM, and the whole thing can get slammed onto disk during the final \nfsync portion of the checkpoint. What you should do first is confirm \nwhether or not the slow commits line up with the end of the checkpoint, \nwhich is easy to see if you turn on log_checkpoints. That gives you \ntimings for the write and fsync phases of the checkpoint which can also be \ninformative.\n\n> Reading this page[2] indicates that I may want to increase my\n> checkpoint_segments, checkpoint_timeout and bgwriter settings, but\n> looking at pg_stat_bgwriter seems to indicate that my current settings\n> are probably OK?\n>\n> # select * from pg_stat_bgwriter;\n> checkpoints_timed | checkpoints_req | buffers_checkpoint |\n> 3834 | 105 | 3,091,905 |\n> buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n> 25876 | 110 | 2,247,576 | 2,889,873\n\nI reformatted the above to show what's happening a bit better. Most of \nyour checkpoints are the timed ones, which are unlikely to cause \ninterference from a slow commit period (the writes are spread out over 5 \nminutes in those cases). It's quite possible the slow periods are coming \nonly from the occasional requested checkpoints, which normally show up \nbecause checkpoint_segments is too low and you chew through segments too \nfast. If you problems line up with checkpoint time, you would likely gain \nsome benefit from increasing checkpoint_segments to spread out the \ncheckpoint writes further; the 10 you're using now is on the low side for \nyour hardware.\n\nIf the problems don't go away after that, you may be suffering from \nexcessive Linux kernel buffering instead. I've got a blog entry showing \nhow I tracked down a similar long pause on a Linux server at \nhttp://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html you \nmay find helpful for determining if your issue is this one (which is \npretty common on RHEL systems having relatively large amounts of RAM) or \nif it's something else, like the locking you mentioned.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n", "msg_date": "Wed, 29 Oct 2008 09:26:32 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Wed, Oct 29, 2008 at 6:26 AM, Greg Smith <[email protected]> wrote:\n> The CentOS 4.7 kernel will happily buffer about 1.6GB of writes with that\n> much RAM, and the whole thing can get slammed onto disk during the final\n> fsync portion of the checkpoint. What you should do first is confirm\n> whether or not the slow commits line up with the end of the checkpoint,\n> which is easy to see if you turn on log_checkpoints. 
That gives you timings\n> for the write and fsync phases of the checkpoint which can also be\n> informative.\n\nOK, log_checkpoints is turned on to see if any delays correspond to\ncheckpoint activity...\n\n>> Reading this page[2] indicates that I may want to increase my\n>> checkpoint_segments, checkpoint_timeout and bgwriter settings, but\n>> looking at pg_stat_bgwriter seems to indicate that my current settings\n>> are probably OK?\n>>\n>> # select * from pg_stat_bgwriter;\n>> checkpoints_timed | checkpoints_req | buffers_checkpoint |\n>> 3834 | 105 | 3,091,905 |\n>> buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n>> 25876 | 110 | 2,247,576 | 2,889,873\n>\n> I reformatted the above to show what's happening a bit better.\n\nSorry, gmail killed the formatting.\n\n> Most of your\n> checkpoints are the timed ones, which are unlikely to cause interference\n> from a slow commit period (the writes are spread out over 5 minutes in those\n> cases). It's quite possible the slow periods are coming only from the\n> occasional requested checkpoints, which normally show up because\n> checkpoint_segments is too low and you chew through segments too fast. If\n> you problems line up with checkpoint time, you would likely gain some\n> benefit from increasing checkpoint_segments to spread out the checkpoint\n> writes further; the 10 you're using now is on the low side for your\n> hardware.\n\nOK, I've also bumped up checkpoint_segments to 20 and\ncheckpoint_completion_target to 0.7 in an effort to reduce the effect\nof checkpoints.\n\n> If the problems don't go away after that, you may be suffering from\n> excessive Linux kernel buffering instead. I've got a blog entry showing how\n> I tracked down a similar long pause on a Linux server at\n> http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html you\n> may find helpful for determining if your issue is this one (which is pretty\n> common on RHEL systems having relatively large amounts of RAM) or if it's\n> something else, like the locking you mentioned.\n\nAh, interesting. I've also turned down the dirty_ratio and\ndirty_background_ratio as suggested, but I don't think this would be\naffecting things here. The rate of IO on this server is very low\ncompared to what it's capable of.\n\nThanks for the suggestions, I'll report back with results.\n\n-Dave\n", "msg_date": "Wed, 29 Oct 2008 15:30:19 -0700", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "(Resending this, the first one got bounced by mail.postgresql.org)\n\nOn Wed, Oct 29, 2008 at 3:30 PM, David Rees <[email protected]> wrote:\n> On Wed, Oct 29, 2008 at 6:26 AM, Greg Smith <[email protected]> wrote:\n>> What you should do first is confirm\n>> whether or not the slow commits line up with the end of the checkpoint,\n>> which is easy to see if you turn on log_checkpoints. That gives you timings\n>> for the write and fsync phases of the checkpoint which can also be\n>> informative.\n>\n> OK, log_checkpoints is turned on to see if any delays correspond to\n> checkpoint activity...\n\nWell, I'm pretty sure the delays are not checkpoint related. 
None of\nthe slow commits line up at all with the end of checkpoints.\n\nThe period of high delays occur during the same period of time each\nweek, but it's not during a particularly high load period on the\nsystems.\n\nIt really seems like there must be something running in the background\nthat is not showing up on the system activity logs, like a background\nRAID scrub or something.\n\nHere are a couple representative checkpoint complete messages from the logs:\n\n2008-10-31 20:12:03 UTC: : : LOG: checkpoint complete: wrote 285\nbuffers (0.3%); 0 transaction log file(s) added, 0 removed, 0\nrecycled; write=57.933 s, sync=0.053 s, total=57.990 s\n2008-10-31 20:17:33 UTC: : : LOG: checkpoint complete: wrote 437\nbuffers (0.4%); 0 transaction log file(s) added, 0 removed, 0\nrecycled; write=87.891 s, sync=0.528 s, total=88.444 s\n2008-10-31 20:22:05 UTC: : : LOG: checkpoint complete: wrote 301\nbuffers (0.3%); 0 transaction log file(s) added, 0 removed, 1\nrecycled; write=60.774 s, sync=0.033 s, total=60.827 s\n2008-10-31 20:27:46 UTC: : : LOG: checkpoint complete: wrote 504\nbuffers (0.5%); 0 transaction log file(s) added, 0 removed, 0\nrecycled; write=101.037 s, sync=0.049 s, total=101.122 s\n\nDuring this period of time there was probably 100 different queries\nwriting/commiting data that took longer than a second (3-4 seconds\nseems to be the worst).\n\nThe RAID controller on this machine is some sort of MegaRAID\ncontroller. I'll have to see if there is some sort of scheduled scan\nrunning during this period of time.\n\nOne of the replicate nodes is an identical machine which I just\nnoticed has the same slony commit slow downs logged even though it's\nonly receiving data from slony from the primary node. There are two\nother nodes listening in on the same subscription, but these two nodes\ndon't show the same slony commit slow downs, but these machines use a\nslightly different raid controller and are about 9 months newer than\nthe other two.\n\nI'm still hoping that the checkpoint tuning has reduced commit latency\nduring busy periods, I should have more data after the weekend.\n\n-Dave\n", "msg_date": "Fri, 31 Oct 2008 17:14:28 -0700", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Fri, Oct 31, 2008 at 4:14 PM, David Rees <[email protected]> wrote:\n> Well, I'm pretty sure the delays are not checkpoint related. None of\n> the slow commits line up at all with the end of checkpoints.\n>\n> The period of high delays occur during the same period of time each\n> week, but it's not during a particularly high load period on the\n> systems.\n>\n> It really seems like there must be something running in the background\n> that is not showing up on the system activity logs, like a background\n> RAID scrub or something.\n\nOK, I finally had a chance to dig at this problem some more, and after\nfutzing around with the MegaCli tools (major PITA, btw), I was able to\nconfirm that there is a feature called \"Patrol Read\" on this LSI\nMegaraid SAS card which runs a weekly background read scan of the\ndisks looking for errors. 
It is during this time period that I get\nlots of slow commits and transactions.\n\nFWIW, the card identifies itself from lspci as this:\n\nLSI Logic / Symbios Logic MegaRAID SAS\nSubsystem: Intel Corporation SROMBSAS18E RAID Controller\n\nI also found that my write cache was set to WriteThrough instead of\nWriteBack, defeating the purpose of having a BBU and that my secondary\nserver apparently doesn't have a BBU on it. :-(\n\nAnyway, has anyone done any benchmarking of MegaRAID SAS controllers?\nI am configuring my arrays to use these settings:\n\nRead Policy: Normal (Normal, Read ahead & Adaptive read head)\nWrite Policy: Writeback (Writeback, Writethrough)\nDisable Writeback if bad BBU\nIO Policy: Direct (Direct, Cached)\nDisk Cache: Enable (Enable, Disable, Unchanged)\n\nThe only setting I'm really concerned about is the Disk Cache setting\n- is it safe to assume that the controller will do the right thing\nwith regards to flushing the disk cache when appropriate to avoid data\nloss? LSI RAID cards seem to be pretty well respected, so I'd have to\nguess yes.\n\nThanks\n\nDave\n", "msg_date": "Wed, 5 Nov 2008 19:25:31 -0800", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "> I also found that my write cache was set to WriteThrough instead of\n> WriteBack, defeating the purpose of having a BBU and that my secondary\n> server apparently doesn't have a BBU on it. :-(\n\nNote also that several RAID controllers will periodically drop the\nwrite-back mode during battery capacity tests. If you care about\nconsistently/deterministically having full performance (with\nwhite-backed battery protected caching), you probably want to confirm\nyour controller behavior here.\n\n(I've seen this on at least LSI based controllers in Dell 2950:s, and\nalso on some 3wares.)\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Thu, 6 Nov 2008 11:21:47 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Thu, Nov 6, 2008 at 2:21 AM, Peter Schuller\n<[email protected]> wrote:\n>> I also found that my write cache was set to WriteThrough instead of\n>> WriteBack, defeating the purpose of having a BBU and that my secondary\n>> server apparently doesn't have a BBU on it. :-(\n>\n> Note also that several RAID controllers will periodically drop the\n> write-back mode during battery capacity tests. 
If you care about\n> consistently/deterministically having full performance (with\n> white-backed battery protected caching), you probably want to confirm\n> your controller behavior here.\n>\n> (I've seen this on at least LSI based controllers in Dell 2950:s, and\n> also on some 3wares.)\n\nI can confirm that this is the case by reviewing the logs stored on\nthe MegaRAID controller that have a BBU and had WriteBack configured.\nThe controller also lets you know (using the MegaCli utility) what\nsetting is configured, and what setting is in effect.\n\nIn the case of the machines without a BBU on them, they are configured\nto be in WriteBack, but are actually running in WriteThrough.\n\n-Dave\n", "msg_date": "Thu, 6 Nov 2008 07:47:27 -0800", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Thu, Nov 6, 2008 at 8:47 AM, David Rees <[email protected]> wrote:\n>\n> In the case of the machines without a BBU on them, they are configured\n> to be in WriteBack, but are actually running in WriteThrough.\n\nI'm pretty sure the LSIs will refuse to actually run in writeback without a BBU.\n", "msg_date": "Thu, 6 Nov 2008 09:07:06 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional Slow Commit" }, { "msg_contents": "On Thu, Nov 6, 2008 at 8:07 AM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Nov 6, 2008 at 8:47 AM, David Rees <[email protected]> wrote:\n>>\n>> In the case of the machines without a BBU on them, they are configured\n>> to be in WriteBack, but are actually running in WriteThrough.\n>\n> I'm pretty sure the LSIs will refuse to actually run in writeback without a BBU.\n\nIt's configurable (at least on the MegaRAID cards I'm using). There's\nan option you can turn on that lets you run in writeback when the BBU\nis offline.\n\n-Dave\n", "msg_date": "Thu, 6 Nov 2008 10:46:56 -0800", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional Slow Commit" } ]
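To pull the thread above together: the PostgreSQL-side changes the poster ends up with are checkpoint spreading plus checkpoint logging, and the remaining stalls turn out to come from the controller's weekly patrol read rather than from the database. A condensed view of just the database side, using the values reported in the thread (the poster's values, not general recommendations):

    -- postgresql.conf (picked up on reload):
    --   checkpoint_segments = 20
    --   checkpoint_completion_target = 0.7
    --   log_checkpoints = on

    -- If checkpoints_req keeps growing faster than checkpoints_timed,
    -- checkpoint_segments is still too low for the write rate:
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
      FROM pg_stat_bgwriter;

Correlating slow COMMIT timestamps in the statement log with the "checkpoint complete" lines that log_checkpoints emits is what let the poster rule out checkpoints and look at the RAID controller instead.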
[ { "msg_contents": "Hi there,\n\nI configured OpenSolaris on our OpenSolaris Machine. Specs:\n\n2x Quad 2.6 Ghz Xeon\n64 GB of memory\n16x 15k5 SAS\n\nThe filesystem is configured using ZFS, and I think I have found a \nconfiguration that performs fairly well.\n\nI installed the standard PostgreSQL that came with the OpenSolaris disk \n(8.3), and later added support for PostGIS. All fime.\nI also tried to tune postgresql.conf to maximize performance and also \nmemory usage.\n\nSince PostgreSQL is the only thing running on this machine, we want it \nto take full advantage of the hardware. For the ZFS cache, we have 8 GB \nreserved. The rest can be used by postgres.\n\nThe problem is getting it to use that much. At the moment, it only uses \nalmost 9 GB, so by far not enough. The problem is getting it to use \nmore... I hope you can help me with working config.\n\nHere are the parameters I set in the config file:\n\nshared_buffers = 8192MB\nwork_mem = 128MB\nmaintenance_work_mem = 2048MB\nmax_fsm_pages = 204800\nmax_fsm_relations = 2000\n\nDatabase is about 250 GB in size, so we really need to have as much data \nas possible in memory.\n\nI hope you can help us tweak a few parameters to make sure all memory \nwill be used.\n\n\n\n", "msg_date": "Thu, 30 Oct 2008 16:15:40 +0100", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Configuring for maximum memory usage" }, { "msg_contents": "Hi,\nyou could set effective_cache_size to a high value (free memory on your \nserver that is used for caching).\nChristiaan Willemsen wrote:\n> Hi there,\n>\n> I configured OpenSolaris on our OpenSolaris Machine. Specs:\n>\n> 2x Quad 2.6 Ghz Xeon\n> 64 GB of memory\n> 16x 15k5 SAS\n>\n> The filesystem is configured using ZFS, and I think I have found a \n> configuration that performs fairly well.\n>\n> I installed the standard PostgreSQL that came with the OpenSolaris \n> disk (8.3), and later added support for PostGIS. All fime.\n> I also tried to tune postgresql.conf to maximize performance and also \n> memory usage.\n>\n> Since PostgreSQL is the only thing running on this machine, we want it \n> to take full advantage of the hardware. For the ZFS cache, we have 8 \n> GB reserved. The rest can be used by postgres.\n>\n> The problem is getting it to use that much. At the moment, it only \n> uses almost 9 GB, so by far not enough. The problem is getting it to \n> use more... I hope you can help me with working config.\n>\n> Here are the parameters I set in the config file:\n>\n> shared_buffers = 8192MB\n> work_mem = 128MB\n> maintenance_work_mem = 2048MB\n> max_fsm_pages = 204800\n> max_fsm_relations = 2000\n>\n> Database is about 250 GB in size, so we really need to have as much \n> data as possible in memory.\n>\n> I hope you can help us tweak a few parameters to make sure all memory \n> will be used.\n>\n>\n>\n>\n\n", "msg_date": "Thu, 30 Oct 2008 16:42:55 +0100", "msg_from": "Ulrich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "Christiaan Willemsen wrote:\n> Hi there,\n\n> The problem is getting it to use that much. At the moment, it only uses \n> almost 9 GB, so by far not enough. The problem is getting it to use \n> more... I hope you can help me with working config.\n\nPostgreSQL is only going to use what it needs. 
It relies on the OS for \nmuch of the caching etc...\n\n> \n> Here are the parameters I set in the config file:\n> \n> shared_buffers = 8192MB\n\nI wouldn't take this any higher.\n\n> work_mem = 128MB\n\nThis is quite high but it might be o.k. depending on what you are doing.\n\n> maintenance_work_mem = 2048MB\n\nThis is only used during maintenance so you won't see this much.\n\n> max_fsm_pages = 204800\n> max_fsm_relations = 2000\n\nThis uses very little memory.\n\n> \n> Database is about 250 GB in size, so we really need to have as much data \n> as possible in memory.\n> \n> I hope you can help us tweak a few parameters to make sure all memory \n> will be used.\n\nYou are missing effective_cache_size. Try setting that to 32G.\n\nYou also didn't mention checkpoint_segments (which isn't memory but \nstill important) and default_statistics_target (which isn't memory but \nstill important).\n\nJoshua D. Drake\n\n\n> \n> \n> \n> \n\n", "msg_date": "Thu, 30 Oct 2008 08:45:12 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "\nJoshua D. Drake wrote:\n>\n> PostgreSQL is only going to use what it needs. It relies on the OS for \n> much of the caching etc...\n>\nSo that would actually mean that I could raise the setting of the ARC \ncache to far more than 8 GB? As I said, our database is 250 GB, So I \nwould expect that postgres needs more than it is using right now... \nSeveral tables have over 500 million records (obviously partitioned). \nAt the moment we are doing queries over large datasets, So I would \nassume that postgress would need a bit more memory than this..\n>\n> You are missing effective_cache_size. Try setting that to 32G.\nThat one was set to 24 GB. But this setting only tells posgres how much \ncaching it can expect from the OS? This is not actually memory that it \nwill allocate, is it?\n>\n> You also didn't mention checkpoint_segments (which isn't memory but \n> still important) and default_statistics_target (which isn't memory but \n> still important).\n>\nis at the moment set to:\n\ncheckpoint_segments = 40\ndefault_statistics_target is set to default (I think that is 10)\n\nThanks already,\n\nChristiaan\n", "msg_date": "Thu, 30 Oct 2008 16:58:01 +0100", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "On Thu, 2008-10-30 at 16:58 +0100, Christiaan Willemsen wrote:\n> Joshua D. Drake wrote:\n> >\n> > PostgreSQL is only going to use what it needs. It relies on the OS for \n> > much of the caching etc...\n> >\n> So that would actually mean that I could raise the setting of the ARC \n> cache to far more than 8 GB? As I said, our database is 250 GB, So I \n> would expect that postgres needs more than it is using right now... \n> Several tables have over 500 million records (obviously partitioned). \n> At the moment we are doing queries over large datasets, So I would \n> assume that postgress would need a bit more memory than this..\n\nWell I actually can't answer this definitely. My knowledge of Solaris is\nslimmer than my knowledge of other operating systems. However it appears\nfrom a brief google that ARC cache is some predefined file level cache\nthat you can set with Solaris? 
If so, then you want that to be high\nenough to keep your most active relations hot in cache.\n\nRemember that PostgreSQL doesn't cache anything on its own so if you do\nwant to hit disk it has to be in file cache.\n\n> >\n> > You are missing effective_cache_size. Try setting that to 32G.\n> That one was set to 24 GB. But this setting only tells posgres how much \n> caching it can expect from the OS? This is not actually memory that it \n> will allocate, is it?\n\nThat is correct it is not an actual allocation but it does vastly effect\nyour query plans. PostgreSQL uses this parameter to help determine if a\nsomeone is likely to be cached (see comment about file cache above).\n\n> >\n> > You also didn't mention checkpoint_segments (which isn't memory but \n> > still important) and default_statistics_target (which isn't memory but \n> > still important).\n> >\n> is at the moment set to:\n> \n> checkpoint_segments = 40\n> default_statistics_target is set to default (I think that is 10)\n> \n\n10 is likely way too low. Try 150 and make sure you analyze after.\n\nAs I recall some other databases allow you to say, \"you have this much\nmemory, use it\". PostgreSQL doesn't do that. You just give it pointers\nand it will use as much as it needs within the limits of the OS. The key\nword here is needs.\n\nThere is obviously some variance to that (like work_mem).\n\nJoshua D. Drake\n\n\n\n-- \n\n", "msg_date": "Thu, 30 Oct 2008 09:05:52 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "You must either increase the memory that ZFS uses, or increase Postgresql\nshard_mem and work_mem to get the aggregate of the two to use more RAM.\n\nI believe, that you have not told ZFS to reserve 8GB, but rather told it to\nlimit itself to 8GB.\n\nSome comments below:\n\nOn Thu, Oct 30, 2008 at 8:15 AM, Christiaan Willemsen <\[email protected]> wrote:\n\n> Hi there,\n>\n> I configured OpenSolaris on our OpenSolaris Machine. Specs:\n>\n> 2x Quad 2.6 Ghz Xeon\n> 64 GB of memory\n> 16x 15k5 SAS\n>\nIf you do much writing, and even moreso with ZFS, it is critical to put the\nWAL log on a different ZFS volume (and perhaps disks) than the data and\nindexes.\n\n\n>\n>\n> The filesystem is configured using ZFS, and I think I have found a\n> configuration that performs fairly well.\n>\n> I installed the standard PostgreSQL that came with the OpenSolaris disk\n> (8.3), and later added support for PostGIS. All fime.\n> I also tried to tune postgresql.conf to maximize performance and also\n> memory usage.\n>\n> Since PostgreSQL is the only thing running on this machine, we want it to\n> take full advantage of the hardware. For the ZFS cache, we have 8 GB\n> reserved. The rest can be used by postgres.\n>\n\nWhat setting reserves (but does not limit) ZFS to a memory size? I am not\nfamiliar with one that behaves that way, but I could be wrong. Try setting\nthis to 48GB (leaving 16 for the db and misc).\n\n\n>\n> The problem is getting it to use that much. At the moment, it only uses\n> almost 9 GB, so by far not enough. The problem is getting it to use more...\n> I hope you can help me with working config.\n>\n\nAre you counting both the memory used by postgres and the memory used by the\nZFS ARC cache? It is the combination you are interested in, and performance\nwill be better if it is biased towards one being a good chunk larger than\nthe other. 
In my experience, if you are doing more writes, a larger file\nsystem cache is better, if you are doing reads, a larger postgres cache is\nbetter (the overhead of calling read() in 8k chunks to the os, even if it is\ncached, causes CPU use to increase).\n\n\n>\n> Here are the parameters I set in the config file:\n>\n> shared_buffers = 8192MB\n\nYou probably want shared_buffers + the ZFS ARC cache (\"advanced\" file system\ncache for those unfamiliar with ZFS) to be about 56GB, unless you have a lot\nof connections and heavily use temp tables or work_mem. In that case make\nthe total less.\nI recommend trying:\nshared_buffers = 48GB , ZFS limited to 8GB and\nshared_buffers = 8GB, ZFS limited to 48GB\n\n>\n> work_mem = 128MB\n> maintenance_work_mem = 2048MB\n> max_fsm_pages = 204800\n> max_fsm_relations = 2000\n>\n\nIf you do very large aggregates, you may need even 1GB on work_mem.\nHowever, a setting that high would require very careful tuning and reduction\nof space used by shared_buffers and the ZFS ARC. Its dangerous since each\nconnection with a large aggregate or sort may consume a lot of memory.\n\n\n>\n> Database is about 250 GB in size, so we really need to have as much data as\n> possible in memory.\n>\n> I hope you can help us tweak a few parameters to make sure all memory will\n> be used.\n>\n\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYou must either increase the memory that ZFS uses, or increase Postgresql shard_mem and work_mem to get the aggregate of the two to use more RAM.I believe, that you have not told ZFS to reserve 8GB, but rather told it to limit itself to 8GB.  \nSome comments below:On Thu, Oct 30, 2008 at 8:15 AM, Christiaan Willemsen <[email protected]> wrote:\nHi there,\n\nI configured OpenSolaris on our OpenSolaris Machine. Specs:\n\n2x Quad 2.6 Ghz Xeon\n64 GB of memory\n16x 15k5 SASIf you do much writing, and even moreso with ZFS, it is critical to put the WAL log on a different ZFS volume (and perhaps disks) than the data and indexes. \n\n\nThe filesystem is configured using ZFS, and I think I have found a configuration that performs fairly well.\n\nI installed the standard PostgreSQL that came with the OpenSolaris disk (8.3), and later added support for PostGIS. All fime.\nI also tried to tune postgresql.conf to maximize performance and also memory usage.\n\nSince PostgreSQL is the only thing running on this machine, we want it to take full advantage of the hardware. For the ZFS cache, we have 8 GB reserved. The rest can be used by postgres.\nWhat setting reserves (but does not limit) ZFS to a memory size?  I am not familiar with one that behaves that way, but I could be wrong.  Try setting this to 48GB (leaving 16 for the db and misc).\n \nThe problem is getting it to use that much. At the moment, it only uses almost 9 GB, so by far not enough. The problem is getting it to use more... I hope you can help me with working config.\nAre you counting both the memory used by postgres and the memory used by the ZFS ARC cache?  It is the combination you are interested in, and performance will be better if it is biased towards one being a good chunk larger than the other.  In my experience, if you are doing more writes, a larger file system cache is better, if you are doing reads, a larger postgres cache is better (the overhead of calling read() in 8k chunks to the os, even if it is cached, causes CPU use to increase). 
\n \nHere are the parameters I set in the config file:\n\nshared_buffers = 8192MBYou probably want shared_buffers + the ZFS ARC cache (\"advanced\" file system cache for those unfamiliar with ZFS) to be about 56GB, unless you have a lot of connections and heavily use temp tables or work_mem.  In that case make the total less.\nI recommend trying:shared_buffers = 48GB , ZFS limited to 8GB andshared_buffers = 8GB, ZFS limited to 48GB \n\nwork_mem = 128MB\nmaintenance_work_mem = 2048MB\nmax_fsm_pages = 204800\nmax_fsm_relations = 2000\nIf you do very large aggregates, you may  need  even 1GB on work_mem.  However, a setting that high would require very careful tuning and reduction of space used by shared_buffers and the ZFS ARC.  Its dangerous since each connection with a large aggregate or sort may consume a lot of memory.\n \nDatabase is about 250 GB in size, so we really need to have as much data as possible in memory.\n\nI hope you can help us tweak a few parameters to make sure all memory will be used.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 30 Oct 2008 09:18:23 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "On Thu, Oct 30, 2008 at 9:05 AM, Joshua D. Drake <[email protected]>wrote:\n\n> On Thu, 2008-10-30 at 16:58 +0100, Christiaan Willemsen wrote:\n> > Joshua D. Drake wrote:\n> > >\n> > > PostgreSQL is only going to use what it needs. It relies on the OS for\n> > > much of the caching etc...\n> > >\n> > So that would actually mean that I could raise the setting of the ARC\n> > cache to far more than 8 GB? As I said, our database is 250 GB, So I\n> > would expect that postgres needs more than it is using right now...\n> > Several tables have over 500 million records (obviously partitioned).\n> > At the moment we are doing queries over large datasets, So I would\n> > assume that postgress would need a bit more memory than this..\n>\n> Well I actually can't answer this definitely. My knowledge of Solaris is\n> slimmer than my knowledge of other operating systems. However it appears\n> from a brief google that ARC cache is some predefined file level cache\n> that you can set with Solaris? If so, then you want that to be high\n> enough to keep your most active relations hot in cache.\n>\n> Remember that PostgreSQL doesn't cache anything on its own so if you do\n> want to hit disk it has to be in file cache.\n\n\nBy my understanding, this is absolutely false. Postgres caches pages from\ntables/indexes in shared_buffers. You can make this very large if you wish.\n\n\nSolaris ZFS ARC is the filesystem cache area for ZFS, it will yield to other\napps by default, but you will\nget better performance and consistency if you limit it to not compete with\napps you know you want in memory, like a database.\n\nIn older versions, the postgres shared_buffers page cache was not very\nefficient, and the OS page caches were jsut better, so setting\nshared_buffers too large was a bad idea.\nHowever, postgres uses a reasonable eviction algorithm now that doesn't\nevict recently used items as readily as it used to, or let full table scans\nkick out heavily accessed data (8.3 +).\nThe current tradeoff is that going from postgres to the OS cache incurs CPU\noverhead for reading. 
But The OS may be better at caching more relevant\npages.\n A good OS cache, like the ZFS ARC, is much more sophisticated in the\nalgorithms used for determining what to cache and what to evict, and so it\nmay be better at limiting disk usage. But accessing it versus the postgres\npage cache in shared_buffers incurs extra CPU cost, as both caches must look\nfor, load, and potentially evict, rather than one.\n\n\n>\n>\n> > >\n> > > You are missing effective_cache_size. Try setting that to 32G.\n> > That one was set to 24 GB. But this setting only tells posgres how much\n> > caching it can expect from the OS? This is not actually memory that it\n> > will allocate, is it?\n>\n> That is correct it is not an actual allocation but it does vastly effect\n> your query plans. PostgreSQL uses this parameter to help determine if a\n> someone is likely to be cached (see comment about file cache above).\n>\n\nIt should be set to the expected size of the OS file cache (the size of the\nZFS ARC cache in this case).\nHowever, it will have very little impact on large data queries that don't\nuse indexes. It has larger impact\nfor things that may do index scans when shared_buffers is small comapred to\nfile system cache + shared_buffers.\n\n>\n> > >\n> > > You also didn't mention checkpoint_segments (which isn't memory but\n> > > still important) and default_statistics_target (which isn't memory but\n> > > still important).\n> > >\n> > is at the moment set to:\n> >\n> > checkpoint_segments = 40\n> > default_statistics_target is set to default (I think that is 10)\n> >\n>\n> 10 is likely way too low. Try 150 and make sure you analyze after.\n>\n> As I recall some other databases allow you to say, \"you have this much\n> memory, use it\". PostgreSQL doesn't do that. You just give it pointers\n> and it will use as much as it needs within the limits of the OS. The key\n> word here is needs.\n>\n> There is obviously some variance to that (like work_mem).\n>\n> Joshua D. Drake\n>\n>\n>\n> --\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Thu, Oct 30, 2008 at 9:05 AM, Joshua D. Drake <[email protected]> wrote:\nOn Thu, 2008-10-30 at 16:58 +0100, Christiaan Willemsen wrote:\n> Joshua D. Drake wrote:\n> >\n> > PostgreSQL is only going to use what it needs. It relies on the OS for\n> > much of the caching etc...\n> >\n> So that would actually mean that I could raise the setting of the ARC\n> cache to far more than 8 GB? As I said, our database is 250 GB, So I\n> would expect that postgres needs more than it is using right now...\n> Several tables have over 500 million  records (obviously partitioned).\n> At the moment we are doing queries over large datasets, So I would\n> assume that postgress would need a bit more memory than this..\n\nWell I actually can't answer this definitely. My knowledge of Solaris is\nslimmer than my knowledge of other operating systems. However it appears\nfrom a brief google that ARC cache is some predefined file level cache\nthat you can set with Solaris? If so, then you want that to be high\nenough to keep your most active relations hot in cache.\n\nRemember that PostgreSQL doesn't cache anything on its own so if you do\nwant to hit disk it has to be in file cache.By my understanding, this is absolutely false.  Postgres caches pages from tables/indexes in shared_buffers. You can make this very large if you wish.  
\nSolaris ZFS ARC is the filesystem cache area for ZFS, it will yield to other apps by default, but you willget better performance and consistency if you limit it to not compete with apps you know you want in memory, like a database.\nIn older versions, the postgres shared_buffers page cache was not very efficient, and the OS page\ncaches were jsut better, so setting shared_buffers too large was a bad\nidea.\nHowever, postgres uses a reasonable eviction algorithm now that doesn't\nevict recently used items as readily as it used to, or let full table\nscans kick out heavily accessed data (8.3 +).\nThe current tradeoff is that going from postgres to the OS cache incurs\nCPU overhead for reading.  But The OS may be better at caching more relevant pages.  A good OS cache, like the ZFS ARC, is much more sophisticated in\nthe algorithms used for determining what to cache and what to evict,\nand so it may be better at limiting disk usage.  But accessing it versus the postgres page cache in shared_buffers incurs extra CPU cost, as both caches must look for, load, and potentially evict, rather than one. \n\n\n> >\n> > You are missing effective_cache_size. Try setting that to 32G.\n> That one was set to 24 GB. But this setting only tells posgres how much\n> caching it can expect from the OS? This is not actually memory that it\n> will allocate, is it?\n\nThat is correct it is not an actual allocation but it does vastly effect\nyour query plans. PostgreSQL uses this parameter to help determine if a\nsomeone is likely to be cached (see comment about file cache above).\nIt should be set to the expected size of the OS file cache (the size of the ZFS ARC cache in this case).However, it will have very little impact on large data queries that don't use indexes.  It has larger impact\nfor things that may do index scans when shared_buffers is small comapred to file system cache + shared_buffers.\n\n> >\n> > You also didn't mention checkpoint_segments (which isn't memory but\n> > still important) and default_statistics_target (which isn't memory but\n> > still important).\n> >\n> is at the moment set to:\n>\n> checkpoint_segments = 40\n> default_statistics_target is set to default (I think that is 10)\n>\n\n10 is likely way too low. Try 150 and make sure you analyze after.\n\nAs I recall some other databases allow you to say, \"you have this much\nmemory, use it\". PostgreSQL doesn't do that. You just give it pointers\nand it will use as much as it needs within the limits of the OS. The key\nword here is needs.\n\nThere is obviously some variance to that (like work_mem).\n\nJoshua D. Drake\n\n\n\n--\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 30 Oct 2008 09:46:22 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:\n\n> \n> Remember that PostgreSQL doesn't cache anything on its own so\n> if you do\n> want to hit disk it has to be in file cache.\n> \n> By my understanding, this is absolutely false. Postgres caches pages\n> from tables/indexes in shared_buffers. You can make this very large if\n> you wish. \n\nYou can make it very large with a potentially serious performance hit.\nIt is very expensive to manage large amounts of shared buffers. 
It can\nalso nail your IO on checkpoint if you are not careful (even with\ncheckpoint smoothing). You are correct that I did not explain what I\nmeant very well because shared buffers are exactly that, shared\nbuffers. \n\nHowever that isn't the exact same thing as a \"cache\" at least as I was\ntrying to describe it. shared buffers are used to keep track of pages\n(as well as some other stuff) and their current status. That is not the\nsame as caching a relation.\n\nIt is not possible to pin a relation to memory using PostgreSQL.\nPostgreSQL relies on the operating system for that type of caching. \n\nJoshua D. Drake\n\n\n\n\n\n-- \n\n", "msg_date": "Thu, 30 Oct 2008 09:55:30 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "Hi Scott,\n\nThaks for the clear answers!\n\nScott Carey wrote:\n> You must either increase the memory that ZFS uses, or increase \n> Postgresql shard_mem and work_mem to get the aggregate of the two to \n> use more RAM.\n>\n> I believe, that you have not told ZFS to reserve 8GB, but rather told \n> it to limit itself to 8GB. \nThat is correct, but since it will use the whole 8 GB anyway, I can just \nas easily say that it will reseve that memory ;)\n>\n> Some comments below:\n>\n> On Thu, Oct 30, 2008 at 8:15 AM, Christiaan Willemsen \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi there,\n>\n> I configured OpenSolaris on our OpenSolaris Machine. Specs:\n>\n> 2x Quad 2.6 Ghz Xeon\n> 64 GB of memory\n> 16x 15k5 SAS\n>\n> If you do much writing, and even moreso with ZFS, it is critical to \n> put the WAL log on a different ZFS volume (and perhaps disks) than the \n> data and indexes.\nI already did that. I also have a separate disk pair for the ZFS intent log.\n\n> Are you counting both the memory used by postgres and the memory used \n> by the ZFS ARC cache? It is the combination you are interested in, \n> and performance will be better if it is biased towards one being a \n> good chunk larger than the other. In my experience, if you are doing \n> more writes, a larger file system cache is better, if you are doing \n> reads, a larger postgres cache is better (the overhead of calling \n> read() in 8k chunks to the os, even if it is cached, causes CPU use to \n> increase).\nNo, the figure I gave is this is without the ARC cache.\n> If you do very large aggregates, you may need even 1GB on work_mem. \n> However, a setting that high would require very careful tuning and \n> reduction of space used by shared_buffers and the ZFS ARC. Its \n> dangerous since each connection with a large aggregate or sort may \n> consume a lot of memory.\nWell, some taks may need a lot, but I guess most wil do fine with the \nsettings we used right now.\n\nSo It looks like I can tune the ARC to use more memory, and also \nincrease shared_mem to let postgres cache more tables?\n\n\n\n\n\n\nHi Scott,\n\nThaks for the clear answers!\n\nScott Carey wrote:\nYou must either increase the memory that ZFS uses, or\nincrease Postgresql shard_mem and work_mem to get the aggregate of the\ntwo to use more RAM.\n\nI believe, that you have not told ZFS to reserve 8GB, but rather told\nit to limit itself to 8GB.  
\n\nThat is correct, but since it will use the whole 8 GB anyway, I can\njust as easily say that it will reseve that memory ;)\n\nSome comments below:\n\nOn Thu, Oct 30, 2008 at 8:15 AM, Christiaan\nWillemsen <[email protected]>\nwrote:\nHi\nthere,\n\nI configured OpenSolaris on our OpenSolaris Machine. Specs:\n\n2x Quad 2.6 Ghz Xeon\n64 GB of memory\n16x 15k5 SAS\n\nIf you do much writing, and even moreso with ZFS, it is critical\nto put the WAL log on a different ZFS volume (and perhaps disks) than\nthe data and indexes.\n\n\n\nI already did that. I also have a separate disk pair for the ZFS intent\nlog.\n\n\nAre you counting both the memory used by\npostgres and the memory used by the ZFS ARC cache?  It is the\ncombination you are interested in, and performance will be better if it\nis biased towards one being a good chunk larger than the other.  In my\nexperience, if you are doing more writes, a larger file system cache is\nbetter, if you are doing reads, a larger postgres cache is better (the\noverhead of calling read() in 8k chunks to the os, even if it is\ncached, causes CPU use to increase). \n\n\nNo, the figure I gave is this is without the ARC cache. \n\nIf you do very large aggregates, you may \nneed  even 1GB on work_mem.  However, a setting that high would require\nvery careful tuning and reduction of space used by shared_buffers and\nthe ZFS ARC.  Its dangerous since each connection with a large\naggregate or sort may consume a lot of memory.\n\n\nWell, some taks may need a lot, but I guess most wil do fine with the\nsettings we used right now.\n\nSo It looks like I can tune the ARC to use more memory, and also\nincrease shared_mem to let postgres cache more tables?", "msg_date": "Thu, 30 Oct 2008 17:56:16 +0100", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "Joshua D. Drake wrote:\n\n> However that isn't the exact same thing as a \"cache\" at least as I was\n> trying to describe it. shared buffers are used to keep track of pages\n> (as well as some other stuff) and their current status. That is not the\n> same as caching a relation.\n\nUm, having a page in shared buffers means exactly that the page is\ncached (a.k.a. it won't have to be read from the lower level next time).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 30 Oct 2008 14:00:23 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "On Thu, 2008-10-30 at 14:00 -0300, Alvaro Herrera wrote:\n> Joshua D. Drake wrote:\n> \n> > However that isn't the exact same thing as a \"cache\" at least as I was\n> > trying to describe it. shared buffers are used to keep track of pages\n> > (as well as some other stuff) and their current status. That is not the\n> > same as caching a relation.\n> \n> Um, having a page in shared buffers means exactly that the page is\n> cached (a.k.a. it won't have to be read from the lower level next time).\n> \n\nYes but that wasn't my point. Sorry if I wasn't being clear enough.\n\nJoshua D. Drake\n\n-- \n\n", "msg_date": "Thu, 30 Oct 2008 10:26:26 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": ">\n> If you do very large aggregates, you may need even 1GB on work_mem.\n> However, a setting that high would require very careful tuning and reduction\n> of space used by shared_buffers and the ZFS ARC. Its dangerous since each\n> connection with a large aggregate or sort may consume a lot of memory.\n>\n> Well, some taks may need a lot, but I guess most wil do fine with the\n> settings we used right now.\n>\n> So It looks like I can tune the ARC to use more memory, and also increase\n> shared_mem to let postgres cache more tables?\n>\n\nI would recommend tuning one upwards, and leaving the other smaller. The\nworst case is when they are both similarly sized, it leaves the most\nopportunity for duplication of data, and produces the worst feedback on\ncheckpoint writes.\nYou may want to compare the performance with:\n larger ARC and smaller shared_buffers\nvs\n smaller ARC and larger shared_buffers\n\nThe results will be rather dependent on how you use postgres, the types of\nqueries you do, and for writes, how you tune checkpoints and such.\n\n\n\nIf you do very large aggregates, you may \nneed  even 1GB on work_mem.  However, a setting that high would require\nvery careful tuning and reduction of space used by shared_buffers and\nthe ZFS ARC.  Its dangerous since each connection with a large\naggregate or sort may consume a lot of memory.\n\n\nWell, some taks may need a lot, but I guess most wil do fine with the\nsettings we used right now.\n\nSo It looks like I can tune the ARC to use more memory, and also\nincrease shared_mem to let postgres cache more tables?\n\nI would recommend tuning one upwards, and leaving the other smaller.  The worst case is when they are both similarly sized, it leaves the most opportunity for duplication of data, and produces the worst feedback on checkpoint writes.\nYou may want to compare the performance with: larger ARC and smaller shared_buffers vs smaller ARC and larger shared_buffersThe results will be rather dependent on how you use postgres, the types of queries you do, and for writes, how you tune checkpoints and such.", "msg_date": "Thu, 30 Oct 2008 10:37:23 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "On Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake <[email protected]>wrote:\n\n> On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:\n>\n> >\n> > Remember that PostgreSQL doesn't cache anything on its own so\n> > if you do\n> > want to hit disk it has to be in file cache.\n> >\n> > By my understanding, this is absolutely false. Postgres caches pages\n> > from tables/indexes in shared_buffers. You can make this very large if\n> > you wish.\n>\n> You can make it very large with a potentially serious performance hit.\n> It is very expensive to manage large amounts of shared buffers. It can\n> also nail your IO on checkpoint if you are not careful (even with\n> checkpoint smoothing). 
You are correct that I did not explain what I\n> meant very well because shared buffers are exactly that, shared\n> buffers.\n\n\nYou can slam your I/O by havnig too large of either OS file cache or\nshared_buffers, and you have to tune both.\nIn the case of large shared_buffers you have to tune postgres and especially\nthe background writer and checkpoints.\nIn the case of a large OS cache, you have to tune parameters to limit the\nammount of dirty pages there and force writes out smoothly.\nBoth layers attempt to delay writes for their own, often similar reasons,\nand suffer when a large sync comes along with lots of dirty data.\n\nRecent ZFS changes have been made to limit this, (\nhttp://blogs.sun.com/roch/entry/the_new_zfs_write_throttle)\nin earlier ZFS versions, this is what usually killed databases -- ZFS in\nsome situations would delay writes too long (even if \"long\" is 5 seconds)\nand get in trouble. This still has to be tuned well, combined with good\ncheckpoint tuning in Postgres as you mention. For Linux, there are similar\nissues that have to be tuned on many kernels, or up to 40% of RAM can fill\nwith dirty pages not written to disk.\n\nLetting the OS do it doesn't get rid of the problem, both levels of cache\nshare very similar issues with large sizes and dirty pages followed by a\nsync.\n\nThe buffer cache in shared_buffers is a lot more efficient for large\nscanning queries -- A select count(*) test will be CPU bound if it comes\nfrom shared_buffers or the OS page cache, and in the former case I have seen\nit execute up to 50% faster than the latter, by avoiding calling out to the\nOS to get pages, purely as a result of less CPU used.\n\n\n\n>\n>\n> However that isn't the exact same thing as a \"cache\" at least as I was\n> trying to describe it. shared buffers are used to keep track of pages\n> (as well as some other stuff) and their current status. That is not the\n> same as caching a relation.\n>\n> It is not possible to pin a relation to memory using PostgreSQL.\n> PostgreSQL relies on the operating system for that type of caching.\n>\n\nThe OS can't pin a relation either, from its point of view its all just a\nbunch of disk data blocks, not relations -- so it is all roughly\nequivalent. The OS can do a bit better job at data prefetch on sequential\nscans or other predictable seek sequences (ARC stands for Adaptive\nReplacement Cache) than postgres currently does (no intelligent prefetch in\npostgres AFAIK).\n\nSo I apologize if I made it sound like Postgres cached the actual relation,\nits just pages -- but it is basically the same thing as the OS cache, but\nkept in process closer to the code that needs it. Its a cache that prevents\ndisk reads.\n\nMy suggestion for the OP is to try it both ways, and see what is better for\nhis workload / OS / Hardware combination.\n\n\n>\n> Joshua D. Drake\n>\n>\n>\n>\n>\n> --\n>\n>\n\nOn Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake <[email protected]> wrote:\nOn Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:\n\n>\n>         Remember that PostgreSQL doesn't cache anything on its own so\n>         if you do\n>         want to hit disk it has to be in file cache.\n>\n> By my understanding, this is absolutely false.  Postgres caches pages\n> from tables/indexes in shared_buffers. You can make this very large if\n> you wish.\n\nYou can make it very large with a potentially serious performance hit.\nIt is very expensive to manage large amounts of shared buffers. 
It can\nalso nail your IO on checkpoint if you are not careful (even with\ncheckpoint smoothing). You are correct that I did not explain what I\nmeant very well because shared buffers are exactly that, shared\nbuffers.You can slam your I/O by havnig too large of either OS file cache or shared_buffers, and you have to tune both. In the case of large shared_buffers you have to tune postgres and especially the background writer and checkpoints.\nIn the case of a large OS cache, you have to tune parameters to limit the ammount of dirty pages there and force writes out smoothly.  Both layers attempt to delay writes for their own, often similar reasons, and suffer when a large sync comes along with lots of dirty data.  \nRecent ZFS changes have been made to limit this, (http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle)in earlier ZFS versions, this is what usually killed databases -- ZFS in some situations would delay writes too long (even if \"long\" is 5 seconds) and get in trouble.  This still has to be tuned well, combined with good checkpoint tuning in Postgres as you mention. For Linux, there are similar issues that have to be tuned on many kernels, or up to 40% of RAM can fill with dirty pages not written to disk.\nLetting the OS do it doesn't get rid of the problem, both levels of cache share very similar issues with large sizes and dirty pages followed by a sync. The buffer cache in shared_buffers is a lot more efficient for large scanning queries -- A select count(*) test will be CPU bound if it comes from shared_buffers or the OS page cache, and in the former case I have seen it execute up to 50% faster than the latter, by avoiding calling out to the OS to get pages, purely as a result of less CPU used.\n  \n\nHowever that isn't the exact same thing as a \"cache\" at least as I was\ntrying to describe it. shared buffers are used to keep track of pages\n(as well as some other stuff) and their current status. That is not the\nsame as caching a relation.\n\nIt is not possible to pin a relation to memory using PostgreSQL.\nPostgreSQL relies on the operating system for that type of caching.\nThe OS can't pin a relation either, from its point of view its all just a bunch of disk data blocks, not relations -- so it is all roughly equivalent.  The OS can do a bit better job at data prefetch on sequential scans or other predictable seek sequences (ARC stands for Adaptive Replacement Cache) than postgres currently does (no intelligent prefetch in postgres AFAIK).\nSo I apologize if I made it sound like Postgres cached the actual relation, its just pages -- but it is basically the same thing as the OS cache, but kept in process closer to the code that needs it.  Its a cache that prevents disk reads.\nMy suggestion for the OP is to try it both ways, and see what is better for his workload / OS / Hardware combination. \n\nJoshua D. Drake\n\n\n\n\n\n--", "msg_date": "Thu, 30 Oct 2008 11:27:09 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "Thanks guys,\n\nLots of info here that I didn't know about! Since I have one of the \nlatest Opensolaris builds, I guess the write throttle feature is \nalready in there. Sadly, the blog doesn't say what build has it \nincluded.\n\nFor writes, I do everything synchronized because we really need a \nconsistent database on disk. 
We can see that during large inserts, the \nintend log is used a lot.\n\nWhat I'm going to te testing is a smaller shared_buffers value, and a \nlarge ARC cache, and exactly the other way around.\n\nAnother question: since we have huge tables with hundreds of millions \nor rows, we partitioned the database (it actually is creating the \npartitions dynamically now on inserts with very good performance :D ), \nbut the question is: is the size of the partions important for the \nmemory parameters in config file? How can we determine the optimal \nsize of the partition. obviously, when doing selects, you want those \npreferably only needing a single partition for speed. At the moment, \nthat is for the majority of situations the case. But there might be \nsome other things to think about...\n\nKind regards,\n\nChristiaan\n\n\nOn Oct 30, 2008, at 7:27 PM, Scott Carey wrote:\n\n>\n>\n> On Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake \n> <[email protected]> wrote:\n> On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:\n>\n> >\n> > Remember that PostgreSQL doesn't cache anything on its own \n> so\n> > if you do\n> > want to hit disk it has to be in file cache.\n> >\n> > By my understanding, this is absolutely false. Postgres caches \n> pages\n> > from tables/indexes in shared_buffers. You can make this very \n> large if\n> > you wish.\n>\n> You can make it very large with a potentially serious performance hit.\n> It is very expensive to manage large amounts of shared buffers. It can\n> also nail your IO on checkpoint if you are not careful (even with\n> checkpoint smoothing). You are correct that I did not explain what I\n> meant very well because shared buffers are exactly that, shared\n> buffers.\n>\n> You can slam your I/O by havnig too large of either OS file cache or \n> shared_buffers, and you have to tune both.\n> In the case of large shared_buffers you have to tune postgres and \n> especially the background writer and checkpoints.\n> In the case of a large OS cache, you have to tune parameters to \n> limit the ammount of dirty pages there and force writes out smoothly.\n> Both layers attempt to delay writes for their own, often similar \n> reasons, and suffer when a large sync comes along with lots of dirty \n> data.\n>\n> Recent ZFS changes have been made to limit this, (http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle \n> )\n> in earlier ZFS versions, this is what usually killed databases -- \n> ZFS in some situations would delay writes too long (even if \"long\" \n> is 5 seconds) and get in trouble. This still has to be tuned well, \n> combined with good checkpoint tuning in Postgres as you mention. For \n> Linux, there are similar issues that have to be tuned on many \n> kernels, or up to 40% of RAM can fill with dirty pages not written \n> to disk.\n>\n> Letting the OS do it doesn't get rid of the problem, both levels of \n> cache share very similar issues with large sizes and dirty pages \n> followed by a sync.\n>\n> The buffer cache in shared_buffers is a lot more efficient for large \n> scanning queries -- A select count(*) test will be CPU bound if it \n> comes from shared_buffers or the OS page cache, and in the former \n> case I have seen it execute up to 50% faster than the latter, by \n> avoiding calling out to the OS to get pages, purely as a result of \n> less CPU used.\n>\n>\n>\n>\n> However that isn't the exact same thing as a \"cache\" at least as I was\n> trying to describe it. 
shared buffers are used to keep track of pages\n> (as well as some other stuff) and their current status. That is not \n> the\n> same as caching a relation.\n>\n> It is not possible to pin a relation to memory using PostgreSQL.\n> PostgreSQL relies on the operating system for that type of caching.\n>\n> The OS can't pin a relation either, from its point of view its all \n> just a bunch of disk data blocks, not relations -- so it is all \n> roughly equivalent. The OS can do a bit better job at data prefetch \n> on sequential scans or other predictable seek sequences (ARC stands \n> for Adaptive Replacement Cache) than postgres currently does (no \n> intelligent prefetch in postgres AFAIK).\n>\n> So I apologize if I made it sound like Postgres cached the actual \n> relation, its just pages -- but it is basically the same thing as \n> the OS cache, but kept in process closer to the code that needs it. \n> Its a cache that prevents disk reads.\n>\n> My suggestion for the OP is to try it both ways, and see what is \n> better for his workload / OS / Hardware combination.\n>\n>\n> Joshua D. Drake\n>\n>\n>\n>\n>\n> --\n>\n>\n\n\nThanks guys,Lots of info here that I didn't know about! Since I have one of the latest Opensolaris builds, I guess the write throttle feature is already in there. Sadly, the blog doesn't say what build has it included.For writes, I do everything synchronized because we really need a consistent database on disk. We can see that during large inserts, the intend log is used a lot. What  I'm going to te testing is a smaller shared_buffers value, and a large ARC cache, and exactly the other way around.Another question: since we have huge tables with hundreds of millions or rows, we partitioned the database (it actually is creating the partitions dynamically now on inserts with very good performance :D ), but the question is: is the size of the partions important for the memory parameters in config file? How can we determine the optimal size of the partition. obviously, when doing selects, you want those preferably only needing a single partition for speed. At the moment, that is for the majority of situations the case. But there might be some other things to think about...Kind regards,Christiaan On Oct 30, 2008, at 7:27 PM, Scott Carey wrote:On Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake <[email protected]> wrote: On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote: > >         Remember that PostgreSQL doesn't cache anything on its own so >         if you do >         want to hit disk it has to be in file cache. > > By my understanding, this is absolutely false.  Postgres caches pages > from tables/indexes in shared_buffers. You can make this very large if > you wish. You can make it very large with a potentially serious performance hit. It is very expensive to manage large amounts of shared buffers. It can also nail your IO on checkpoint if you are not careful (even with checkpoint smoothing). You are correct that I did not explain what I meant very well because shared buffers are exactly that, shared buffers.You can slam your I/O by havnig too large of either OS file cache or shared_buffers, and you have to tune both. In the case of large shared_buffers you have to tune postgres and especially the background writer and checkpoints. In the case of a large OS cache, you have to tune parameters to limit the ammount of dirty pages there and force writes out smoothly.  
Both layers attempt to delay writes for their own, often similar reasons, and suffer when a large sync comes along with lots of dirty data.  Recent ZFS changes have been made to limit this, (http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle)in earlier ZFS versions, this is what usually killed databases -- ZFS in some situations would delay writes too long (even if \"long\" is 5 seconds) and get in trouble.  This still has to be tuned well, combined with good checkpoint tuning in Postgres as you mention. For Linux, there are similar issues that have to be tuned on many kernels, or up to 40% of RAM can fill with dirty pages not written to disk. Letting the OS do it doesn't get rid of the problem, both levels of cache share very similar issues with large sizes and dirty pages followed by a sync. The buffer cache in shared_buffers is a lot more efficient for large scanning queries -- A select count(*) test will be CPU bound if it comes from shared_buffers or the OS page cache, and in the former case I have seen it execute up to 50% faster than the latter, by avoiding calling out to the OS to get pages, purely as a result of less CPU used.    However that isn't the exact same thing as a \"cache\" at least as I was trying to describe it. shared buffers are used to keep track of pages (as well as some other stuff) and their current status. That is not the same as caching a relation. It is not possible to pin a relation to memory using PostgreSQL. PostgreSQL relies on the operating system for that type of caching. The OS can't pin a relation either, from its point of view its all just a bunch of disk data blocks, not relations -- so it is all roughly equivalent.  The OS can do a bit better job at data prefetch on sequential scans or other predictable seek sequences (ARC stands for Adaptive Replacement Cache) than postgres currently does (no intelligent prefetch in postgres AFAIK). So I apologize if I made it sound like Postgres cached the actual relation, its just pages -- but it is basically the same thing as the OS cache, but kept in process closer to the code that needs it.  Its a cache that prevents disk reads. My suggestion for the OP is to try it both ways, and see what is better for his workload / OS / Hardware combination.  Joshua D. Drake --", "msg_date": "Thu, 30 Oct 2008 22:06:25 +0100", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuring for maximum memory usage" }, { "msg_contents": "On Thu, Oct 30, 2008 at 2:06 PM, Christiaan Willemsen <\[email protected]> wrote:\n\n> Thanks guys,\n> Lots of info here that I didn't know about! Since I have one of the latest\n> Opensolaris builds, I guess the write throttle feature is already in there.\n> Sadly, the blog doesn't say what build has it included.\n>\n\nIf I recall correctly, it went in at about build 89 or so (I think the\nbottom of the link I provided has a comment to that effect). So its in\nthere now, but not in OpenSolaris 2008.05.\n\n\n>\n> For writes, I do everything synchronized because we really need a\n> consistent database on disk. We can see that during large inserts, the\n> intend log is used a lot.\n>\n\nThe DB synchronizes the WAL log automatically, and the table and index data\nare written non-synchronously until the commit at the end of a checkpoint,\nin which case sync is called on them. This keeps things consistent on\ndisk. With ZFS, each block written is always consistent, with a checksum\nkept in the parent block. 
There are no partial page writes, ever. In\ntheory, you can disable full page writes on the WAL log if there is a\nbottleneck there since ZFS guarantees fully transactional consistent state\nof the file system, even if you have a RAID controller or hardware failure\nthat causes a partial write. But WAL log activity is probably not your\nbottleneck so turning off full page writes on the WAL log is not necessary.\n\n\n\n>\n> What I'm going to te testing is a smaller shared_buffers value, and a\n> large ARC cache, and exactly the other way around.\n>\n> Another question: since we have huge tables with hundreds of millions or\n> rows, we partitioned the database (it actually is creating the partitions\n> dynamically now on inserts with very good performance :D ), but the question\n> is: is the size of the partions important for the memory parameters in\n> config file? How can we determine the optimal size of the partition.\n> obviously, when doing selects, you want those preferably only needing a\n> single partition for speed. At the moment, that is for the majority of\n> situations the case. But there might be some other things to think about...\n>\n> Kind regards,\n>\n> Christiaan\n>\n>\n> On Oct 30, 2008, at 7:27 PM, Scott Carey wrote:\n>\n>\n>\n> On Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake <[email protected]>wrote:\n>\n>> On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:\n>>\n>> >\n>> > Remember that PostgreSQL doesn't cache anything on its own so\n>> > if you do\n>> > want to hit disk it has to be in file cache.\n>> >\n>> > By my understanding, this is absolutely false. Postgres caches pages\n>> > from tables/indexes in shared_buffers. You can make this very large if\n>> > you wish.\n>>\n>> You can make it very large with a potentially serious performance hit.\n>> It is very expensive to manage large amounts of shared buffers. It can\n>> also nail your IO on checkpoint if you are not careful (even with\n>> checkpoint smoothing). You are correct that I did not explain what I\n>> meant very well because shared buffers are exactly that, shared\n>> buffers.\n>\n>\n> You can slam your I/O by havnig too large of either OS file cache or\n> shared_buffers, and you have to tune both.\n> In the case of large shared_buffers you have to tune postgres and\n> especially the background writer and checkpoints.\n> In the case of a large OS cache, you have to tune parameters to limit the\n> ammount of dirty pages there and force writes out smoothly.\n> Both layers attempt to delay writes for their own, often similar reasons,\n> and suffer when a large sync comes along with lots of dirty data.\n>\n> Recent ZFS changes have been made to limit this, (\n> http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle)\n> in earlier ZFS versions, this is what usually killed databases -- ZFS in\n> some situations would delay writes too long (even if \"long\" is 5 seconds)\n> and get in trouble. This still has to be tuned well, combined with good\n> checkpoint tuning in Postgres as you mention. 
For Linux, there are similar\n> issues that have to be tuned on many kernels, or up to 40% of RAM can fill\n> with dirty pages not written to disk.\n>\n> Letting the OS do it doesn't get rid of the problem, both levels of cache\n> share very similar issues with large sizes and dirty pages followed by a\n> sync.\n>\n> The buffer cache in shared_buffers is a lot more efficient for large\n> scanning queries -- A select count(*) test will be CPU bound if it comes\n> from shared_buffers or the OS page cache, and in the former case I have seen\n> it execute up to 50% faster than the latter, by avoiding calling out to the\n> OS to get pages, purely as a result of less CPU used.\n>\n>\n>\n>>\n>>\n>> However that isn't the exact same thing as a \"cache\" at least as I was\n>> trying to describe it. shared buffers are used to keep track of pages\n>> (as well as some other stuff) and their current status. That is not the\n>> same as caching a relation.\n>>\n>> It is not possible to pin a relation to memory using PostgreSQL.\n>> PostgreSQL relies on the operating system for that type of caching.\n>>\n>\n> The OS can't pin a relation either, from its point of view its all just a\n> bunch of disk data blocks, not relations -- so it is all roughly\n> equivalent. The OS can do a bit better job at data prefetch on sequential\n> scans or other predictable seek sequences (ARC stands for Adaptive\n> Replacement Cache) than postgres currently does (no intelligent prefetch in\n> postgres AFAIK).\n>\n> So I apologize if I made it sound like Postgres cached the actual relation,\n> its just pages -- but it is basically the same thing as the OS cache, but\n> kept in process closer to the code that needs it. Its a cache that prevents\n> disk reads.\n>\n> My suggestion for the OP is to try it both ways, and see what is better for\n> his workload / OS / Hardware combination.\n>\n>\n>>\n>> Joshua D. Drake\n>>\n>>\n>>\n>>\n>>\n>> --\n>>\n>>\n>\n>\n\nOn Thu, Oct 30, 2008 at 2:06 PM, Christiaan Willemsen <[email protected]> wrote:\nThanks guys,Lots of info here that I didn't know about! Since I have one of the latest Opensolaris builds, I guess the write throttle feature is already in there. Sadly, the blog doesn't say what build has it included.\nIf I recall correctly, it went in at about build 89 or so (I think the bottom of the link I provided has a comment to that effect).  So its in there now, but not in OpenSolaris 2008.05. \nFor writes, I do everything synchronized because we really need a consistent database on disk. We can see that during large inserts, the intend log is used a lot. \nThe DB synchronizes the WAL log automatically, and the table and index data are written non-synchronously until the commit at the end of a checkpoint, in which case sync is called on them.  This keeps things consistent on disk.  With ZFS, each block written is always consistent, with a checksum kept in the parent block.  There are no partial page writes, ever.  In theory, you can disable full page writes on the WAL log if there is a bottleneck there since ZFS guarantees fully transactional consistent state of the file system, even if you have a RAID controller or hardware failure that causes a partial write.  
But WAL log activity is probably not your bottleneck so turning off full page writes on the WAL log is not necessary.\n What  I'm going to te testing is a smaller shared_buffers value, and a large ARC cache, and exactly the other way around.\nAnother question: since we have huge tables with hundreds of millions or rows, we partitioned the database (it actually is creating the partitions dynamically now on inserts with very good performance :D ), but the question is: is the size of the partions important for the memory parameters in config file? How can we determine the optimal size of the partition. obviously, when doing selects, you want those preferably only needing a single partition for speed. At the moment, that is for the majority of situations the case. But there might be some other things to think about...\nKind regards,Christiaan On Oct 30, 2008, at 7:27 PM, Scott Carey wrote:\nOn Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake <[email protected]> wrote:\n On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote: > >         Remember that PostgreSQL doesn't cache anything on its own so\n >         if you do >         want to hit disk it has to be in file cache. > > By my understanding, this is absolutely false.  Postgres caches pages > from tables/indexes in shared_buffers. You can make this very large if\n > you wish. You can make it very large with a potentially serious performance hit. It is very expensive to manage large amounts of shared buffers. It can also nail your IO on checkpoint if you are not careful (even with\n checkpoint smoothing). You are correct that I did not explain what I meant very well because shared buffers are exactly that, shared buffers.You can slam your I/O by havnig too large of either OS file cache or shared_buffers, and you have to tune both. \nIn the case of large shared_buffers you have to tune postgres and especially the background writer and checkpoints. In the case of a large OS cache, you have to tune parameters to limit the ammount of dirty pages there and force writes out smoothly.  \nBoth layers attempt to delay writes for their own, often similar reasons, and suffer when a large sync comes along with lots of dirty data.  Recent ZFS changes have been made to limit this, (http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle)\nin earlier ZFS versions, this is what usually killed databases -- ZFS in some situations would delay writes too long (even if \"long\" is 5 seconds) and get in trouble.  This still has to be tuned well, combined with good checkpoint tuning in Postgres as you mention. For Linux, there are similar issues that have to be tuned on many kernels, or up to 40% of RAM can fill with dirty pages not written to disk.\nLetting the OS do it doesn't get rid of the problem, both levels of cache share very similar issues with large sizes and dirty pages followed by a sync. The buffer cache in shared_buffers is a lot more efficient for large scanning queries -- A select count(*) test will be CPU bound if it comes from shared_buffers or the OS page cache, and in the former case I have seen it execute up to 50% faster than the latter, by avoiding calling out to the OS to get pages, purely as a result of less CPU used.\n    However that isn't the exact same thing as a \"cache\" at least as I was\n trying to describe it. shared buffers are used to keep track of pages (as well as some other stuff) and their current status. That is not the same as caching a relation. 
It is not possible to pin a relation to memory using PostgreSQL.\n PostgreSQL relies on the operating system for that type of caching. The OS can't pin a relation either, from its point of view its all just a bunch of disk data blocks, not relations -- so it is all roughly equivalent.  The OS can do a bit better job at data prefetch on sequential scans or other predictable seek sequences (ARC stands for Adaptive Replacement Cache) than postgres currently does (no intelligent prefetch in postgres AFAIK).\nSo I apologize if I made it sound like Postgres cached the actual relation, its just pages -- but it is basically the same thing as the OS cache, but kept in process closer to the code that needs it.  Its a cache that prevents disk reads.\nMy suggestion for the OP is to try it both ways, and see what is better for his workload / OS / Hardware combination. \n Joshua D. Drake --", "msg_date": "Fri, 31 Oct 2008 08:47:20 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring for maximum memory usage" } ]
[ { "msg_contents": "Hi everybody,\n\nI am running a bake/load test and I am seeing sudden, daily shifts\nfrom CPU utilization to IO wait. The load harness has been running\nfor 3 weeks and should be putting a uniform load on the application.\nThe application processes data on a daily basis and a sawtooth CPU\npattern on the database is expected as more values are added\nthroughout the day and processing resets with the next day. Each day,\nI see the CPU utilization climb as expected until a shift occurs and\nit spends the rest of the day primarily in IO wait.\n\nLooking at pg_statio_user_tables, I can see that during the CPU\nintense timeframe, most of the results come from the buffer cache\n(hits). During the IO wait, most of the results are being read in\n(misses). Examples from each timeframe (CPU/IO) are included below.\nFor each sample, I issued pg_stat_reset(), waited briefly, and then\nqueried pg_statio_user_tables.\n\n*during CPU Intense timeframe*\ndb=# select * from pg_statio_user_tables;\n relid | schemaname | relname |\nheap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit |\ntoast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit\n-------+------------+-----------------------------------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------\n 16612 | public | tablea |\n1 | 1346782 | 1 | 55956 | 0 |\n 0 | 0 | 0\n 16619 | public | tableb |\n0 | 579 | 0 | 1158 | |\n | |\n\n*during IO WAIT timeframe*\ndb=# select * from pg_statio_user_tables;\n relid | schemaname | relname |\nheap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit |\ntoast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit\n-------+------------+-----------------------------------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------\n 16612 | public | tablea |\n244146 | 594 | 4885 | 3703 |\n0 | 0 | 0 | 0\n 16619 | public | tableb |\n418 | 589 | 432 | 1613 | |\n | |\n\n\n\nAnother thing to note, we have VACUUM ANALYZE running on an hourly\ninterval and the switch from CPU to IO wait appears to always coincide\nwith a vacuum.\n\nWhat might cause this shift?\n\nI have tried adjusting buffer_cache from 512 MB to 1024 MB, but this\ndid not appear to have an impact.\n\nI also tried upping the work_mem from 1MB to 10MB, and this did not\nappear to have an impact either.\n\nAny ideas? Thanks for your help!\n\nOliver\n\n\nWe're running Postgresql 8.2.9\n", "msg_date": "Thu, 30 Oct 2008 15:41:40 -0600", "msg_from": "\"Oliver Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "CPU utilization vs. IO wait, shared buffers?" }, { "msg_contents": "On Thursday 30 October 2008, \"Oliver Johnson\" <[email protected]> \nwrote:\n> Another thing to note, we have VACUUM ANALYZE running on an hourly\n> interval and the switch from CPU to IO wait appears to always coincide\n> with a vacuum.\n>\n> What might cause this shift?\n\nThe extra disk access caused by vacuum? That seems pretty obvious.\n\nUse auto-vacuum. There's no reason to vacuum your entire database every hour \n(doing so reads from disk the entirety of every table and index, and \ngenerates some write activity).\n\n-- \nAlan\n", "msg_date": "Thu, 30 Oct 2008 14:48:58 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU utilization vs. IO wait, shared buffers?" 
}, { "msg_contents": "On Thu, Oct 30, 2008 at 3:41 PM, Oliver Johnson\n<[email protected]> wrote:\n\n> Another thing to note, we have VACUUM ANALYZE running on an hourly\n> interval and the switch from CPU to IO wait appears to always coincide\n> with a vacuum.\n\nWhy are you not using autovacuum with appropriate wait parameters to\nkeep it out of your way? Autovacuum tends to make pretty good\ndecisions and you can adjust the aggressiveness with which it kicks in\nif you need to.\n\n> What might cause this shift?\n>\n> I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this\n> did not appear to have an impact.\n\nDo you mean shared_buffers? It may well be that larger shared_buffers\naren't going to help if you're dealing with a largely random\ntransactional load. that said, 1G shared_buffers is not that big\nnowadays. I'm assuming by your testing methods you're on a real db\nserver with several dozen gigs of ram...\n\n> I also tried upping the work_mem from 1MB to 10MB, and this did not\n> appear to have an impact either.\n\nLook into upping your checkpoint_segments (64 or so is reasonable for\na large production server) and possibly increasing your\ncheckpoint_completion_target to something closer to 1.0 (0.7 to 0.8)\nand see if that helps.\n", "msg_date": "Thu, 30 Oct 2008 16:41:23 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU utilization vs. IO wait, shared buffers?" }, { "msg_contents": "On Thu, Oct 30, 2008 at 4:41 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Oct 30, 2008 at 3:41 PM, Oliver Johnson\n> <[email protected]> wrote:\n>\n>> Another thing to note, we have VACUUM ANALYZE running on an hourly\n>> interval and the switch from CPU to IO wait appears to always coincide\n>> with a vacuum.\n>\n> Why are you not using autovacuum with appropriate wait parameters to\n> keep it out of your way? Autovacuum tends to make pretty good\n> decisions and you can adjust the aggressiveness with which it kicks in\n> if you need to.\n\nThanks for the quick feedback. I struggled with autovacuum in a past\nlife and developed a favor for explicit table level vacuums. Also,\nvacuum'ing did not jump out to me as a culprit originally as there is\nno significant impact (or indicators of duress) during the early day\nvacuums.\n\nYou and Alan have brought up some good points, though. I turned\nautovacuum on and increased the checkpoint_segments. I will let it\nrun over night and see how things look.\n\nThanks again.\n\n\n>\n>> What might cause this shift?\n>>\n>> I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this\n>> did not appear to have an impact.\n>\n> Do you mean shared_buffers? It may well be that larger shared_buffers\n> aren't going to help if you're dealing with a largely random\n> transactional load. that said, 1G shared_buffers is not that big\n> nowadays. I'm assuming by your testing methods you're on a real db\n> server with several dozen gigs of ram...\n>\n>> I also tried upping the work_mem from 1MB to 10MB, and this did not\n>> appear to have an impact either.\n>\n> Look into upping your checkpoint_segments (64 or so is reasonable for\n> a large production server) and possibly increasing your\n> checkpoint_completion_target to something closer to 1.0 (0.7 to 0.8)\n> and see if that helps.\n>\n", "msg_date": "Thu, 30 Oct 2008 17:38:14 -0600", "msg_from": "\"Oliver Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU utilization vs. IO wait, shared buffers?" 
}, { "msg_contents": "Quoting Oliver Johnson <[email protected]>:\n\n> Hi everybody,\n>\n> I am running a bake/load test and I am seeing sudden, daily shifts\n> from CPU utilization to IO wait. The load harness has been running\n> for 3 weeks and should be putting a uniform load on the application.\n> The application processes data on a daily basis and a sawtooth CPU\n> pattern on the database is expected as more values are added\n> throughout the day and processing resets with the next day. Each day,\n> I see the CPU utilization climb as expected until a shift occurs and\n> it spends the rest of the day primarily in IO wait.\n>\n> Looking at pg_statio_user_tables, I can see that during the CPU\n> intense timeframe, most of the results come from the buffer cache\n> (hits). During the IO wait, most of the results are being read in\n> (misses). Examples from each timeframe (CPU/IO) are included below.\n> For each sample, I issued pg_stat_reset(), waited briefly, and then\n> queried pg_statio_user_tables.\n>\n> *during CPU Intense timeframe*\n> db=# select * from pg_statio_user_tables;\n> relid | schemaname | relname |\n> heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit |\n> toast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit\n> -------+------------+-----------------------------------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------\n> 16612 | public | tablea |\n> 1 | 1346782 | 1 | 55956 | 0 |\n> 0 | 0 | 0\n> 16619 | public | tableb |\n> 0 | 579 | 0 | 1158 | |\n> | |\n>\n> *during IO WAIT timeframe*\n> db=# select * from pg_statio_user_tables;\n> relid | schemaname | relname |\n> heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit |\n> toast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit\n> -------+------------+-----------------------------------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------\n> 16612 | public | tablea |\n> 244146 | 594 | 4885 | 3703 |\n> 0 | 0 | 0 | 0\n> 16619 | public | tableb |\n> 418 | 589 | 432 | 1613 | |\n> | |\n>\n>\n>\n> Another thing to note, we have VACUUM ANALYZE running on an hourly\n> interval and the switch from CPU to IO wait appears to always coincide\n> with a vacuum.\n>\n> What might cause this shift?\n>\n> I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this\n> did not appear to have an impact.\n>\n> I also tried upping the work_mem from 1MB to 10MB, and this did not\n> appear to have an impact either.\n>\n> Any ideas? Thanks for your help!\n>\n> Oliver\n>\n>\n> We're running Postgresql 8.2.9\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nFirst of all, stop me if you're not running Linux -- that's the only \nOS I know. :) Second, if you're not running a fairly recent 2.6 kernel \n(2.6.18/RHEL 5 or later), you should probably upgrade, because the \nperformance stats are better. 2.6.25 is better still.\n\nNext, if you haven't already, install the \"sysstat\" package. My \nrecollection is that it does not install by default on most distros. \nIt should -- go beat up on the distributors. :)\n\nNow you have \"iostat\" installed. That will give you detailed \ninformation on both processor and I/O activity. 
Use the command\n\n$ iostat -cdmtx 10 999999 | tee iostat.log\n\nThis will sample the processor(s), all the devices, and on 2.6.25 or \nlater kernels, all the *partitions*. This last is important if you \nhave things in different filesystems.\n\nWhat you will probably see is samples where the I/O wait is high \ncorrelated with high levels of read activity (reads per second and \nread megabytes per second) and high device utilization. That means you \nare reading data from disk and the processors are waiting for it. What \ncan you do about it?\n\n1. Add RAM. This will let Linux put more stuff in page cache, making \nit have to read less.\n2. Experiment with the four I/O schedulers. You can change them at run \ntime (as \"root\").\n\nI've put a little bit of this on line -- it's fairly detailed, and \nit's not PostgreSQL-specific, but you can get an indication of the \nconcepts. By the way, I am working on some scripts that will actually \nintegrate this type of monitoring and analysis with PostgreSQL. What \nthey will do is load the raw Linux data into a PostgreSQL database and \nprovide analysis queries and other tools. But for now, see if this \nmakes any sense to you:\n\nhttp://cougar.rubyforge.org/svn/trunk/procmodel/IO-Counters/beamer/handout.pdf\n", "msg_date": "Fri, 31 Oct 2008 03:18:53 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: CPU utilization vs. IO wait, shared buffers?" } ]
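A short sketch of the measurement this thread leans on: reset the statistics, let the workload run for a representative interval, then turn the raw pg_statio_user_tables counters into buffer hit ratios so the CPU-bound and I/O-wait-bound phases can be compared with a single number per table. Column names are as in 8.2/8.3; the rounding and ordering are just one way to present it.

  SELECT pg_stat_reset();
  -- ... wait while the load harness runs, then:
  SELECT schemaname, relname,
         heap_blks_read, heap_blks_hit,
         round(heap_blks_hit::numeric
               / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS heap_hit_ratio,
         round(idx_blks_hit::numeric
               / nullif(idx_blks_hit + idx_blks_read, 0), 3) AS idx_hit_ratio
    FROM pg_statio_user_tables
   ORDER BY heap_blks_read DESC;

A hit ratio that collapses right after the hourly VACUUM ANALYZE is consistent with the vacuum pass evicting the working set from cache, which is why the thread moves to autovacuum and a larger checkpoint_segments instead.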
[ { "msg_contents": "I've run across a strange problem with PG 8.3.3 not using indexes on a\nparticular table after building the table during a transaction.\n\nYou can see a transcript of the issue here:\n\nhttp://gist.github.com/21154\n\nInterestingly, if I create another temp table 'CREATE TEMP TABLE AS\nSELECT * FROM act' as seen on line 107, then add the same indexes to\nthat table, PG will use the indexes. While it's not in the gist\ntranscript, even an extremely simple query like:\n\nSELECT * FROM act WHERE act_usr_id = 1;\n\nwill not use the index on the original act table, but the jefftest and\njefftest2 tables both work fine. As you can probably see in the\ntranscript, the tables have been ANALYZEd. I even tried 'enable\nseqscan=0;' and that made the cost really high for the seq scan, but the\nplanner still chose the seq scan.\n\nThe issue does not affect 8.2.3 nor does it affect 8.3.4. I didn't see\nany mention of a fix for this sort of thing in 8.3.4's release notes. I\nwas wondering if this is a known bug in 8.3.3 (and maybe other 8.3.x\nversions) and just didn't make it into the release notes of 8.3.4?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032\n\n", "msg_date": "Thu, 30 Oct 2008 17:18:06 -0700", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Index usage problem on 8.3.3" }, { "msg_contents": "Jeff Frost <[email protected]> writes:\n> I've run across a strange problem with PG 8.3.3 not using indexes on a\n> particular table after building the table during a transaction.\n\nThis may be a HOT side-effect ... is pg_index.indcheckxmin set for\nthe index?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2008 20:31:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage problem on 8.3.3 " }, { "msg_contents": "Tom Lane wrote:\n> Jeff Frost <[email protected]> writes:\n> \n>> I've run across a strange problem with PG 8.3.3 not using indexes on a\n>> particular table after building the table during a transaction.\n>> \n>\n> This may be a HOT side-effect ... is pg_index.indcheckxmin set for\n> the index?\n> \nYep, sure enough, the 'act' table's indexes have it set and jefftest and\njefftest2's indexes do not.\n\nselect c.relname,i.indcheckxmin from pg_class c, pg_index i WHERE\ni.indexrelid = c.oid AND c.relname IN ('act_act_usr_id', 'act_arrived',\n'act_closing', 'act_place');\n relname | indcheckxmin\n----------------+--------------\n act_closing | t\n act_act_usr_id | t\n act_place | t\n act_arrived | t\n(4 rows)\n\n\nconsdb=# select c.relname,i.indcheckxmin from pg_class c, pg_index i\nWHERE i.indexrelid = c.oid AND c.relname IN\n('jefftest2_jefftest_usr_id', 'jefftest2_arrived', 'jefftest2_closing',\n'jefftest2_place');\n relname | indcheckxmin\n---------------------------+--------------\n jefftest2_jefftest_usr_id | f\n jefftest2_place | f\n jefftest2_arrived | f\n jefftest2_closing | f\n(4 rows)\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032\n\n\n\n\n\n\n\nTom Lane wrote:\n\nJeff Frost <[email protected]> writes:\n \n\nI've run across a strange problem with PG 8.3.3 not using indexes on a\nparticular table after building the table during a transaction.\n \n\n\nThis may be a HOT side-effect ... 
is pg_index.indcheckxmin set for\nthe index?\n \n\nYep, sure enough, the 'act' table's indexes have it set and jefftest\nand jefftest2's indexes do not.\n\nselect c.relname,i.indcheckxmin  from pg_class c, pg_index i WHERE\ni.indexrelid = c.oid AND c.relname IN ('act_act_usr_id', 'act_arrived',\n'act_closing', 'act_place');\n    relname     | indcheckxmin\n----------------+--------------\n act_closing    | t\n act_act_usr_id | t\n act_place      | t\n act_arrived    | t\n(4 rows)\n\n\nconsdb=# select c.relname,i.indcheckxmin  from pg_class c, pg_index i\nWHERE i.indexrelid = c.oid AND c.relname IN\n('jefftest2_jefftest_usr_id', 'jefftest2_arrived', 'jefftest2_closing',\n'jefftest2_place');\n          relname          | indcheckxmin\n---------------------------+--------------\n jefftest2_jefftest_usr_id | f\n jefftest2_place           | f\n jefftest2_arrived         | f\n jefftest2_closing         | f\n(4 rows)\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032", "msg_date": "Thu, 30 Oct 2008 17:52:17 -0700", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage problem on 8.3.3" }, { "msg_contents": "Jeff Frost <[email protected]> writes:\n> Tom Lane wrote:\n>> This may be a HOT side-effect ... is pg_index.indcheckxmin set for\n>> the index?\n>> \n> Yep, sure enough, the 'act' table's indexes have it set and jefftest and\n> jefftest2's indexes do not.\n\nOkay. What that means is that the indexes were created on data that had\nalready been inserted and updated to some extent, resulting in\nHOT-update chains that turned out to be illegal for the new indexes.\nThe way we deal with this is to mark the indexes as not usable by any\nquery that can still see the dead HOT-updated tuples.\n\nYour best bet for dodging the problem is probably to break the operation\ninto two transactions, if that's possible. INSERT and UPDATE in the\nfirst xact, create the indexes at the start of the second. (Hmm ...\nI'm not sure if that's sufficient if there are other concurrent\ntransactions; but it's certainly necessary.) Another possibility is\nto create the indexes just after data load, before you start updating\nthe columns they're on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2008 21:14:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage problem on 8.3.3 " }, { "msg_contents": "Tom Lane wrote:\n> Okay. What that means is that the indexes were created on data that had\n> already been inserted and updated to some extent, resulting in\n> HOT-update chains that turned out to be illegal for the new indexes.\n> The way we deal with this is to mark the indexes as not usable by any\n> query that can still see the dead HOT-updated tuples.\n>\n> Your best bet for dodging the problem is probably to break the operation\n> into two transactions, if that's possible. INSERT and UPDATE in the\n> first xact, create the indexes at the start of the second. (Hmm ...\n> I'm not sure if that's sufficient if there are other concurrent\n> transactions; but it's certainly necessary.) 
Another possibility is\n> to create the indexes just after data load, before you start updating\n> the columns they're on.\n>\n> \t\nThanks Tom!\n\nAny idea why I don't see it on 8.3.4?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032\n\n", "msg_date": "Thu, 30 Oct 2008 18:22:19 -0700", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage problem on 8.3.3" }, { "msg_contents": "Jeff Frost <[email protected]> writes:\n> Tom Lane wrote:\n>> Okay. What that means is that the indexes were created on data that had\n>> already been inserted and updated to some extent, resulting in\n>> HOT-update chains that turned out to be illegal for the new indexes.\n>> The way we deal with this is to mark the indexes as not usable by any\n>> query that can still see the dead HOT-updated tuples.\n\n> Any idea why I don't see it on 8.3.4?\n\nI think it's more likely some small difference in your test conditions\nthan any real version-to-version difference. In particular I think the\n\"still see\" test might be influenced by the ages of transactions running\nconcurrently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2008 21:27:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage problem on 8.3.3 " }, { "msg_contents": "On Thu, 30 Oct 2008, Tom Lane wrote:\n\n>> Any idea why I don't see it on 8.3.4?\n>\n> I think it's more likely some small difference in your test conditions\n> than any real version-to-version difference. In particular I think the\n> \"still see\" test might be influenced by the ages of transactions running\n> concurrently.\n\nInteresting. This is on a test server which has no other concurrent \ntransactions and it acts the same way after I stopped 8.3.4 and started up \n8.3.3 again as it did before stopping 8.3.3 to bring up 8.3.4. Hrmm..I'm not \nsure that makes sense. So, I did the test with the sql script on 8.3.3, then \nshut down 8.3.3, started up 8.3.4 on the same data dir, ran the test \nsuccessfully. Next I shut down 8.3.4 and started 8.3.3 and verified that the \nbehavior was still the same on 8.3.3. I wonder what else I might be doing \ndifferently.\n\nThe good news is that making the indexes before the updates seems to make the \nplanner happy!\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032\n", "msg_date": "Thu, 30 Oct 2008 18:49:21 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage problem on 8.3.3 " }, { "msg_contents": "Jeff Frost <[email protected]> writes:\n> On Thu, 30 Oct 2008, Tom Lane wrote:\n>>> Any idea why I don't see it on 8.3.4?\n>> \n>> I think it's more likely some small difference in your test conditions\n>> than any real version-to-version difference. In particular I think the\n>> \"still see\" test might be influenced by the ages of transactions running\n>> concurrently.\n\n> Interesting. This is on a test server which has no other concurrent \n> transactions and it acts the same way after I stopped 8.3.4 and started up \n> 8.3.3 again as it did before stopping 8.3.3 to bring up 8.3.4. Hrmm..I'm not \n> sure that makes sense. So, I did the test with the sql script on 8.3.3, then \n> shut down 8.3.3, started up 8.3.4 on the same data dir, ran the test \n> successfully. 
Next I shut down 8.3.4 and started 8.3.3 and verified that the \n> behavior was still the same on 8.3.3. I wonder what else I might be doing \n> differently.\n\nHuh. That does sound like it's a version-to-version difference.\nThere's nothing in the CVS log that seems related though. Are you\nwilling to post your test case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2008 21:55:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage problem on 8.3.3 " }, { "msg_contents": "Tom Lane wrote:\n> Jeff Frost <[email protected]> writes:\n> \n>> On Thu, 30 Oct 2008, Tom Lane wrote:\n>> \n>>>> Any idea why I don't see it on 8.3.4?\n>>>> \n>>> I think it's more likely some small difference in your test conditions\n>>> than any real version-to-version difference. In particular I think the\n>>> \"still see\" test might be influenced by the ages of transactions running\n>>> concurrently.\n>>> \n>\n> \n>> Interesting. This is on a test server which has no other concurrent \n>> transactions and it acts the same way after I stopped 8.3.4 and started up \n>> 8.3.3 again as it did before stopping 8.3.3 to bring up 8.3.4. Hrmm..I'm not \n>> sure that makes sense. So, I did the test with the sql script on 8.3.3, then \n>> shut down 8.3.3, started up 8.3.4 on the same data dir, ran the test \n>> successfully. Next I shut down 8.3.4 and started 8.3.3 and verified that the \n>> behavior was still the same on 8.3.3. I wonder what else I might be doing \n>> differently.\n>> \n>\n> Huh. That does sound like it's a version-to-version difference.\n> There's nothing in the CVS log that seems related though. Are you\n> willing to post your test case?\n> \t\n> \nIt's a customer DB, so I'll contact them and see if we can boil it down\nto a test case with no sensitive data.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032\n\n\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nJeff Frost <[email protected]> writes:\n \n\nOn Thu, 30 Oct 2008, Tom Lane wrote:\n \n\n\nAny idea why I don't see it on 8.3.4?\n \n\nI think it's more likely some small difference in your test conditions\nthan any real version-to-version difference. In particular I think the\n\"still see\" test might be influenced by the ages of transactions running\nconcurrently.\n \n\n\n\n \n\nInteresting. This is on a test server which has no other concurrent \ntransactions and it acts the same way after I stopped 8.3.4 and started up \n8.3.3 again as it did before stopping 8.3.3 to bring up 8.3.4. Hrmm..I'm not \nsure that makes sense. So, I did the test with the sql script on 8.3.3, then \nshut down 8.3.3, started up 8.3.4 on the same data dir, ran the test \nsuccessfully. Next I shut down 8.3.4 and started 8.3.3 and verified that the \nbehavior was still the same on 8.3.3. I wonder what else I might be doing \ndifferently.\n \n\n\nHuh. That does sound like it's a version-to-version difference.\nThere's nothing in the CVS log that seems related though. 
Are you\nwilling to post your test case?\n\t\n \n\nIt's a customer DB, so I'll contact them and see if we can boil it down\nto a test case with no sensitive data.\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032", "msg_date": "Thu, 30 Oct 2008 19:47:25 -0700", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage problem on 8.3.3" }, { "msg_contents": "Jeff Frost <[email protected]> writes:\n> Tom Lane wrote:\n>> Huh. That does sound like it's a version-to-version difference.\n>> There's nothing in the CVS log that seems related though. Are you\n>> willing to post your test case?\n>> \n> It's a customer DB, so I'll contact them and see if we can boil it down\n> to a test case with no sensitive data.\n\nWell, if there was a change it seems to have been in the right direction\n;-) so this is mostly just idle curiosity. Don't jump through hoops to\nget a test case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Oct 2008 22:51:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage problem on 8.3.3 " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Jeff Frost <[email protected]> writes:\n>> Tom Lane wrote:\n>>> Huh. That does sound like it's a version-to-version difference.\n>>> There's nothing in the CVS log that seems related though. Are you\n>>> willing to post your test case?\n>>> \n>> It's a customer DB, so I'll contact them and see if we can boil it down\n>> to a test case with no sensitive data.\n>\n> Well, if there was a change it seems to have been in the right direction\n> ;-) so this is mostly just idle curiosity. Don't jump through hoops to\n> get a test case.\n\nAssuming it's not a bug...\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Fri, 31 Oct 2008 08:53:26 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage problem on 8.3.3" }, { "msg_contents": "On Fri, 31 Oct 2008, Gregory Stark wrote:\n\n> Tom Lane <[email protected]> writes:\n>\n>> Jeff Frost <[email protected]> writes:\n>>> Tom Lane wrote:\n>>>> Huh. That does sound like it's a version-to-version difference.\n>>>> There's nothing in the CVS log that seems related though. Are you\n>>>> willing to post your test case?\n>>>>\n>>> It's a customer DB, so I'll contact them and see if we can boil it down\n>>> to a test case with no sensitive data.\n>>\n>> Well, if there was a change it seems to have been in the right direction\n>> ;-) so this is mostly just idle curiosity. Don't jump through hoops to\n>> get a test case.\n>\n> Assuming it's not a bug...\n\nWell, after boiling down my test case to the bare essentials, I was unable to \nreproduce the different behavior between 8.3.3 and 8.3.4. Now, I've gone back \nto the original script and can't reproduce the behavior I previously saw on \n8.3.4 and my screen session doesn't have enough scrollback to look at what \nhappened previously. I was thinking perhaps I had inadvertently committed the \ntransaction, but then the act would have been dropped as it's a temp table \ncreated with ON COMMIT DROP. But, I've tested 3 times in a row and every time \n8.3.4 uses the seq scan just like 8.3.3 now, so I must've done something \ndifferently to get that result as Tom had originally suggested. 
I just can't \nthink what it might have been. Perhaps it's time to buy some glasses. :-/\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032\n", "msg_date": "Fri, 31 Oct 2008 10:58:28 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage problem on 8.3.3" } ]
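A minimal sketch of the second workaround Tom Lane suggests above (build the indexes right after the bulk load, before any UPDATE touches the table), reusing the table and index names from the thread; the column definitions and the data source here are guesses for illustration only, not the customer's real schema:

    BEGIN;
    CREATE TEMPORARY TABLE act (
        act_usr_id integer,
        arrived    timestamptz,
        closing    timestamptz,
        place      text
    ) ON COMMIT DROP;

    -- Bulk load first (the source table is hypothetical):
    INSERT INTO act (act_usr_id, arrived, closing, place)
    SELECT act_usr_id, arrived, closing, place FROM staging_act;

    -- Create the indexes before the UPDATEs, so no HOT chains exist yet
    -- and pg_index.indcheckxmin should stay false:
    CREATE INDEX act_act_usr_id ON act (act_usr_id);
    CREATE INDEX act_arrived    ON act (arrived);

    -- Only now modify the data:
    UPDATE act SET closing = arrived + interval '1 hour' WHERE closing IS NULL;

    -- The same check used earlier in the thread:
    SELECT c.relname, i.indcheckxmin
    FROM pg_class c JOIN pg_index i ON i.indexrelid = c.oid
    WHERE c.relname IN ('act_act_usr_id', 'act_arrived');

    -- Run the queries that need these indexes here, inside the same
    -- transaction, since ON COMMIT DROP removes the table at COMMIT.
    COMMIT;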
[ { "msg_contents": "What is the general consensus around here ... to have many smaller tables, or have a few large tables?\n\nI'm looking at a db model where a client has around 5500 spatial (PostGIS) tables, where the volume of each one varies \ngreatly ... from a few hundred rows to over 250,000.\n\nSpeed is of the utmost importance. I'm investigating various options, like grouping the tables based on a common \nattribute or spatial type (POINT, LINESTRING, etc) into many multi-million tuple tables.\n\nOr, table inheritance could be my friend here, in terms of performance. Ie. Using inheritance and constraint exclusion, \nthe query planner could quickly isolate the tables of interest. It's orders of magnitude faster to perform a sequential \nscan through a relatively small table than it is to do an index scan on a large, likely unclustered table. The question \nis, can the underlying OS handle thousands of tables in a tablespace? Would it overwhelm the parser to perform \nconstraint exclusion on 50-100 tables? Can it be done relatively quickly?\n\nClearly, serious testing is in order, but I just wanted to get a feel for things before I dive in.\n\nCheers,\nKevin\n", "msg_date": "Mon, 03 Nov 2008 16:29:22 -0800", "msg_from": "Kevin Neufeld <[email protected]>", "msg_from_op": true, "msg_subject": "many tables vs large tables" }, { "msg_contents": "\nHello ,\nI am trying to install postgresql-8.1.5 and postgresql-8.2.5 in linux (Linux \nversion 2.6.25-14.fc9.i686 (mockbuild@) (gcc version 4.3.0 20080428 (Red Hat \n4.3.0-8) (GCC) ) #1 SMP Thu May 1 06:28:41 EDT 2008).but during compilation \nit is showing following error.\n\nmake[3]: *** [plperl.o] Error 1\nmake[3]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl/plperl'\nmake[2]: *** [all] Error 1\nmake[2]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/home/postgres/postgresql-8.1.5/src'\nmake: *** [all] Error 2\n\nPlease tell me how I can avoid this kind of error.\n\nThanks & regard\nPraveen kumar.\n\n\n", "msg_date": "Tue, 4 Nov 2008 19:50:49 +0530", "msg_from": "\"praveen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Installation Error of postgresql-8.1.5 with perl." }, { "msg_contents": "\n\n Hello ,\n I am trying to install postgresql-8.1.5 and postgresql-8.2.5 in linux \n(Linux\n version 2.6.25-14.fc9.i686 (mockbuild@) (gcc version 4.3.0 20080428 (Red \nHat\n 4.3.0-8) (GCC) ) #1 SMP Thu May 1 06:28:41 EDT 2008).but during compilation\n it is showing following error.\n\nI configure with following options.\n./configure --prefix=/home/local/pgsql/ --without-readline --with-perl --with-python \n --with-tcl --with-tclconfig=/usr/src/tcl8.4.16/unix --enable-nls\nbut when I execute command \"make \" that time I got following errors.\n\n make[3]: *** [plperl.o] Error 1\n make[3]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl/plperl'\n make[2]: *** [all] Error 1\n make[2]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl'\n make[1]: *** [all] Error 2\n make[1]: Leaving directory `/home/postgres/postgresql-8.1.5/src'\n make: *** [all] Error 2\n\n Please tell me how I can avoid this kind of error.\n\n Thanks & regard\n Praveen kumar.\n\n\n\n", "msg_date": "Tue, 4 Nov 2008 20:17:00 +0530", "msg_from": "\"praveen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Error of postgresql-8.1.5 with perl." 
}, { "msg_contents": "\"praveen\" <[email protected]> writes:\n> but when I execute command \"make \" that time I got following errors.\n\n> make[3]: *** [plperl.o] Error 1\n> make[3]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl/plperl'\n> make[2]: *** [all] Error 1\n> make[2]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory `/home/postgres/postgresql-8.1.5/src'\n> make: *** [all] Error 2\n\n1. You've snipped away the actual error message, so no one can tell\nwhat went wrong.\n\n2. It is completely inappropriate to cross-post to four different\nmailing lists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Nov 2008 09:49:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Error of postgresql-8.1.5 with perl. " }, { "msg_contents": "Hello Tom,\nDuring configure I find the error in config.log file\nchecking for flags to link embedded Perl... Can't locate ExtUtils/Embed.pm \nin @INC (@INC contains: /usr/lib/perl5/5.10.0/i386-linux-thread-multi \n/usr/lib/perl5/5.10.0 \n/usr/lib/perl5/site_perl/5.10.0/i386-linux-thread-multi \n/usr/lib/perl5/site_perl/5.10.0 /usr/lib/perl5/site_perl/5.8.8 \n/usr/lib/perl5/site_perl/5.8.7 /usr/lib/perl5/site_perl/5.8.6 \n/usr/lib/perl5/site_perl/5.8.5 /usr/lib/perl5/site_perl \n/usr/lib/perl5/vendor_perl/5.10.0/i386-linux-thread-multi \n/usr/lib/perl5/vendor_perl/5.10.0 /usr/lib/perl5/vendor_perl/5.8.8 \n/usr/lib/perl5/vendor_perl/5.8.7 /usr/lib/perl5/vendor_perl/5.8.6 \n/usr/lib/perl5/vendor_perl/5.8.5 /usr/lib/perl5/vendor_perl .).\n\nand when I execute the \"make\" command that time I find the following \nerrors.\n\nmake -C error SUBSYS.o\nmake[4]: Entering directory \n`/home/postgres/postgresql-8.1.5/src/backend/utils/error'\ngcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement \n -Wendif-labels -fno-strict-aliasing -fpic -I. 
-I../../../src/include -D_GNU_SOURCE \n -I/usr/lib/perl5/5.10.0/i386-linux-thread-multi/CORE -c -o plperl.o \nplperl.c\nplperl.c:67:20: error: EXTERN.h: No such file or directory\nplperl.c:68:18: error: perl.h: No such file or directory\nplperl.c:69:18: error: XSUB.h: No such file or directory\nppport.h:174:24: error: patchlevel.h: No such file or directory\nppport.h:177:44: error: could_not_find_Perl_patchlevel.h: No such file or \ndirectory\nppport.h:371: error: expected ')' before '*' token\nppport.h:563: warning: type defaults to 'int' in declaration of 'SV'\nppport.h:563: error: expected ';', ',' or ')' before '*' token\nplperl.c:122: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:123: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:145: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:147: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:324: error: 'plperl_proc_hash' undeclared (first use in this \nfunction)\nplperl.c:374: error: 'SV' undeclared (first use in this function)\nplperl.c:374: error: 'res' undeclared (first use in this function)\nplperl.c:417: error: expected ')' before '*' token\nplperl.c:451: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:480: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:576: error: expected ')' before '*' token\nplperl.c:736: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:831: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'void'\nplperl.c:832: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'void'\nplperl.c:844: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:939: error: expected '=', ',', ';', 'asm' or '__attribute__' before \n'*' token\nplperl.c:1000: error: 'SV' undeclared (first use in this function)\nplperl.c:1000: error: 'perlret' undeclared (first use in this function)\nplperl.c:1001: warning: ISO C90 forbids mixed declarations and code\nplperl.c:1003: error: 'array_ret' undeclared (first use in this function)\nplperl.c:1032: warning: implicit declaration of function \n'plperl_call_perl_func'\nplperl.c:1051: warning: implicit declaration of function 'SvTYPE'\nplperl.c:1051: error: 'SVt_RV' undeclared (first use in this function)\nplperl.c:1052: warning: implicit declaration of function 'SvRV'\nplperl.c:1052: error: 'SVt_PVAV' undeclared (first use in this function)\nplperl.c:1055: error: 'svp' undeclared (first use in this function)\nplperl.c:1056: error: 'AV' undeclared (first use in this function)\nplperl.c:1056: error: 'rav' undeclared (first use in this function)\nplperl.c:1056: error: expected expression before ')' token\nplperl.c:1058: warning: implicit declaration of function 'av_fetch'\nplperl.c:1060: warning: implicit declaration of function \n'plperl_return_next'\nplperl.c:1064: error: 'SVt_NULL' undeclared (first use in this function)\nplperl.c:1095: warning: implicit declaration of function 'SvOK'\nplperl.c:1096: error: 'SVt_PVHV' undeclared (first use in this function)\nplperl.c:1114: warning: implicit declaration of function \n'plperl_build_tuple_result'\nplperl.c:1114: error: 'HV' undeclared (first use in this function)\nplperl.c:1114: error: expected expression before ')' token\nplperl.c:1114: warning: assignment makes pointer from integer without a cast\nplperl.c:1122: warning: implicit declaration 
of function 'SvROK'\nplperl.c:1125: warning: implicit declaration of function \n'plperl_convert_to_pg_array'\nplperl.c:1126: warning: implicit declaration of function 'SvREFCNT_dec'\nplperl.c:1130: warning: implicit declaration of function 'SvPV'\nplperl.c:1130: error: 'na' undeclared (first use in this function)\nplperl.c:1130: warning: assignment makes pointer from integer without a cast\nplperl.c: In function 'plperl_trigger_handler':\nplperl.c:1150: error: 'SV' undeclared (first use in this function)\nplperl.c:1150: error: 'perlret' undeclared (first use in this function)\nplperl.c:1151: warning: ISO C90 forbids mixed declarations and code\nplperl.c:1152: error: 'svTD' undeclared (first use in this function)\nplperl.c:1153: error: 'HV' undeclared (first use in this function)\nplperl.c:1153: error: 'hvTD' undeclared (first use in this function)\nplperl.c:1172: error: expected expression before ')' token\nplperl.c:1183: error: 'SVt_NULL' undeclared (first use in this function)\nplperl.c:1202: error: 'na' undeclared (first use in this function)\nplperl.c:1253: error: 'SV' undeclared (first use in this function)\nplperl.c:1253: error: 'svp' undeclared (first use in this function)\nplperl.c:1276: error: 'plperl_proc_hash' undeclared (first use in this \nfunction)\nplperl.c:1468: error: 'plperl_proc_desc' has no member named 'reference'\nplperl.c:1479: error: 'IV' undeclared (first use in this function)\nplperl.c:1479: error: expected ')' before 'prodesc'\nplperl.c:1490: error: expected '=', ',', ';', 'asm' or '__attribute__' \nbefore '*' token\nplperl.c:1544: error: expected '=', ',', ';', 'asm' or '__attribute__' \nbefore '*' token\nplperl.c:1613: error: expected '=', ',', ';', 'asm' or '__attribute__' \nbefore '*' token\nplperl.c:1657: error: expected ')' before '*' token\nplperl.c:1770: error: expected '=', ',', ';', 'asm' or '__attribute__' \nbefore '*' token\nplperl.c:1844: error: expected '=', ',', ';', 'asm' or '__attribute__' \nbefore '*' token\n\n\n\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"praveen\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, November 04, 2008 8:19 PM\nSubject: Re: Installation Error of postgresql-8.1.5 with perl.\n\n\n> \"praveen\" <[email protected]> writes:\n>> but when I execute command \"make \" that time I got following errors.\n>\n>> make[3]: *** [plperl.o] Error 1\n>> make[3]: Leaving directory \n>> `/home/postgres/postgresql-8.1.5/src/pl/plperl'\n>> make[2]: *** [all] Error 1\n>> make[2]: Leaving directory `/home/postgres/postgresql-8.1.5/src/pl'\n>> make[1]: *** [all] Error 2\n>> make[1]: Leaving directory `/home/postgres/postgresql-8.1.5/src'\n>> make: *** [all] Error 2\n>\n> 1. You've snipped away the actual error message, so no one can tell\n> what went wrong.\n>\n> 2. It is completely inappropriate to cross-post to four different\n> mailing lists.\n>\n> regards, tom lane\n> \n\n\n", "msg_date": "Wed, 5 Nov 2008 12:31:21 +0530", "msg_from": "\"praveen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Error of postgresql-8.1.5 with perl. " }, { "msg_contents": "\"praveen\" <[email protected]> writes:\n> During configure I find the error in config.log file\n> checking for flags to link embedded Perl... 
Can't locate ExtUtils/Embed.pm \n> in @INC (@INC contains: /usr/lib/perl5/5.10.0/i386-linux-thread-multi \n\nWell, there's your problem ...\n\nFYI, our current Fedora RPMs show both ExtUtils::MakeMaker and\nExtUtils::Embed as required to build Postgres from source.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Nov 2008 10:24:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Error of postgresql-8.1.5 with perl. " }, { "msg_contents": "Thanks a lot , Tom Lane.\nI installed below mentioned RPMs and now it is working\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"praveen\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Wednesday, November 05, 2008 8:54 PM\nSubject: Re: [ADMIN] Installation Error of postgresql-8.1.5 with perl.\n\n\n> \"praveen\" <[email protected]> writes:\n>> During configure I find the error in config.log file\n>> checking for flags to link embedded Perl... Can't locate \n>> ExtUtils/Embed.pm\n>> in @INC (@INC contains: /usr/lib/perl5/5.10.0/i386-linux-thread-multi\n>\n> Well, there's your problem ...\n>\n> FYI, our current Fedora RPMs show both ExtUtils::MakeMaker and\n> ExtUtils::Embed as required to build Postgres from source.\n>\n> regards, tom lane\n>\n> -- \n> Sent via pgsql-admin mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-admin\n> \n\n\n", "msg_date": "Thu, 6 Nov 2008 10:52:22 +0530", "msg_from": "\"praveen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Error of postgresql-8.1.5 with perl. " } ]
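For anyone hitting the same configure failure, the missing piece is the ExtUtils::Embed Perl module (ExtUtils::MakeMaker is needed as well, per Tom Lane's note). A rough sketch of the fix is below; package names vary between distributions and releases, so treat these commands as examples rather than the exact ones:

    # Does configure's prerequisite load at all?
    perl -MExtUtils::Embed -e 1 && echo "ExtUtils::Embed is available"

    # Fedora-style systems usually ship it in the Perl devel packages
    # (illustrative package names -- check your distribution):
    yum install perl-devel perl-ExtUtils-Embed perl-ExtUtils-MakeMaker

    # Or pull it from CPAN instead:
    perl -MCPAN -e 'install("ExtUtils::Embed")'

    # Then re-run configure with the same flags as before and rebuild:
    ./configure --prefix=/home/local/pgsql/ --with-perl --with-python --with-tcl
    make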
[ { "msg_contents": "Hi,\n\nI have a small problem. I have one view which sum's a another tables field and \nuses that sum for several things including filtering. Every time it uses the \nthat summarised field in other queries or views, the planner seems to duplicate \nthe SUM. Isn't it possible for the planner only to do the SUM once and reuse it?\n\nI've done a small example illustrating my problem. I'm using 8.3.3. Hope \nsomebody can tell my what I'm doing wrong or why this is happening. The problem \nisn't great and I think I can work around it, using a stable function for doing \nthe SUM, but I'm still wondering.\n\nBest regards\n\nMartin\n\n----------------------\nEXAMPLE:\n\n\nCREATE TABLE test (\n id INTEGER,\n name TEXT\n);\n\nINSERT INTO test (id, name) VALUES(1, 'ddd');\n\nCREATE TABLE test_use (\n test_id INTEGER,\n number INTEGER\n);\n\nINSERT INTO test_use (test_id, number) VALUES(1, 3);\nINSERT INTO test_use (test_id, number) VALUES(1, 4);\nINSERT INTO test_use (test_id, number) VALUES(1, 1);\nINSERT INTO test_use (test_id, number) VALUES(1, 27);\n\nCREATE OR REPLACE VIEW v_test_with_number AS\n SELECT\n t.*,\n (SELECT SUM(number) FROM test_use WHERE test_id = t.id) as numbers\n FROM test t;\n\nCREATE OR REPLACE VIEW v_test_with_number_filtered AS\n SELECT * FROM v_test_with_number WHERE numbers > 0;\n\nEXPLAIN SELECT * FROM v_test_with_number;\n QUERY PLAN\n------------------------------------------------------------------------\n Seq Scan on test t (cost=0.00..45274.00 rows=1230 width=36)\n SubPlan\n -> Aggregate (cost=36.78..36.79 rows=1 width=4)\n -> Seq Scan on test_use (cost=0.00..36.75 rows=11 width=4)\n Filter: (test_id = $0)\n(5 rows)\n\nEXPLAIN SELECT * FROM v_test_with_number_filtered;\n QUERY PLAN\n------------------------------------------------------------------------\n Seq Scan on test t (cost=0.00..60360.97 rows=410 width=36)\n Filter: ((subplan) > 0)\n SubPlan\n -> Aggregate (cost=36.78..36.79 rows=1 width=4)\n -> Seq Scan on test_use (cost=0.00..36.75 rows=11 width=4)\n Filter: (test_id = $0)\n -> Aggregate (cost=36.78..36.79 rows=1 width=4)\n -> Seq Scan on test_use (cost=0.00..36.75 rows=11 width=4)\n Filter: (test_id = $0)\n(9 rows)\n", "msg_date": "Tue, 4 Nov 2008 11:56:25 +0100", "msg_from": "Martin Kjeldsen <[email protected]>", "msg_from_op": true, "msg_subject": "Aggregate weirdness" }, { "msg_contents": "Martin Kjeldsen <[email protected]> writes:\n> CREATE OR REPLACE VIEW v_test_with_number AS\n> SELECT\n> t.*,\n> (SELECT SUM(number) FROM test_use WHERE test_id = t.id) as numbers\n> FROM test t;\n\nThis is a bad way to do it --- the sub-select isn't readily optimizable.\nTry something like\n\n SELECT t.id, t.name, sum(test_use.number) AS numbers\n FROM test_use\n JOIN test t ON test_use.test_id = t.id\n GROUP BY t.id, t.name;\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Nov 2008 08:39:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggregate weirdness " } ]
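Applying Tom Lane's rewrite to both views from the example gives something like the sketch below. A LEFT JOIN is used here (a small variation on the reply) so that test rows with no matching test_use rows still come out with a NULL sum, as they did with the original sub-select; with the filtered view layered on top, the SUM is evaluated once per group instead of once more just for the filter:

    CREATE OR REPLACE VIEW v_test_with_number AS
        SELECT t.id, t.name, SUM(u.number) AS numbers
        FROM test t
        LEFT JOIN test_use u ON u.test_id = t.id
        GROUP BY t.id, t.name;

    CREATE OR REPLACE VIEW v_test_with_number_filtered AS
        SELECT * FROM v_test_with_number WHERE numbers > 0;

    -- Equivalent one-off form with the filter folded into the aggregation:
    -- SELECT t.id, t.name, SUM(u.number) AS numbers
    -- FROM test t JOIN test_use u ON u.test_id = t.id
    -- GROUP BY t.id, t.name
    -- HAVING SUM(u.number) > 0;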
[ { "msg_contents": "Dear All,\n\n\nRecently i have released the next version of the epqa. which is a very\nuseful tool for, gives input for optimizing psql queries, and fine tuning\nit.\n\nepqa is tool similar like, pqa. But designed and implemented to parse log\nfiles which is in GB's. Report is similar like that.\n\nMore information can be got from http://epqa.sourceforge.net/\n\n\nExpecting suggestions, feedbacks, clarfications @ [email protected]\n\nNote: This is to propagate the open source which can help for postgres\nusers.\nThis is not a spam, or advertisement.\n\nRegards\nSathiyaMoorthy\n\nDear All,Recently i have released the next version of the  epqa. which is a very useful tool for, gives input for optimizing psql queries, and fine tuning it.epqa is tool similar like, pqa. But designed and implemented to parse log files which is in GB's. Report is similar like that.\nMore information can be got from http://epqa.sourceforge.net/Expecting suggestions, feedbacks, clarfications @ [email protected]\nNote: This is to propagate the open source which can help for postgres users.This is not a spam, or advertisement.RegardsSathiyaMoorthy", "msg_date": "Tue, 4 Nov 2008 17:55:51 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "epqa; postgres performance optimizer support tool; opensource." }, { "msg_contents": "Sure , i 'll try with our database log\n\nRegards\nsathish\n\nOn Tue, Nov 4, 2008 at 5:55 PM, sathiya psql <[email protected]> wrote:\n\n> Dear All,\n>\n>\n> Recently i have released the next version of the epqa. which is a very\n> useful tool for, gives input for optimizing psql queries, and fine tuning\n> it.\n>\n> epqa is tool similar like, pqa. But designed and implemented to parse log\n> files which is in GB's. Report is similar like that.\n>\n> More information can be got from http://epqa.sourceforge.net/\n>\n>\n> Expecting suggestions, feedbacks, clarfications @ [email protected]\n>\n> Note: This is to propagate the open source which can help for postgres\n> users.\n> This is not a spam, or advertisement.\n>\n> Regards\n> SathiyaMoorthy\n>\n>\n\n\n-- \nBSG LeatherLink Pvt Limited,\nMail To : [email protected]\nWebsite : http://www.leatherlink.net\nContact : +91 44 65191757\n\nSure , i 'll try with our database log RegardssathishOn Tue, Nov 4, 2008 at 5:55 PM, sathiya psql <[email protected]> wrote:\nDear All,Recently i have released the next version of the  epqa. which is a very useful tool for, gives input for optimizing psql queries, and fine tuning it.\nepqa is tool similar like, pqa. But designed and implemented to parse log files which is in GB's. Report is similar like that.\nMore information can be got from http://epqa.sourceforge.net/Expecting suggestions, feedbacks, clarfications @ [email protected]\nNote: This is to propagate the open source which can help for postgres users.This is not a spam, or advertisement.RegardsSathiyaMoorthy\n-- BSG LeatherLink Pvt Limited,Mail To : [email protected] : http://www.leatherlink.net\nContact : +91 44 65191757", "msg_date": "Tue, 4 Nov 2008 18:38:39 +0530", "msg_from": "\"Sathish Duraiswamy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] epqa; postgres performance optimizer support tool;\n\topensource." }, { "msg_contents": "On Tue, Nov 04, 2008 at 05:55:51PM +0530, sathiya psql wrote:\n> Dear All,\n> \n> \n> Recently i have released the next version of the epqa. 
which is a very\n> useful tool for, gives input for optimizing psql queries, and fine tuning\n> it.\n\nGenerally, it's good to send announcements like this to\npgsql-announce, which has much lower traffic. :) Sending it to all\nthe lists isn't your best move.\n\n> epqa is tool similar like, pqa. But designed and implemented to parse log\n> files which is in GB's. Report is similar like that.\n> \n> More information can be got from http://epqa.sourceforge.net/\n> \n> \n> Expecting suggestions, feedbacks, clarfications @ [email protected]\n> \n> Note: This is to propagate the open source which can help for postgres\n> users.\n> This is not a spam, or advertisement.\n> \n> Regards\n> SathiyaMoorthy\n\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n", "msg_date": "Tue, 4 Nov 2008 08:11:01 -0800", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: epqa; postgres performance optimizer support tool;\n\topensource." } ]
[ { "msg_contents": "Hello.\n\nFor a long time already I can see very poor OR performance in postgres.\nIf one have query like \"select something from table where condition1 or\ncondition2\" it may take ages to execute while\n\"select something from table where condition1\" and \"select something from\ntable where condition2\" are executed very fast and\n\"select something from table where condition1 and not condition2 union all\nselect something from table where condition2\" gives required results fast\n\nFor example, in my current query for \"condition1\" optimizer gives 88252, for\n\"condition1 and not condition2\" it is 88258, for \"condition2\" it is 99814.\nAnd for \"condition1 or condition2\" it is 961499627680. And it does perform\nthis way.\n\nAll is more or less good when \"select\" part is easy and query can be easily\nrewritten. But for complex queries it looks ugly and if the query is\nautogenerated, moving autogeneration mechanism from creating simple clean\n\"where\" to unions is not an easy task.\n\nSo the question is: Do I miss something? Can this be optimized?\n\nHello.For a long time already I can see very poor OR performance in postgres. If one have query like \"select something from table where condition1 or condition2\" it may take ages to execute while\"select something from table where condition1\" and \"select something from table where condition2\" are executed very fast and\n\"select something from table where condition1 and not condition2 union all select something from table where condition2\" gives required results fastFor example, in my current query for \"condition1\" optimizer gives 88252, for \"condition1 and not condition2\" it is 88258, for \"condition2\" it is 99814.\nAnd for \"condition1 or condition2\" it is 961499627680. And it does perform this way.All is more or less good when \"select\" part is easy and query can be easily rewritten. But for complex queries it looks ugly and if the query is autogenerated, moving autogeneration mechanism from creating simple clean \"where\" to unions is not an easy task.\nSo the question is: Do I miss something? Can this be optimized?", "msg_date": "Wed, 5 Nov 2008 13:12:32 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL OR performance" }, { "msg_contents": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> For a long time already I can see very poor OR performance in postgres.\n\nIf you would provide a concrete example rather than handwaving, we might\nbe able to offer some advice ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Nov 2008 10:34:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance " }, { "msg_contents": "On Wed, 2008-11-05 at 13:12 +0200, Віталій Тимчишин wrote:\n> For a long time already I can see very poor OR performance in\n> postgres. 
\n> If one have query like \"select something from table where condition1\n> or condition2\" it may take ages to execute while\n> \"select something from table where condition1\" and \"select something\n> from table where condition2\" are executed very fast and\n> \"select something from table where condition1 and not condition2 union\n> all select something from table where condition2\" gives required\n> results fast\n> \n\nWhat version are you using?\n\nHave you run \"VACUUM ANALYZE\"?\n\nNext, do:\n\nEXPLAIN ANALYZE select something from table where condition1 or\ncondition2;\n\nfor each of the queries, unless that query takes so long you don't want\nto wait for the result. In that case, omit the \"ANALYZE\" and just do\n\"EXPLAIN ...\".\n\nThen post those results to the list. These tell us what plans PostgreSQL\nis choosing and what it estimates the costs to be. If it's the output of\nEXPLAIN ANALYZE, it also runs the query and tells us what the costs\nreally are.\n\n>From that, we can see where the planner is going wrong, and what you\nmight do to change it.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 05 Nov 2008 08:31:40 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "My main message is that I can see this in many queries and many times. But\nOK, I can present exact example.\n\n2008/11/5 Jeff Davis <[email protected]>\n\n> On Wed, 2008-11-05 at 13:12 +0200, Віталій Тимчишин wrote:\n> > For a long time already I can see very poor OR performance in\n> > postgres.\n> > If one have query like \"select something from table where condition1\n> > or condition2\" it may take ages to execute while\n> > \"select something from table where condition1\" and \"select something\n> > from table where condition2\" are executed very fast and\n> > \"select something from table where condition1 and not condition2 union\n> > all select something from table where condition2\" gives required\n> > results fast\n> >\n>\n> What version are you using?\n\n\nServer version 8.3.3\n\n\n>\n>\n> Have you run \"VACUUM ANALYZE\"?\n\n\nI have autovacuum, but for this example I did vacuum analyze of the whole\nDB.\nThe real-life query (autogenerated) looks like the next:\nselect t0.id as pk1,t1.id as pk2 ,t0.run_id as f1_run_id,t1.run_id as\nf2_run_id\nfrom tmpv_unproc_null_production_company_dup_cons_company as t0, (select *\nfrom production.company where run_id in (select id from production.run where\nname='test')) as t1\nwhere\nt0.name = t1.name\nor\n(t0.name,t1.name) in (select s1.name, s2.name from atom_match inner join\natoms_string s1 on atom_match.atom1_id = s1.id inner join atoms_string s2\non atom_match.atom2_id = s2.id where s1.atom_type_id = -1 and\nmatch_function_id = 2)\n\nwith tmpv_unproc_null_production_company_dup_cons_company:\n\ncreate temporary view tmpv_unproc_null_production_company_dup_cons_company\nas select * from production.company where 1=1 and status='unprocessed' and\nrun_id in (select id from production.run where name='test')\n\n>\n>\n> Next, do:\n>\n> EXPLAIN ANALYZE select something from table where condition1 or\n> condition2;\n\n\nwithout analyze is in OR-plan.txt\nAlso plans for only condition1, only condition2 and union is attached", "msg_date": "Thu, 6 Nov 2008 12:46:47 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "For what i see in four OR-plan.txt 
tou are doing too much \"sequencial scan\"\n. Create some indexes for those tables using the fields that you use an it\nmay help you.\n\nOBS: If you already have lots of indexes in your tables it may be a good\ntime for you re-think your strategy because it´s ot working.\nTips:\n 1 - create indexes for the tables with the fields that you will use in the\nquery if it is your most important query. If you have others querys that are\nused please post those here and we can help you to desing a better plan.\n 2 - You cold give us the configuration os the hardware and the posgresql\nconfiguration file and we can see what is going on.\n\nRegards\n\nOn Thu, Nov 6, 2008 at 8:46 AM, Віталій Тимчишин <[email protected]> wrote:\n\n>\n> My main message is that I can see this in many queries and many times. But\n> OK, I can present exact example.\n>\n> 2008/11/5 Jeff Davis <[email protected]>\n>\n>> On Wed, 2008-11-05 at 13:12 +0200, Віталій Тимчишин wrote:\n>> > For a long time already I can see very poor OR performance in\n>> > postgres.\n>> > If one have query like \"select something from table where condition1\n>> > or condition2\" it may take ages to execute while\n>> > \"select something from table where condition1\" and \"select something\n>> > from table where condition2\" are executed very fast and\n>> > \"select something from table where condition1 and not condition2 union\n>> > all select something from table where condition2\" gives required\n>> > results fast\n>> >\n>>\n>> What version are you using?\n>\n>\n> Server version 8.3.3\n>\n>\n>>\n>>\n>> Have you run \"VACUUM ANALYZE\"?\n>\n>\n> I have autovacuum, but for this example I did vacuum analyze of the whole\n> DB.\n> The real-life query (autogenerated) looks like the next:\n> select t0.id as pk1,t1.id as pk2 ,t0.run_id as f1_run_id,t1.run_id as\n> f2_run_id\n> from tmpv_unproc_null_production_company_dup_cons_company as t0, (select *\n> from production.company where run_id in (select id from production.run where\n> name='test')) as t1\n> where\n> t0.name = t1.name\n> or\n> (t0.name,t1.name) in (select s1.name, s2.name from atom_match inner join\n> atoms_string s1 on atom_match.atom1_id = s1.id inner join atoms_string s2\n> on atom_match.atom2_id = s2.id where s1.atom_type_id = -1 and\n> match_function_id = 2)\n>\n> with tmpv_unproc_null_production_company_dup_cons_company:\n>\n> create temporary view tmpv_unproc_null_production_company_dup_cons_company\n> as select * from production.company where 1=1 and status='unprocessed' and\n> run_id in (select id from production.run where name='test')\n>\n>>\n>>\n>> Next, do:\n>>\n>> EXPLAIN ANALYZE select something from table where condition1 or\n>> condition2;\n>\n>\n> without analyze is in OR-plan.txt\n> Also plans for only condition1, only condition2 and union is attached\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n-- \nHelio Campos Mello de Andrade\n\nFor what i see in four OR-plan.txt tou are doing too much \"sequencial scan\" . Create some indexes for those tables using the fields that you use an it may help you.OBS: If you already have lots of indexes in your tables it may be a good time for you re-think your strategy because it´s ot working.\nTips:   1 - create indexes for the tables with the fields that you will use in the query if it is your most important query. 
If you have others querys that are used please post those here and we can help you to desing a better plan.\n  2 - You cold give us the configuration os the hardware and the posgresql configuration file and we can see what is going on.RegardsOn Thu, Nov 6, 2008 at 8:46 AM, Віталій Тимчишин <[email protected]> wrote:\nMy main message is that I can see this in many queries and many times. But OK, I can present exact example.\n2008/11/5 Jeff Davis <[email protected]>\nOn Wed, 2008-11-05 at 13:12 +0200, Віталій Тимчишин wrote:\n> For a long time already I can see very poor OR performance in\n> postgres.\n> If one have query like \"select something from table where condition1\n> or condition2\" it may take ages to execute while\n> \"select something from table where condition1\" and \"select something\n> from table where condition2\" are executed very fast and\n> \"select something from table where condition1 and not condition2 union\n> all select something from table where condition2\" gives required\n> results fast\n>\n\nWhat version are you using?Server version 8.3.3 \n\n\nHave you run \"VACUUM ANALYZE\"?I have autovacuum, but for this example I did vacuum analyze of the whole DB.The real-life query (autogenerated) looks like the next:select t0.id as pk1,t1.id as pk2 ,t0.run_id as f1_run_id,t1.run_id as f2_run_id \n\nfrom tmpv_unproc_null_production_company_dup_cons_company as t0, (select * from production.company where run_id in (select id from production.run where name='test')) as t1 where  t0.name = t1.name \n\nor(t0.name,t1.name) in (select s1.name, s2.name from atom_match inner join atoms_string s1 on atom_match.atom1_id = s1.id  inner join atoms_string s2 on atom_match.atom2_id = s2.id where s1.atom_type_id = -1 and match_function_id = 2)\nwith tmpv_unproc_null_production_company_dup_cons_company:create temporary view tmpv_unproc_null_production_company_dup_cons_company as select * from production.company where 1=1 and status='unprocessed' and run_id in (select id from production.run where name='test')\n\n\nNext, do:\n\nEXPLAIN ANALYZE select something from table where condition1 or\ncondition2; without analyze is in OR-plan.txt Also plans for only condition1, only condition2 and union is attached\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Helio Campos Mello de Andrade", "msg_date": "Thu, 6 Nov 2008 10:26:20 -0200", "msg_from": "\"Helio Campos Mello de Andrade\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "2008/11/6 Helio Campos Mello de Andrade <[email protected]>\n\n> For what i see in four OR-plan.txt tou are doing too much \"sequencial scan\"\n> . Create some indexes for those tables using the fields that you use an it\n> may help you.\n>\n> OBS: If you already have lots of indexes in your tables it may be a good\n> time for you re-think your strategy because it´s ot working.\n> Tips:\n> 1 - create indexes for the tables with the fields that you will use in\n> the query if it is your most important query. If you have others querys that\n> are used please post those here and we can help you to desing a better plan.\n\n\nAs you can see from other plans, it do have all the indexes to perform it's\nwork fast (when given part by part). It simply do not wish to use them. 
My\nquestion: Is this a configuration problem or postgresql optimizer simply\ncan't do such a query rewrite?\n\nActually I did rewrite the query to work properly as you can see from\nunion-plan.txt. My question is if postgresql can do this automatically\nbecause such a rewrite is not always easy/possible (esp. for generated\nqueries)?\n\n2008/11/6 Helio Campos Mello de Andrade <[email protected]>\nFor what i see in four OR-plan.txt tou are doing too much \"sequencial scan\" . Create some indexes for those tables using the fields that you use an it may help you.OBS: If you already have lots of indexes in your tables it may be a good time for you re-think your strategy because it´s ot working.\n\nTips:   1 - create indexes for the tables with the fields that you will use in the query if it is your most important query. If you have others querys that are used please post those here and we can help you to desing a better plan.\nAs you can see from other plans, it do have all the indexes to perform it's work fast (when given part by part). It simply do not wish to use them. My question: Is this a configuration problem or postgresql optimizer simply can't do such a query rewrite?\n Actually I did rewrite the query to work properly as you can see from union-plan.txt. My question is if postgresql can do this automatically because such a rewrite is not always easy/possible (esp. for generated queries)?", "msg_date": "Thu, 6 Nov 2008 18:33:27 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "Віталій Тимчишин wrote:\n> As you can see from other plans, it do have all the indexes to perform it's\n> work fast (when given part by part). It simply do not wish to use them. My\n> question: Is this a configuration problem or postgresql optimizer simply\n> can't do such a query rewrite?\n\nI must admit, I haven't managed to figure out what your query is trying\nto do, but then that's a common problem with autogenerated queries.\n\nThe main question that needs answering is why the planner thinks you're\ngoing to get 1.3 billion rows in the \"or\" query:\n\n\"Nested Loop (cost=4588.13..960900482668.95 rows=1386158171 width=32)\"\n\nYou don't show \"explain analyse\" for this query, so there's no way of\nknowing how many rows get returned but presumably you're expecting\naround 88000. What does \"explain analyse\" return?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 06 Nov 2008 16:44:05 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "As far as i know if you created the indexes properly and postgres sees that\nit will give some improvement he will use those.\n - Look at the page of index creation that we may be forgeting some thing.\n\nhttp://www.postgresql.org/docs/8.3/static/indexes.html\n\nI have to go to the hospital know. Tomorrow i will take a look at the manual\nand try to understand all the necessary for the postgresql use an index.\n\nRegards\n\nOn Thu, Nov 6, 2008 at 2:33 PM, Віталій Тимчишин <[email protected]> wrote:\n\n>\n>\n> 2008/11/6 Helio Campos Mello de Andrade <[email protected]>\n>\n>> For what i see in four OR-plan.txt tou are doing too much \"sequencial\n>> scan\" . 
Create some indexes for those tables using the fields that you use\n>> an it may help you.\n>>\n>> OBS: If you already have lots of indexes in your tables it may be a good\n>> time for you re-think your strategy because it´s ot working.\n>> Tips:\n>> 1 - create indexes for the tables with the fields that you will use in\n>> the query if it is your most important query. If you have others querys that\n>> are used please post those here and we can help you to desing a better plan.\n>\n>\n> As you can see from other plans, it do have all the indexes to perform it's\n> work fast (when given part by part). It simply do not wish to use them. My\n> question: Is this a configuration problem or postgresql optimizer simply\n> can't do such a query rewrite?\n>\n> Actually I did rewrite the query to work properly as you can see from\n> union-plan.txt. My question is if postgresql can do this automatically\n> because such a rewrite is not always easy/possible (esp. for generated\n> queries)?\n>\n>\n\n\n-- \nHelio Campos Mello de Andrade\n\nAs far as i know if you created the indexes properly and postgres sees that it will give some improvement he will use those. - Look at the page of index creation that we may be forgeting some thing.http://www.postgresql.org/docs/8.3/static/indexes.html\nI have to go to the hospital know. Tomorrow i will take a look at the manual and try to understand all the necessary for the postgresql use an index.RegardsOn Thu, Nov 6, 2008 at 2:33 PM, Віталій Тимчишин <[email protected]> wrote:\n2008/11/6 Helio Campos Mello de Andrade <[email protected]>\n\nFor what i see in four OR-plan.txt tou are doing too much \"sequencial scan\" . Create some indexes for those tables using the fields that you use an it may help you.OBS: If you already have lots of indexes in your tables it may be a good time for you re-think your strategy because it´s ot working.\n\n\nTips:   1 - create indexes for the tables with the fields that you will use in the query if it is your most important query. If you have others querys that are used please post those here and we can help you to desing a better plan.\nAs you can see from other plans, it do have all the indexes to perform it's work fast (when given part by part). It simply do not wish to use them. My question: Is this a configuration problem or postgresql optimizer simply can't do such a query rewrite?\n\n Actually I did rewrite the query to work properly as you can see from union-plan.txt. My question is if postgresql can do this automatically because such a rewrite is not always easy/possible (esp. for generated queries)?\n\n-- Helio Campos Mello de Andrade", "msg_date": "Thu, 6 Nov 2008 14:44:18 -0200", "msg_from": "\"Helio Campos Mello de Andrade\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "2008/11/6 Richard Huxton <[email protected]>\n\n> Віталій Тимчишин wrote:\n> > As you can see from other plans, it do have all the indexes to perform\n> it's\n> > work fast (when given part by part). It simply do not wish to use them.\n> My\n> > question: Is this a configuration problem or postgresql optimizer simply\n> > can't do such a query rewrite?\n>\n> I must admit, I haven't managed to figure out what your query is trying\n> to do, but then that's a common problem with autogenerated queries.\n\n\nThat's easy - I am looking for duplicates from subset of companies. 
Two\ncompanies are equal when there names are simply equal or there is an entry\nin \"match\" table for names.\n\n\n>\n>\n> The main question that needs answering is why the planner thinks you're\n> going to get 1.3 billion rows in the \"or\" query:\n>\n> \"Nested Loop (cost=4588.13..960900482668.95 rows=1386158171 width=32)\"\n>\n> You don't show \"explain analyse\" for this query, so there's no way of\n> knowing how many rows get returned but presumably you're expecting\n> around 88000. What does \"explain analyse\" return?\n\n\nYes, the query should output exactly same result as in \"Union\" plan. I will\nrun \"slow\" explain analyze now and will repost after it will complete\n(tomorrow?).\nBTW: I'd say planner should think rows estimated as sum of \"ORs\" estimation\nminus intersection, but no more then sum or ORs (if intersection is 0). For\nfirst condition it has rows=525975, for second it has rows=2403 (with other\nplans, of course), so it's strange it has such a high estimation.... It's\nexactly 50% of full cartesian join of merge, so it does think that every\nsecond pair would succeed, that is not true.\n\n2008/11/6 Richard Huxton <[email protected]>\nВіталій Тимчишин wrote:\n> As you can see from other plans, it do have all the indexes to perform it's\n> work fast (when given part by part). It simply do not wish to use them. My\n> question: Is this a configuration problem or postgresql optimizer simply\n> can't do such a query rewrite?\n\nI must admit, I haven't managed to figure out what your query is trying\nto do, but then that's a common problem with autogenerated queries.That's easy - I am looking for duplicates from subset of companies. Two companies are equal when there names are simply equal or there is an entry in \"match\" table for names.\n \n\nThe main question that needs answering is why the planner thinks you're\ngoing to get 1.3 billion rows in the \"or\" query:\n\n\"Nested Loop  (cost=4588.13..960900482668.95 rows=1386158171 width=32)\"\n\nYou don't show \"explain analyse\" for this query, so there's no way of\nknowing how many rows get returned but presumably you're expecting\naround 88000. What does \"explain analyse\" return?Yes, the query should output exactly same result as in \"Union\" plan. I will run \"slow\" explain analyze now and will repost after it will complete (tomorrow?).\nBTW: I'd say planner should think rows estimated as sum of \"ORs\" estimation minus intersection, but no more then sum or ORs (if intersection is 0). For first condition it has rows=525975, for second it has rows=2403 (with other plans, of course), so it's strange it has such a high estimation.... It's exactly 50% of full cartesian join of merge, so it does think that every second pair would succeed, that is not true.", "msg_date": "Thu, 6 Nov 2008 19:37:46 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": ">\n>\n> Yes, the query should output exactly same result as in \"Union\" plan. I will\n> run \"slow\" explain analyze now and will repost after it will complete\n> (tomorrow?).\n> BTW: I'd say planner should think rows estimated as sum of \"ORs\" estimation\n> minus intersection, but no more then sum or ORs (if intersection is 0). For\n> first condition it has rows=525975, for second it has rows=2403 (with other\n> plans, of course), so it's strange it has such a high estimation.... 
It's\n> exactly 50% of full cartesian join of merge, so it does think that every\n> second pair would succeed, that is not true.\n>\n>\nI am sorry, I've emptied atom_match table, so one part produce 0 result, but\nanyway here is explain:\n\n\"Merge Join (cost=518771.07..62884559.80 rows=1386158171 width=32) (actual\ntime=30292.802..755751.242 rows=34749 loops=1)\"\n\" Merge Cond: (production.run.id = (production.company.run_id)::bigint)\"\n\" Join Filter: (((production.company.name)::text =\n(production.company.name)::text)\nOR (hashed subplan))\"\n\" -> Sort (cost=45474.92..45606.54 rows=52648 width=38) (actual\ntime=562.928..595.128 rows=15507 loops=1)\"\n\" Sort Key: production.run.id\"\n\" Sort Method: external sort Disk: 880kB\"\n\" -> Nested Loop (cost=1184.82..39904.24 rows=52648 width=38)\n(actual time=90.571..530.925 rows=15507 loops=1)\"\n\" -> HashAggregate (cost=1.55..1.56 rows=1 width=8) (actual\ntime=3.077..3.078 rows=1 loops=1)\"\n\" -> Seq Scan on run (cost=0.00..1.55 rows=1 width=8)\n(actual time=3.066..3.068 rows=1 loops=1)\"\n\" Filter: ((name)::text = 'test'::text)\"\n\" -> Nested Loop (cost=1183.27..39376.19 rows=52648 width=30)\n(actual time=87.489..484.605 rows=15507 loops=1)\"\n\" -> HashAggregate (cost=1.55..1.56 rows=1 width=8)\n(actual time=0.016..0.019 rows=1 loops=1)\"\n\" -> Seq Scan on run (cost=0.00..1.55 rows=1\nwidth=8) (actual time=0.009..0.011 rows=1 loops=1)\"\n\" Filter: ((name)::text = 'test'::text)\"\n\" -> Bitmap Heap Scan on company\n(cost=1181.72..38592.03 rows=62608 width=30) (actual time=87.465..441.014\nrows=15507 loops=1)\"\n\" Recheck Cond:\n((production.company.run_id)::bigint = production.run.id)\"\n\" Filter: ((production.company.status)::text =\n'unprocessed'::text)\"\n\" -> Bitmap Index Scan on comp_run\n(cost=0.00..1166.07 rows=62608 width=0) (actual time=65.828..65.828\nrows=15507 loops=1)\"\n\" Index Cond:\n((production.company.run_id)::bigint = production.run.id)\"\n\" -> Materialize (cost=469981.13..498937.42 rows=2316503 width=30)\n(actual time=15915.639..391938.338 rows=242752539 loops=1)\"\n\" -> Sort (cost=469981.13..475772.39 rows=2316503 width=30) (actual\ntime=15915.599..19920.912 rows=2316503 loops=1)\"\n\" Sort Key: production.company.run_id\"\n\" Sort Method: external merge Disk: 104896kB\"\n\" -> Seq Scan on company (cost=0.00..58808.03 rows=2316503\nwidth=30) (actual time=22.244..7476.954 rows=2316503 loops=1)\"\n\" SubPlan\"\n\" -> Nested Loop (cost=2267.65..3314.94 rows=22 width=1038) (actual\ntime=0.009..0.009 rows=0 loops=1)\"\n\" -> Hash Join (cost=2267.65..3141.36 rows=22 width=523) (actual\ntime=0.006..0.006 rows=0 loops=1)\"\n\" Hash Cond: ((atom_match.atom1_id)::integer = s1.id)\"\n\" -> Seq Scan on atom_match (cost=0.00..30.38 rows=1630\nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\" Filter: ((match_function_id)::integer = 2)\"\n\" -> Hash (cost=1292.04..1292.04 rows=12209 width=523)\n(never executed)\"\n\" -> Index Scan using atomstr_typ on atoms_string s1\n(cost=0.00..1292.04 rows=12209 width=523) (never executed)\"\n\" Index Cond: ((atom_type_id)::integer = (-1))\"\n\" -> Index Scan using pk_atoms_string on atoms_string s2\n(cost=0.00..7.88 rows=1 width=523) (never executed)\"\n\" Index Cond: (s2.id = (atom_match.atom2_id)::integer)\"\n\"Total runtime: 755802.686 ms\"\n\nP.S. May be I've chosen wrong list and my Q better belongs to -hackers?\n\nYes, the query should output exactly same result as in \"Union\" plan. 
I will run \"slow\" explain analyze now and will repost after it will complete (tomorrow?).\n\nBTW: I'd say planner should think rows estimated as sum of \"ORs\" estimation minus intersection, but no more then sum or ORs (if intersection is 0). For first condition it has rows=525975, for second it has rows=2403 (with other plans, of course), so it's strange it has such a high estimation.... It's exactly 50% of full cartesian join of merge, so it does think that every second pair would succeed, that is not true.\n\nI am sorry, I've emptied atom_match table, so one part produce 0 result, but anyway here is explain:\"Merge Join  (cost=518771.07..62884559.80 rows=1386158171 width=32) (actual time=30292.802..755751.242 rows=34749 loops=1)\"\n\"  Merge Cond: (production.run.id = (production.company.run_id)::bigint)\"\"  Join Filter: (((production.company.name)::text = (production.company.name)::text) OR (hashed subplan))\"\n\"  ->  Sort  (cost=45474.92..45606.54 rows=52648 width=38) (actual time=562.928..595.128 rows=15507 loops=1)\"\"        Sort Key: production.run.id\"\"        Sort Method:  external sort  Disk: 880kB\"\n\"        ->  Nested Loop  (cost=1184.82..39904.24 rows=52648 width=38) (actual time=90.571..530.925 rows=15507 loops=1)\"\"              ->  HashAggregate  (cost=1.55..1.56 rows=1 width=8) (actual time=3.077..3.078 rows=1 loops=1)\"\n\"                    ->  Seq Scan on run  (cost=0.00..1.55 rows=1 width=8) (actual time=3.066..3.068 rows=1 loops=1)\"\"                          Filter: ((name)::text = 'test'::text)\"\n\"              ->  Nested Loop  (cost=1183.27..39376.19 rows=52648 width=30) (actual time=87.489..484.605 rows=15507 loops=1)\"\"                    ->  HashAggregate  (cost=1.55..1.56 rows=1 width=8) (actual time=0.016..0.019 rows=1 loops=1)\"\n\"                          ->  Seq Scan on run  (cost=0.00..1.55 rows=1 width=8) (actual time=0.009..0.011 rows=1 loops=1)\"\"                                Filter: ((name)::text = 'test'::text)\"\n\"                    ->  Bitmap Heap Scan on company  (cost=1181.72..38592.03 rows=62608 width=30) (actual time=87.465..441.014 rows=15507 loops=1)\"\"                          Recheck Cond: ((production.company.run_id)::bigint = production.run.id)\"\n\"                          Filter: ((production.company.status)::text = 'unprocessed'::text)\"\"                          ->  Bitmap Index Scan on comp_run  (cost=0.00..1166.07 rows=62608 width=0) (actual time=65.828..65.828 rows=15507 loops=1)\"\n\"                                Index Cond: ((production.company.run_id)::bigint = production.run.id)\"\"  ->  Materialize  (cost=469981.13..498937.42 rows=2316503 width=30) (actual time=15915.639..391938.338 rows=242752539 loops=1)\"\n\"        ->  Sort  (cost=469981.13..475772.39 rows=2316503 width=30) (actual time=15915.599..19920.912 rows=2316503 loops=1)\"\"              Sort Key: production.company.run_id\"\"              Sort Method:  external merge  Disk: 104896kB\"\n\"              ->  Seq Scan on company  (cost=0.00..58808.03 rows=2316503 width=30) (actual time=22.244..7476.954 rows=2316503 loops=1)\"\"  SubPlan\"\"    ->  Nested Loop  (cost=2267.65..3314.94 rows=22 width=1038) (actual time=0.009..0.009 rows=0 loops=1)\"\n\"          ->  Hash Join  (cost=2267.65..3141.36 rows=22 width=523) (actual time=0.006..0.006 rows=0 loops=1)\"\"                Hash Cond: ((atom_match.atom1_id)::integer = s1.id)\"\n\"                ->  Seq Scan on atom_match  (cost=0.00..30.38 rows=1630 width=8) (actual time=0.002..0.002 rows=0 
loops=1)\"\"                      Filter: ((match_function_id)::integer = 2)\"\n\"                ->  Hash  (cost=1292.04..1292.04 rows=12209 width=523) (never executed)\"\"                      ->  Index Scan using atomstr_typ on atoms_string s1  (cost=0.00..1292.04 rows=12209 width=523) (never executed)\"\n\"                            Index Cond: ((atom_type_id)::integer = (-1))\"\"          ->  Index Scan using pk_atoms_string on atoms_string s2  (cost=0.00..7.88 rows=1 width=523) (never executed)\"\n\"                Index Cond: (s2.id = (atom_match.atom2_id)::integer)\"\"Total runtime: 755802.686 ms\"P.S. May be I've chosen wrong list and my Q better belongs to -hackers?", "msg_date": "Fri, 7 Nov 2008 11:14:01 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "On Fri, Nov 7, 2008 at 4:14 AM, Віталій Тимчишин <[email protected]> wrote:\n> \"Merge Join (cost=518771.07..62884559.80 rows=1386158171 width=32) (actual\n> time=30292.802..755751.242 rows=34749 loops=1)\"\n\nHave you tried increasing the default_statistics_target? The planner\nis expecting 1.3 billion rows to be produced from a query that's only\nactually producting 35k, which probably indicates some very bad\nstatistics. At the same time, the materialize step produces 242\nmillion rows when the planner only expects to produce 2.3, indicating\na similar problem in the opposite direction. This probably means that\nthe planner is choosing plans that would be optimal if it was making\ngood guesses but are decidedly sub-optimal for your actual data.\n\n\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Fri, 7 Nov 2008 05:07:32 -0500", "msg_from": "\"David Wilson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "Віталій Тимчишин wrote:\n> I am sorry, I've emptied atom_match table, so one part produce 0 result, but\n> anyway here is explain:\n\nDavid's right - the total estimate is horribly wrong\n\n> \"Merge Join (cost=518771.07..62884559.80 rows=1386158171 width=32) (actual\n> time=30292.802..755751.242 rows=34749 loops=1)\"\n\nBut it's this materialize that's taking the biggest piece of the time.\n\n> \" -> Materialize (cost=469981.13..498937.42 rows=2316503 width=30)\n> (actual time=15915.639..391938.338 rows=242752539 loops=1)\"\n\n15.9 seconds to 391.9 seconds. That's half your time right there. The\nfact that it's ending up with 242 million rows isn't promising - are you\nsure the query is doing what you think it is?\n\n> \" -> Sort (cost=469981.13..475772.39 rows=2316503 width=30) (actual\n> time=15915.599..19920.912 rows=2316503 loops=1)\"\n> \" Sort Key: production.company.run_id\"\n> \" Sort Method: external merge Disk: 104896kB\"\n\nBy constrast, this on-disk sort of 104MB is comparatively fast.\n\n> P.S. 
May be I've chosen wrong list and my Q better belongs to -hackers?\n\nNo - hackers is if you want to discuss the code of the database server\nitself.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 07 Nov 2008 10:45:50 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "Sorry, for delayed response - It was very busy week.\n\n2008/11/7 David Wilson <[email protected]>\n\n> On Fri, Nov 7, 2008 at 4:14 AM, Віталій Тимчишин <[email protected]> wrote:\n> > \"Merge Join (cost=518771.07..62884559.80 rows=1386158171 width=32)\n> (actual\n> > time=30292.802..755751.242 rows=34749 loops=1)\"\n>\n> Have you tried increasing the default_statistics_target? The planner\n> is expecting 1.3 billion rows to be produced from a query that's only\n> actually producting 35k, which probably indicates some very bad\n> statistics.\n\n\n The planner seems to think that every second pair from company<->company\njoin will succeed with this join expression (1386158171 ~ 52648^2 / 2).\nThat is not true.\nAnyway, I've tried to set default_statistics_target to 1000, then analyze.\nNothing've changed\n\nAt the same time, the materialize step produces 242\n> million rows when the planner only expects to produce 2.3, indicating\n> a similar problem in the opposite direction. This probably means that\n> the planner is choosing plans that would be optimal if it was making\n> good guesses but are decidedly sub-optimal for your actual data.\n>\n>\nThat is even more strange, because materialize step must produce exactly the\nrows it takes from sort, that is 2316503, so I don't get how table scan +\nsort + materialize can multiply number of rows by 100.\n\nSorry, for delayed response - It was very busy week.2008/11/7 David Wilson <[email protected]>\n\nOn Fri, Nov 7, 2008 at 4:14 AM, Віталій Тимчишин <[email protected]> wrote:\n> \"Merge Join  (cost=518771.07..62884559.80 rows=1386158171 width=32) (actual\n> time=30292.802..755751.242 rows=34749 loops=1)\"\n\nHave you tried increasing the default_statistics_target? The planner\nis expecting 1.3 billion rows to be produced from a query that's only\nactually producting 35k, which probably indicates some very bad\nstatistics.  The planner seems to think that every second pair from company<->company join will succeed with this join expression (1386158171 ~  52648^2 / 2). That is not true. Anyway, I've tried to set default_statistics_target to 1000, then analyze. Nothing've changed\n At the same time, the materialize step produces 242\nmillion rows when the planner only expects to produce 2.3, indicating\na similar problem in the opposite direction. 
This probably means that\nthe planner is choosing plans that would be optimal if it was making\ngood guesses but are decidedly sub-optimal for your actual data.\n\nThat is even more strange, because materialize step must produce exactly the rows it takes from sort, that is 2316503, so I don't get how table scan + sort + materialize can multiply number of rows by 100.", "msg_date": "Sat, 15 Nov 2008 15:55:38 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "2008/11/7 Richard Huxton <[email protected]>\n\n> But it's this materialize that's taking the biggest piece of the time.\n>\n> > \" -> Materialize (cost=469981.13..498937.42 rows=2316503 width=30)\n> > (actual time=15915.639..391938.338 rows=242752539 loops=1)\"\n>\n> 15.9 seconds to 391.9 seconds. That's half your time right there. The\n> fact that it's ending up with 242 million rows isn't promising - are you\n> sure the query is doing what you think it is?\n\n\nI am not. I can't see how materialize can multiply number of rows it gets\nfrom sort by 100.\n\n\n>\n> > \" -> Sort (cost=469981.13..475772.39 rows=2316503 width=30)\n> (actual\n> > time=15915.599..19920.912 rows=2316503 loops=1)\"\n> > \" Sort Key: production.company.run_id\"\n> > \" Sort Method: external merge Disk: 104896kB\"\n>\n> By constrast, this on-disk sort of 104MB is comparatively fast.\n>\n\n2008/11/7 Richard Huxton <[email protected]>\nBut it's this materialize that's taking the biggest piece of the time.\n\n> \"  ->  Materialize  (cost=469981.13..498937.42 rows=2316503 width=30)\n> (actual time=15915.639..391938.338 rows=242752539 loops=1)\"\n\n15.9 seconds to 391.9 seconds. That's half your time right there. The\nfact that it's ending up with 242 million rows isn't promising - are you\nsure the query is doing what you think it is?I am not. I can't see how materialize can multiply number of rows it gets from sort by 100.\n\n\n> \"        ->  Sort  (cost=469981.13..475772.39 rows=2316503 width=30) (actual\n> time=15915.599..19920.912 rows=2316503 loops=1)\"\n> \"              Sort Key: production.company.run_id\"\n> \"              Sort Method:  external merge  Disk: 104896kB\"\n\nBy constrast, this on-disk sort of 104MB is comparatively fast.", "msg_date": "Sat, 15 Nov 2008 15:57:03 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" }, { "msg_contents": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> I am not. I can't see how materialize can multiply number of rows it gets\n> from sort by 100.\n\nIs it the right-hand input of a merge join? If so you're looking at\nmark/restore rescans, ie, repeated fetches of the same tuples. There\nmust be a huge number of duplicate join keys in that relation to make\nfor such an increase though. Normally the planner avoids putting a\ntable with lots of duplicates as the RHS of a merge, but if it doesn't\nhave good statistics for the join key then it might not realize the\nproblem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Nov 2008 12:07:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL OR performance " }, { "msg_contents": "2008/11/15 Tom Lane <[email protected]>\n\n> \"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> > I am not. 
I can't see how materialize can multiply number of rows it gets\n> > from sort by 100.\n>\n> Is it the right-hand input of a merge join? If so you're looking at\n> mark/restore rescans, ie, repeated fetches of the same tuples. There\n> must be a huge number of duplicate join keys in that relation to make\n> for such an increase though. Normally the planner avoids putting a\n> table with lots of duplicates as the RHS of a merge, but if it doesn't\n> have good statistics for the join key then it might not realize the\n> problem.\n>\n\nOK, thanks for cleaning-up some mystery.\nBut, returning to original Q: Do anyone known why does it choose plan from *\nOR-plan.txt* instead of *union-plan.txt*? The first is\ncost=4588.13..960900482668.95, the latter is cost=266348.42..272953.14\naccording to statistics postgres have, so I suppose planner would select it\nif it could evaluate it.\n\n2008/11/15 Tom Lane <[email protected]>\n\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]> writes:\n> I am not. I can't see how materialize can multiply number of rows it gets\n> from sort by 100.\n\nIs it the right-hand input of a merge join?  If so you're looking at\nmark/restore rescans, ie, repeated fetches of the same tuples.  There\nmust be a huge number of duplicate join keys in that relation to make\nfor such an increase though.  Normally the planner avoids putting a\ntable with lots of duplicates as the RHS of a merge, but if it doesn't\nhave good statistics for the join key then it might not realize the\nproblem.\nOK, thanks for cleaning-up some mystery. But, returning to original Q: Do anyone known why does it choose plan from OR-plan.txt instead of union-plan.txt? The first is cost=4588.13..960900482668.95, the latter is cost=266348.42..272953.14 according to statistics postgres have, so I suppose planner would select it if it could evaluate it.", "msg_date": "Mon, 17 Nov 2008 11:52:21 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL OR performance" } ]
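The thread above contrasts one join carrying an OR'd join filter against a UNION of simpler joins. The following is a minimal sketch of that rewrite, using hypothetical stand-in tables loosely modelled on the plan output (run_id, name and status come from the EXPLAIN text; the exact predicates are illustrative assumptions, not the poster's actual query):

-- Hypothetical schema, for illustration only
CREATE TEMP TABLE t_run     (id bigint PRIMARY KEY, name text);
CREATE TEMP TABLE t_company (id bigint PRIMARY KEY, run_id bigint,
                             name text, status text);

-- OR form: a single company-to-company join whose filter is a disjunction.
-- The planner must estimate the combined selectivity of both branches at
-- once, which is where estimates like rows=1386158171 can come from.
SELECT c1.id, c2.id
  FROM t_company c1
  JOIN t_company c2 ON c2.run_id = c1.run_id
                   AND (c2.name = c1.name OR c2.status = 'unprocessed')
 WHERE c1.status = 'unprocessed';

-- UNION form: each branch is planned on its own and can use its own join
-- strategy; UNION (not UNION ALL) removes pairs matched by both branches.
SELECT c1.id, c2.id
  FROM t_company c1
  JOIN t_company c2 ON c2.run_id = c1.run_id AND c2.name = c1.name
 WHERE c1.status = 'unprocessed'
UNION
SELECT c1.id, c2.id
  FROM t_company c1
  JOIN t_company c2 ON c2.run_id = c1.run_id AND c2.status = 'unprocessed'
 WHERE c1.status = 'unprocessed';

-- Per-column statistics can also be raised and re-analyzed, along the lines
-- David Wilson suggests, e.g.:
ALTER TABLE t_company ALTER COLUMN run_id SET STATISTICS 1000;
ANALYZE t_company;

Whether the UNION form actually wins depends on the data and indexes, but per the costs quoted earlier in the thread the planner priced the UNION plan several orders of magnitude cheaper than the OR plan.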
[ { "msg_contents": "Hello,\n\nI've had the feeling for a while that the pg_stat_bgwriter statistics\ndoesn't work quite the way I have understood it (based on the\nexcellent [1] and the pg docs).\n\nI am now monitoring a database that has an lru_multiplier of 4.0, a\ndelay of 200ms and a maxpages of 1000. Current stats:\n\npostgres=# select * from pg_stat_bgwriter;\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc \n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n 241 | 17 | 72803 | 0 | 0 | 81015 | 81708\n(1 row)\n\nThis is while the database is undergoing continuous activity (almost\nexclusively writing), but at a rate that does not saturate underlying\nstorage (caching raid controller, all write ops are fast, cache is\nnever filled).\n\nIn addition, PostgreSQL is not even close to even filling it's buffer\ncache. The buffer cache is configured at 1 GB, and the resident size\nof the PostgreSQL process is only 80-90 MB so far. So even\nindependently of any lru multplier setting, delays and whatever else,\nI don't see why any backend would ever have to do its own writeouts in\norder to allocate a page from the buffer cache.\n\nOne theory: Is it the auto vacuum process? Stracing those I've seen\nthat they very often to writes directly to disk.\n\nIn any case, the reason I am fixating on buffers_backend is that I am\nafter a clear indication whether any \"normal\" backend (non-autovacuum\nor anything like that) is ever having to block on disk writes, other\nthan WAL fsync:s.\n\nIs a non-zero buffers_backend consistent with expected behavior?\n\n[1] http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Wed, 5 Nov 2008 13:28:30 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "lru_multiplier and backend page write-outs" }, { "msg_contents": "On Wed, 5 Nov 2008, Peter Schuller wrote:\n\n> In addition, PostgreSQL is not even close to even filling it's buffer\n> cache. The buffer cache is configured at 1 GB, and the resident size\n> of the PostgreSQL process is only 80-90 MB so far. So even\n> independently of any lru multplier setting, delays and whatever else,\n> I don't see why any backend would ever have to do its own writeouts in\n> order to allocate a page from the buffer cache.\n\nAny buffer that you've accessed recently gets its recent usage count \nincremented such that the background writer won't touch it--the current \none only writes things where that count is 0. The only mechanism which \ndrops that usage count back down again only kicks in once you've used all \nthe buffers in the cache. You need some pressure to evict buffers that \ncan't fit anymore before the background writer has any useful role to play \nin PostgreSQL 8.3.\n\nAt one point I envisioned making it smart enough to try and handle the \nscenario you describe--on an idle system, you may very well want to write \nout dirty and recently accessed buffers if there's nothing else going on. \nBut such behavior is counter-productive on a busy system, which is why a \nsimilar mechanism that existed before 8.3 was removed. 
Making that only \nhappen when idle requires a metric for what \"busy\" means, which is tricky \nto do given the information available to this particular process.\n\nShort version: if you never fill the buffer cache, buffers_clean will \nalways be zero, and you'll only see writes by checkpoints and things not \noperating with the standard client buffer allocation mechanism. Which \nbrings us to...\n\n> One theory: Is it the auto vacuum process? Stracing those I've seen\n> that they very often to writes directly to disk.\n\nIn order to keep it from using up the whole cache with maintenance \noverhead, vacuum allocates a 256K ring of buffers and use re-uses ones \nfrom there whenever possible. That will generate buffer_backend writes \nwhen that ring fills but it has more left to scan. Your theory that all \nthe backend writes are coming from vacuum seems consistant with what \nyou've described.\n\nYou might even want to drop the two background writer parameters you've \ntweaked upwards back down closer to their original values. I get the \nimpression you might have increased those hoping for more background \nwriter work because you weren't seeing any. If you ever do get to where \nyour buffer cache is full and the background writer starts doing \nsomething, those could jump from ineffective to wastefully heavy at that \npoint.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 5 Nov 2008 15:44:42 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lru_multiplier and backend page write-outs" }, { "msg_contents": "Hello,\n\n> At one point I envisioned making it smart enough to try and handle the \n> scenario you describe--on an idle system, you may very well want to write \n> out dirty and recently accessed buffers if there's nothing else going on. \n> But such behavior is counter-productive on a busy system, which is why a \n> similar mechanism that existed before 8.3 was removed. Making that only \n> happen when idle requires a metric for what \"busy\" means, which is tricky \n> to do given the information available to this particular process.\n> \n> Short version: if you never fill the buffer cache, buffers_clean will \n> always be zero, and you'll only see writes by checkpoints and things not \n> operating with the standard client buffer allocation mechanism. Which \n> brings us to...\n\nSure. I am not really out to get the background writer to\npre-emptively do \"idle trickling\". Though I can see cases where one\nmight care about this (such as lessening the impact of OS buffer cache\ndelays on checkpoints), it's not what I am after now.\n\n> > One theory: Is it the auto vacuum process? Stracing those I've seen\n> > that they very often to writes directly to disk.\n> \n> In order to keep it from using up the whole cache with maintenance \n> overhead, vacuum allocates a 256K ring of buffers and use re-uses ones \n> from there whenever possible. That will generate buffer_backend writes \n> when that ring fills but it has more left to scan. Your theory that all \n> the backend writes are coming from vacuum seems consistant with what \n> you've described.\n\nThe bit that is inconsistent with this theory, given the above ring\nbuffer desription, is that I saw the backend write-out count\nincreasing constantlyduring the write activity I was generating to the\ndatabase. 
However (because in this particular case it was a small\ndatabase used for some latency related testing), no table was ever\nlarge enough that 256k buffers would ever be filled by the process of\nvacuuming a single table. Most tables would likely have been a handful\nto a couple of hundred of pages large.\n\nIn addition, when I say \"constantly\" above I mean that the count\nincreases even between successive SELECT:s (of the stat table) with\nonly a second or two in between. In the abscence of long-running\nvacuum's, that discounts vacuuming because the naptime is 1 minute.\n\nIn fact this already discounted vacuuming even without the added\ninformation you provided above, but I didn't realize when originally\nposting.\n\nThe reason I mentioned vacuuming was that the use case is such that we\ndo have a lot of tables constantly getting writes and updates, but\nthey are all small.\n\nAnything else known that might be generating the writes, if it is not\nvacuuming?\n\n> You might even want to drop the two background writer parameters you've \n> tweaked upwards back down closer to their original values. I get the \n> impression you might have increased those hoping for more background \n> writer work because you weren't seeing any. If you ever do get to where \n> your buffer cache is full and the background writer starts doing \n> something, those could jump from ineffective to wastefully heavy at that \n> point.\n\nI tweaked it in order to eliminate backends having to do\n\"synchrounous\" (with respect to the operating system even if not with\nrespect to the underlying device) writes.\n\nThe idea is that writes to the operating system are less\nunderstood/controlled, in terms of any latency they may case. It would\nbe very nice if the backend writes were always zero under normal\ncircumstances (or at least growing very very rarely in edge cases\nwhere the JIT policy did not suceed), in order to make it a more\nrelevant and rare observation that the backend write-outs are\nsystematically increasing.\n\nOn this topic btw, was it considered to allow the administrator to\nspecify a fixed-size margin to use when applying the JIT policy? (The\nJIT policy and logic itself being exactly the same still.)\n\nEspecially with larger buffer caches, that would perhaps allow the\nadministrator to make a call to truly eliminate synchronous writes\nduring normal operation, while not adversely affecting anything (if\nthe buffer cache is 1 GB, having a margin of say 50 MB does not really\nmatter much in terms of wasting memory, yet could have a significant\nimpact on eliminating synchronous write-outs).\n\nOn a system where you really want to keep backend writes to exactly 0\nunder normal circumstances (discounting vacuuming), and having a large\nbuffer cache (say the one gig), it might be nice to be able to say \"ok\n- I have 1 GB of buffer cache. for the purpose of the JIT algorithm,\nplease pretend it's only 900 MB\". 
The result is 100 MB of constantly\nsized \"margin\", with respect to ensuring writes are asynchronous.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Thu, 6 Nov 2008 11:19:18 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: lru_multiplier and backend page write-outs" }, { "msg_contents": "On Thu, 6 Nov 2008, Peter Schuller wrote:\n\n>> In order to keep it from using up the whole cache with maintenance\n>> overhead, vacuum allocates a 256K ring of buffers and use re-uses ones\n>> from there whenever possible.\n>\n> no table was ever large enough that 256k buffers would ever be filled by \n> the process of vacuuming a single table.\n\nNot 256K buffers--256K, 32 buffers.\n\n> In addition, when I say \"constantly\" above I mean that the count\n> increases even between successive SELECT:s (of the stat table) with\n> only a second or two in between.\n\nWrites to the database when only doing read operations are usually related \nto hint bits: http://wiki.postgresql.org/wiki/Hint_Bits\n\n> On this topic btw, was it considered to allow the administrator to\n> specify a fixed-size margin to use when applying the JIT policy?\n\nRight now, there's no way to know exactly what's in the buffer cache \nwithout scanning the individual buffers, which requires locking their \nheaders so you can see them consistently. No one process can get the big \npicture without doing something intrusive like that, and on a busy system \nthe overhead of collecting more data to know how exactly far ahead the \ncleaning is can drag down overall performance. A lot can happen while the \nbackground writer is sleeping.\n\nOne next-generation design which has been sketched out but not even \nprototyped would take cleaned buffers and add them to the internal list of \nbuffers that are free, which right now is usually empty on the theory that \ncached data is always more useful than a reserved buffer. If you \ndeveloped a reasonable model for how many buffers you needed and padded \nthat appropriately, that's the easiest way (given the rest of the buffer \nmanager code) to get close to ensuring there aren't any backend writes. \nBecause you've got the OS buffering writes anyway in most cases, it's hard \nto pin down whether that actually improved worst-case latency though. And \nmoving in that direction always seems to reduce average throughput even in \nwrite-heavy benchmarks.\n\nThe important thing to remember is that the underlying OS has its own read \nand write caching mechanisms here, and unless the PostgreSQL ones are \nmeasurably better than those you might as well let the OS manage the \nproblem instead. It's easy to demonstrate that's happening when you give \na decent amount of memory to shared_buffers, it's much harder to prove \nthat's the case for an improved write scheduling algorithm. 
Stepping back \na bit, you might even consider that one reason PostgreSQL has grown as \nwell as it has in scalability is exactly because it's been riding \nimprovements the underlying OS in many of these cases, rather than trying \nto do all the I/O scheduling itself.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 6 Nov 2008 17:10:29 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lru_multiplier and backend page write-outs" }, { "msg_contents": "> > no table was ever large enough that 256k buffers would ever be filled by \n> > the process of vacuuming a single table.\n> \n> Not 256K buffers--256K, 32 buffers.\n\nOk. \n\n> > In addition, when I say \"constantly\" above I mean that the count\n> > increases even between successive SELECT:s (of the stat table) with\n> > only a second or two in between.\n> \n> Writes to the database when only doing read operations are usually related \n> to hint bits: http://wiki.postgresql.org/wiki/Hint_Bits\n\nSorry, I didn't mean to imply read-only operations (I did read the\nhint bits information a while back though). What I meant was that\nwhile I was constantly generating the insert/delete/update activity, I\nwas selecting the bg writer stats with only a second or two in\nbetween. The intent was to convey that the count of backend written\npages was systematically and constantly (as in a few hundreds per\nhandful of seconds) increasing, in spite of no long running vacuum and\nthe buffer cache not being close to full.\n\n> > On this topic btw, was it considered to allow the administrator to\n> > specify a fixed-size margin to use when applying the JIT policy?\n> \n> Right now, there's no way to know exactly what's in the buffer cache \n> without scanning the individual buffers, which requires locking their \n> headers so you can see them consistently. No one process can get the big \n> picture without doing something intrusive like that, and on a busy system \n> the overhead of collecting more data to know how exactly far ahead the \n> cleaning is can drag down overall performance. A lot can happen while the \n> background writer is sleeping.\n\nUnderstood.\n\n> One next-generation design which has been sketched out but not even \n> prototyped would take cleaned buffers and add them to the internal list of \n> buffers that are free, which right now is usually empty on the theory that \n> cached data is always more useful than a reserved buffer. If you \n> developed a reasonable model for how many buffers you needed and padded \n> that appropriately, that's the easiest way (given the rest of the buffer \n> manager code) to get close to ensuring there aren't any backend writes. \n> Because you've got the OS buffering writes anyway in most cases, it's hard \n> to pin down whether that actually improved worst-case latency though. 
And \n> moving in that direction always seems to reduce average throughput even in \n> write-heavy benchmarks.\n\nOk.\n\n> The important thing to remember is that the underlying OS has its own read \n> and write caching mechanisms here, and unless the PostgreSQL ones are \n> measurably better than those you might as well let the OS manage the \n> problem instead.\n\nThe problem though is that though the OS may be good in the common\ncases it is designed for, it can have specific features that are\ndirectly counter-productive if your goals do not line up with that of\nthe commonly designed-for use case (in particular, if you care about\nlatency a lot and not necessarily about absolute max throughput).\n\nFor example, in Linux up until recently if not still, there is the\n1024 per-inode buffer limit that limited the number of buffers written\nas a result of expiry, which means that when PostgreSQL does its\nfsync(), you may end up having a lot more to write out than what would\nhave been the case if the centisecs_expiry had been enforced,\nregardless of whether PostgreSQL was tuned to write dirty pages out\nsufficiently aggressively. If the amount built up exceeds the capacity\nof the RAID controller cache...\n\nI had a case where I suspect this was exaserbating the\nsituation. Manually doing a 'sync' on the system every few seconds\nnoticably helped (the theory being, because it forced page write-outs\nto happen earlier and in smaller storms).\n\n> It's easy to demonstrate that's happening when you give \n> a decent amount of memory to shared_buffers, it's much harder to prove \n> that's the case for an improved write scheduling algorithm. Stepping back \n> a bit, you might even consider that one reason PostgreSQL has grown as \n> well as it has in scalability is exactly because it's been riding \n> improvements the underlying OS in many of these cases, rather than trying \n> to do all the I/O scheduling itself.\n\nSure. In this case with the backend writes, I am nore interesting in\nunderstanding better what is happening and having better indications\nof when backends block on I/O, than necessarily having a proven\nimprovement in throughput or latency. It makes it easier to reason\nabout what is happening when you *do* have a measured performance\nproblem.\n\nThanks for all the insightful information.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Fri, 7 Nov 2008 00:04:01 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: lru_multiplier and backend page write-outs" } ]
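For readers who want to reproduce the kind of monitoring discussed in this thread, here is a small sketch against the 8.3 pg_stat_bgwriter view (the column names are exactly those shown in the first message); the derived percentage is just one convenient way to watch whether ordinary backends are doing their own buffer writes:

-- Raw cumulative counters
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, maxwritten_clean,
       buffers_backend, buffers_alloc
  FROM pg_stat_bgwriter;

-- Share of buffer writes performed by backends themselves rather than by
-- checkpoints or the background writer.  Sample this periodically and look
-- at the deltas; a steadily climbing buffers_backend under normal load is
-- the symptom being discussed above.
SELECT round(100.0 * buffers_backend
             / NULLIF(buffers_checkpoint + buffers_clean + buffers_backend, 0),
             1) AS pct_backend_writes
  FROM pg_stat_bgwriter;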
[ { "msg_contents": "We recently upgraded the databases for our circuit court applications\nfrom PostgreSQL 8.2.5 to 8.3.4. The application software didn't\nchange. Most software runs fine, and our benchmarks prior to the\nupdate tended to show a measurable, if not dramatic, performance\nimprovement overall. We have found one area where jobs are running\nmuch longer and having a greater impact on concurrent jobs -- those\nwhere the programmer creates and drops many temporary tables\n(thousands) within a database transaction. We are looking to mitigate\nthe problem while we look into rewriting queries where the temporary\ntable usage isn't really needed (probably most of them, but the\nrewrites are not trivial).\n \nI'm trying to quantify the issue, and would appreciate any\nsuggestions, either for mitigation or collecting useful data to find\nthe cause of the performance regression. I create a script which\nbrackets 1000 lines like the following within a single begin/commit:\n \ncreate temporary table tt (c1 int not null primary key, c2 text, c3\ntext); drop table tt;\n \nI run this repeatedly, to get a \"steady state\", with just enough\nsettling time between runs to spot the boundaries of the runs in the\nvmstat 1 output (5 to 20 seconds between runs). I'm surprised at how\nmuch disk output there is for this, in either version. In 8.2.5 a\ntypical run has about 156,000 disk writes in the vmstat output, while\n8.3.4 has about 172,000 writes.\n \nDuring the main part of the run 8.2.5 ranges between 0 and 15 percent\nof cpu time in I/O wait, averaging around 10%; while 8.3.4 ranges\nbetween 15 and 25 percent of cpu time in I/O wait, averaging around\n18%, with occasional outliers on both sides, down to 5% and up to 55%.\nFor both, there's a period of time at the end of the transaction\nwhere the COMMIT seems to be doing disk output without any disk wait,\nsuggesting that the BBU RAID controller is either able to write these\nfaster because there are multiple updates to the same sectors which\nget combined, or that they can be written sequentially.\n \nThe time required for psql to run the script varies little in 8.2.5 --\nfrom 4m43.843s to 4m49.388s. Under 8.3.4 this bounces around from run\nto run -- from 1m28.270s to 5m39.327s.\n \nI can't help wondering why creating and dropping a temporary table\nrequires over 150 disk writes. I also wonder whether there is\nsomething in 8.3.4 which directly causes more writes, or whether it is\nthe result of the new checkpoint and background writer hitting some\npessimal usage pattern where \"just in time\" writes become \"just too\nlate\" to be efficient.\n \nMost concerning is that the 8.3.4 I/O wait time results in slow\nperformance for interactive tasks and results in frustrated users\ncalling the support line complaining of slowness. 
I can confirm that\nqueries which normally run in 10 to 20 ms are running for several\nseconds in competition with the temporary table creation/drop queries,\nwhich wasn't the case before.\n \nI'm going to get the latest snapshot to see if the issue has changed\nfor 8.4devel, but I figured I should post the information I have so\nfar to get feedback.\n \n-Kevin\n", "msg_date": "Wed, 05 Nov 2008 12:45:46 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Create and drop temp table in 8.3.4" }, { "msg_contents": ">>> \"Kevin Grittner\" <[email protected]> wrote: \n> We have found one area where jobs are running\n> much longer and having a greater impact on concurrent jobs -- those\n> where the programmer creates and drops many temporary tables\n> (thousands) within a database transaction.\n \nI forgot to include the standard information about the environment and\nconfiguration.\n \nccsa@COUNTY2-PG:~> cat /proc/version\nLinux version 2.6.16.60-0.31-smp (geeko@buildhost) (gcc version 4.1.2\n20070115 (SUSE Linux)) #1 SMP Tue Oct 7 16:16:29 UTC 2008\nccsa@COUNTY2-PG:~> cat /etc/SuSE-release\nSUSE Linux Enterprise Server 10 (x86_64)\nVERSION = 10\nPATCHLEVEL = 2\nccsa@COUNTY2-PG:~> uname -a\nLinux COUNTY2-PG 2.6.16.60-0.31-smp #1 SMP Tue Oct 7 16:16:29 UTC 2008\nx86_64 x86_64 x86_64 GNU/Linux\n \nTwo dual-core Xeon 3 GHz processors.\n4 GB system RAM.\nBBU RAID controller with 256 MB RAM.\nRAID 5 on 5 spindles.\n \n \n8.2.5:\n \nccsa@COUNTY2-PG:~> /usr/local/pgsql-8.2.5-64/bin/pg_config\nBINDIR = /usr/local/pgsql-8.2.5-64/bin\nDOCDIR = /usr/local/pgsql-8.2.5-64/doc\nINCLUDEDIR = /usr/local/pgsql-8.2.5-64/include\nPKGINCLUDEDIR = /usr/local/pgsql-8.2.5-64/include\nINCLUDEDIR-SERVER = /usr/local/pgsql-8.2.5-64/include/server\nLIBDIR = /usr/local/pgsql-8.2.5-64/lib\nPKGLIBDIR = /usr/local/pgsql-8.2.5-64/lib\nLOCALEDIR =\nMANDIR = /usr/local/pgsql-8.2.5-64/man\nSHAREDIR = /usr/local/pgsql-8.2.5-64/share\nSYSCONFDIR = /usr/local/pgsql-8.2.5-64/etc\nPGXS = /usr/local/pgsql-8.2.5-64/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--prefix=/usr/local/pgsql-8.2.5-64'\n'--enable-integer-datetimes' '--enable-debug' '--disable-nls'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline\n-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -g\nCFLAGS_SL = -fpic\nLDFLAGS = -Wl,-rpath,'/usr/local/pgsql-8.2.5-64/lib'\nLDFLAGS_SL =\nLIBS = -lpgport -lz -lreadline -lcrypt -ldl -lm\nVERSION = PostgreSQL 8.2.5\n \nmax_connections = 50\nshared_buffers = 256MB\ntemp_buffers = 10MB\nmax_prepared_transactions = 0\nwork_mem = 16MB\nmaintenance_work_mem = 400MB\nmax_fsm_pages = 1000000\nbgwriter_lru_percent = 20.0\nbgwriter_lru_maxpages = 200\nbgwriter_all_percent = 10.0\nbgwriter_all_maxpages = 600\nwal_buffers = 256kB\ncheckpoint_segments = 50\narchive_command = '/bin/true'\narchive_timeout = 3600\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\neffective_cache_size = 3GB\ngeqo = off\nfrom_collapse_limit = 20\njoin_collapse_limit = 20\nredirect_stderr = on\nlog_line_prefix = '[%m] %p %q<%u %d %r> '\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 10\nautovacuum_analyze_threshold = 10\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nescape_string_warning = off\nsql_inheritance = off\nstandard_conforming_strings = on\n \n \n8.3.4:\n \nccsa@COUNTY2-PG:~> /usr/local/pgsql-8.3.4-64/bin/pg_config\nBINDIR = /usr/local/pgsql-8.3.4-64/bin\nDOCDIR = 
/usr/local/pgsql-8.3.4-64/doc\nINCLUDEDIR = /usr/local/pgsql-8.3.4-64/include\nPKGINCLUDEDIR = /usr/local/pgsql-8.3.4-64/include\nINCLUDEDIR-SERVER = /usr/local/pgsql-8.3.4-64/include/server\nLIBDIR = /usr/local/pgsql-8.3.4-64/lib\nPKGLIBDIR = /usr/local/pgsql-8.3.4-64/lib\nLOCALEDIR =\nMANDIR = /usr/local/pgsql-8.3.4-64/man\nSHAREDIR = /usr/local/pgsql-8.3.4-64/share\nSYSCONFDIR = /usr/local/pgsql-8.3.4-64/etc\nPGXS = /usr/local/pgsql-8.3.4-64/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--prefix=/usr/local/pgsql-8.3.4-64'\n'--enable-integer-datetimes' '--enable-debug' '--disable-nls'\n'--with-libxml'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline\n-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing\n-fwrapv -g\nCFLAGS_SL = -fpic\nLDFLAGS = -Wl,-rpath,'/usr/local/pgsql-8.3.4-64/lib'\nLDFLAGS_SL =\nLIBS = -lpgport -lxml2 -lz -lreadline -lcrypt -ldl -lm\nVERSION = PostgreSQL 8.3.4\n \nmax_connections = 50\nshared_buffers = 256MB\ntemp_buffers = 10MB\nmax_prepared_transactions = 0\nwork_mem = 16MB\nmaintenance_work_mem = 400MB\nmax_fsm_pages = 1000000\nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4.0\nwal_buffers = 256kB\ncheckpoint_segments = 50\narchive_mode = on\narchive_command = '/bin/true'\narchive_timeout = 3600\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\neffective_cache_size = 3GB\ngeqo = off\nfrom_collapse_limit = 20\njoin_collapse_limit = 20\nlogging_collector = on\nlog_checkpoints = on\nlog_connections = on\nlog_disconnections = on\nlog_line_prefix = '[%m] %p %q<%u %d %r> '\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 10\nautovacuum_analyze_threshold = 10\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\ndefault_text_search_config = 'pg_catalog.english'\nescape_string_warning = off\nsql_inheritance = off\nstandard_conforming_strings = on\n\n", "msg_date": "Wed, 05 Nov 2008 13:05:59 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": ">>> \"Kevin Grittner\" <[email protected]> wrote: \n> I'm going to get the latest snapshot to see if the issue has changed\n> for 8.4devel\n \nIn testing under today's snapshot, it seemed to take 150,000 writes to\ncreate and drop 1,000 temporary tables within a database transaction. \nThe numbers for the various versions might be within the sampling\nnoise, since the testing involved manual steps and required saturating\nthe queues in PostgreSQL, the OS, and the RAID controller to get\nmeaningful numbers. It seems like the complaints of slowness result\nprimarily from these writes saturating the bandwidth when a query\ngenerates a temporary table in a loop, with the increased impact in\nlater releases resulting from it getting through the loop faster.\n \nI've started a thread on the hackers' list to discuss a possible\nPostgreSQL enhancement to help such workloads. 
In the meantime, I\nthink I know which knobs to try turning to mitigate the issue, and\nI'll suggest rewrites to some of these queries, to avoid the temporary\ntables.\n \nIf I find a particular tweak to the background writer or some such is\nparticularly beneficial, I'll post again.\n \n-Kevin\n", "msg_date": "Wed, 05 Nov 2008 17:16:11 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> I'm trying to quantify the issue, and would appreciate any\n> suggestions, either for mitigation or collecting useful data to find\n> the cause of the performance regression. I create a script which\n> brackets 1000 lines like the following within a single begin/commit:\n \n> create temporary table tt (c1 int not null primary key, c2 text, c3\n> text); drop table tt;\n\nI poked at this a little bit. The test case is stressing the system\nmore than might be apparent: there's an index on c1 because of the\nPRIMARY KEY, and the text columns force a toast table to be created,\nwhich has its own index. So that means four separate filesystem\nfiles get created for each iteration, and then dropped at the end of\nthe transaction. (The different behavior you notice at COMMIT must\nbe the cleanup phase where the unlink()s get issued.)\n\nEven though nothing ever gets put in the indexes, their metapages get\ncreated immediately, so we also allocate and write 8K per index.\n\nSo there are three cost components:\n\n1. Filesystem overhead to create and eventually delete all those\nthousands of files.\n\n2. Write traffic for the index metapages.\n\n3. System catalog tuple insertions and deletions (and the ensuing\nWAL log traffic).\n\nI'm not too sure which of these is the dominant cost --- it might\nwell vary from system to system anyway depending on what filesystem\nyou use. But I think it's not #2 since that one would only amount\nto 16MB over the length of the transaction.\n\nAs far as I can tell with strace, the filesystem overhead ought to be\nthe same in 8.2 and 8.3 because pretty much the same series of syscalls\noccurs. So I suspect that the slowdown you saw comes from making a\nlarger number of catalog updates in 8.3; though I can't think what that\nwould be offhand.\n\nA somewhat worrisome point is that the filesystem overhead is going to\nessentially double in CVS HEAD, because of the addition of per-relation\nFSM files. (In fact, Heikki is proposing to triple the overhead by also\nadding DSM files ...) If cost #1 is significant then that could really\nhurt.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Nov 2008 20:35:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4 " }, { "msg_contents": ">>> \"Kevin Grittner\" <[email protected]> wrote: \n> If I find a particular tweak to the background writer or some such\nis\n> particularly beneficial, I'll post again.\n \nIt turns out that it was not the PostgreSQL version which was\nprimarily responsible for the performance difference. We updated the\nkernel at the same time we rolled out 8.3.4, and the new kernel\ndefaulted to using write barriers, while the old kernel didn't. Since\nwe have a BBU RAID controller, we will add nobarrier to the fstab\nentries. 
This makes file creation and unlink each about 20 times\nfaster.\n \n-Kevin\n", "msg_date": "Thu, 06 Nov 2008 13:02:55 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "On Thu, 2008-11-06 at 13:02 -0600, Kevin Grittner wrote:\n> >>> \"Kevin Grittner\" <[email protected]> wrote: \n> > If I find a particular tweak to the background writer or some such\n> is\n> > particularly beneficial, I'll post again.\n> \n> It turns out that it was not the PostgreSQL version which was\n> primarily responsible for the performance difference. We updated the\n> kernel at the same time we rolled out 8.3.4, and the new kernel\n> defaulted to using write barriers, while the old kernel didn't. Since\n> we have a BBU RAID controller, we will add nobarrier to the fstab\n> entries. This makes file creation and unlink each about 20 times\n> faster.\n\nWoah... which version of the kernel was old and new?\n\n> \n> -Kevin\n> \n-- \n\n", "msg_date": "Thu, 06 Nov 2008 11:17:51 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": ">>> \"Joshua D. Drake\" <[email protected]> wrote: \n> On Thu, 2008-11-06 at 13:02 -0600, Kevin Grittner wrote:\n>> the new kernel\n>> defaulted to using write barriers, while the old kernel didn't. \nSince\n>> we have a BBU RAID controller, we will add nobarrier to the fstab\n>> entries. This makes file creation and unlink each about 20 times\n>> faster.\n> \n> Woah... which version of the kernel was old and new?\n \nold:\n \nkgrittn@DBUTL-PG:/var/pgsql/data/test> cat /proc/version\nLinux version 2.6.5-7.287.3-bigsmp (geeko@buildhost) (gcc version 3.3.3\n(SuSE Linux)) #1 SMP Tue Oct 2 07:31:36 UTC 2007\nkgrittn@DBUTL-PG:/var/pgsql/data/test> uname -a\nLinux DBUTL-PG 2.6.5-7.287.3-bigsmp #1 SMP Tue Oct 2 07:31:36 UTC 2007\ni686 i686 i386 GNU/Linux\nkgrittn@DBUTL-PG:/var/pgsql/data/test> cat /etc/SuSE-release\nSUSE LINUX Enterprise Server 9 (i586)\nVERSION = 9\nPATCHLEVEL = 3\n \nnew:\n \nkgrittn@SAWYER-PG:~> cat /proc/version\nLinux version 2.6.16.60-0.27-smp (geeko@buildhost) (gcc version 4.1.2\n20070115 (SUSE Linux)) #1 SMP Mon Jul 28 12:55:32 UTC 2008\nkgrittn@SAWYER-PG:~> uname -a\nLinux SAWYER-PG 2.6.16.60-0.27-smp #1 SMP Mon Jul 28 12:55:32 UTC 2008\nx86_64 x86_64 x86_64 GNU/Linux\nkgrittn@SAWYER-PG:~> cat /etc/SuSE-release\nSUSE Linux Enterprise Server 10 (x86_64)\nVERSION = 10\nPATCHLEVEL = 2\n \nTo be clear, file create and unlink speeds are almost the same between\nthe two kernels without write barriers; the difference is that they\nwere in effect by default in the newer kernel.\n \n-Kevin\n", "msg_date": "Thu, 06 Nov 2008 13:35:26 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "To others that may stumble upon this thread:\nNote that Write Barriers can be very important for data integrity when power\nloss or hardware failure are a concern. Only disable them if you know the\nconsequences are mitigated by other factors (such as a BBU + db using the\nWAL log with sync writes), or if you accept the additional risk to data\nloss. Also note that LVM prevents the possibility of using write barriers,\nand lowers data reliability as a result. 
The consequences are application\ndependent and also highly file system dependent.\n\nOn Temp Tables:\nI am a bit ignorant on the temp table relationship to file creation -- it\nmakes no sense to me at all that a file would even be created for a temp\ntable unless it spills out of RAM or is committed. Inside of a transaction,\nshouldn't they be purely in-memory if there is space? Is there any way to\nprevent the file creation? This seems like a total waste of time for many\ntemp table use cases, and explains why they were so slow in some exploratory\ntesting we did a few months ago.\n\n\nOn Thu, Nov 6, 2008 at 11:35 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> >>> \"Joshua D. Drake\" <[email protected]> wrote:\n> > On Thu, 2008-11-06 at 13:02 -0600, Kevin Grittner wrote:\n> >> the new kernel\n> >> defaulted to using write barriers, while the old kernel didn't.\n> Since\n> >> we have a BBU RAID controller, we will add nobarrier to the fstab\n> >> entries. This makes file creation and unlink each about 20 times\n> >> faster.\n> >\n> > Woah... which version of the kernel was old and new?\n>\n> old:\n>\n> kgrittn@DBUTL-PG:/var/pgsql/data/test> cat /proc/version\n> Linux version 2.6.5-7.287.3-bigsmp (geeko@buildhost) (gcc version 3.3.3\n> (SuSE Linux)) #1 SMP Tue Oct 2 07:31:36 UTC 2007\n> kgrittn@DBUTL-PG:/var/pgsql/data/test> uname -a\n> Linux DBUTL-PG 2.6.5-7.287.3-bigsmp #1 SMP Tue Oct 2 07:31:36 UTC 2007\n> i686 i686 i386 GNU/Linux\n> kgrittn@DBUTL-PG:/var/pgsql/data/test> cat /etc/SuSE-release\n> SUSE LINUX Enterprise Server 9 (i586)\n> VERSION = 9\n> PATCHLEVEL = 3\n>\n> new:\n>\n> kgrittn@SAWYER-PG:~> cat /proc/version\n> Linux version 2.6.16.60-0.27-smp (geeko@buildhost) (gcc version 4.1.2\n> 20070115 (SUSE Linux)) #1 SMP Mon Jul 28 12:55:32 UTC 2008\n> kgrittn@SAWYER-PG:~> uname -a\n> Linux SAWYER-PG 2.6.16.60-0.27-smp #1 SMP Mon Jul 28 12:55:32 UTC 2008\n> x86_64 x86_64 x86_64 GNU/Linux\n> kgrittn@SAWYER-PG:~> cat /etc/SuSE-release\n> SUSE Linux Enterprise Server 10 (x86_64)\n> VERSION = 10\n> PATCHLEVEL = 2\n>\n> To be clear, file create and unlink speeds are almost the same between\n> the two kernels without write barriers; the difference is that they\n> were in effect by default in the newer kernel.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nTo others that may stumble upon this thread: Note that Write Barriers can be very important for data integrity when power loss or hardware failure are a concern.  Only disable them if you know the consequences are mitigated by other factors (such as a BBU + db using the WAL log with sync writes), or if you accept the additional risk to data loss.  Also note that LVM prevents the possibility of using write barriers, and lowers data reliability as a result.   The consequences are application dependent and also highly file system dependent.\nOn Temp Tables:I am a bit ignorant on the temp table relationship to file creation -- it makes no sense to me at all that a file would even be created for a temp table unless it spills out of RAM or is committed.  Inside of a transaction, shouldn't they be purely in-memory if there is space?  Is there any way to prevent the file creation?  
This seems like a total waste of time for many temp table use cases, and explains why they were so slow in some exploratory testing we did a few months ago.\n\nOn Thu, Nov 6, 2008 at 11:35 AM, Kevin Grittner <[email protected]> wrote:\n>>> \"Joshua D. Drake\" <[email protected]> wrote:\n> On Thu, 2008-11-06 at 13:02 -0600, Kevin Grittner wrote:\n>> the new kernel\n>> defaulted to using write barriers, while the old kernel didn't.\nSince\n>> we have a BBU RAID controller, we will add nobarrier to the fstab\n>> entries.  This makes file creation and unlink each about 20 times\n>> faster.\n>\n> Woah... which version of the kernel was old and new?\n\nold:\n\nkgrittn@DBUTL-PG:/var/pgsql/data/test> cat /proc/version\nLinux version 2.6.5-7.287.3-bigsmp (geeko@buildhost) (gcc version 3.3.3\n(SuSE Linux)) #1 SMP Tue Oct 2 07:31:36 UTC 2007\nkgrittn@DBUTL-PG:/var/pgsql/data/test> uname -a\nLinux DBUTL-PG 2.6.5-7.287.3-bigsmp #1 SMP Tue Oct 2 07:31:36 UTC 2007\ni686 i686 i386 GNU/Linux\nkgrittn@DBUTL-PG:/var/pgsql/data/test> cat /etc/SuSE-release\nSUSE LINUX Enterprise Server 9 (i586)\nVERSION = 9\nPATCHLEVEL = 3\n\nnew:\n\nkgrittn@SAWYER-PG:~> cat /proc/version\nLinux version 2.6.16.60-0.27-smp (geeko@buildhost) (gcc version 4.1.2\n20070115 (SUSE Linux)) #1 SMP Mon Jul 28 12:55:32 UTC 2008\nkgrittn@SAWYER-PG:~> uname -a\nLinux SAWYER-PG 2.6.16.60-0.27-smp #1 SMP Mon Jul 28 12:55:32 UTC 2008\nx86_64 x86_64 x86_64 GNU/Linux\nkgrittn@SAWYER-PG:~> cat /etc/SuSE-release\nSUSE Linux Enterprise Server 10 (x86_64)\nVERSION = 10\nPATCHLEVEL = 2\n\nTo be clear, file create and unlink speeds are almost the same between\nthe two kernels without write barriers; the difference is that they\nwere in effect by default in the newer kernel.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 6 Nov 2008 13:05:06 -0800", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": ">>> \"Scott Carey\" <[email protected]> wrote: \n> Note that Write Barriers can be very important for data integrity\nwhen power\n> loss or hardware failure are a concern. Only disable them if you\nknow the\n> consequences are mitigated by other factors (such as a BBU + db using\nthe\n> WAL log with sync writes), or if you accept the additional risk to\ndata\n> loss.\n \nFor those using xfs, this link may be useful:\n \nhttp://oss.sgi.com/projects/xfs/faq.html#wcache\n \n> On Temp Tables:\n> I am a bit ignorant on the temp table relationship to file creation\n-- it\n> makes no sense to me at all that a file would even be created for a\ntemp\n> table unless it spills out of RAM or is committed. Inside of a\ntransaction,\n> shouldn't they be purely in-memory if there is space? Is there any\nway to\n> prevent the file creation? 
This seems like a total waste of time for\nmany\n> temp table use cases, and explains why they were so slow in some\nexploratory\n> testing we did a few months ago.\n \nAs I learned today, creating a temporary table in PostgreSQL can\neasily create four files and do dozens of updates to system tables;\nthat's all before you start actually inserting any data into the\ntemporary table.\n \n-Kevin\n", "msg_date": "Thu, 06 Nov 2008 15:49:59 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "On Thu, Nov 6, 2008 at 2:05 PM, Scott Carey <[email protected]> wrote:\n> To others that may stumble upon this thread:\n> Note that Write Barriers can be very important for data integrity when power\n> loss or hardware failure are a concern. Only disable them if you know the\n> consequences are mitigated by other factors (such as a BBU + db using the\n> WAL log with sync writes), or if you accept the additional risk to data\n> loss. Also note that LVM prevents the possibility of using write barriers,\n> and lowers data reliability as a result. The consequences are application\n> dependent and also highly file system dependent.\n\nI am pretty sure that with no write barriers that even a BBU hardware\ncaching raid controller cannot guarantee your data.\n", "msg_date": "Thu, 6 Nov 2008 15:05:21 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": ">>> \"Scott Marlowe\" <[email protected]> wrote: \n> I am pretty sure that with no write barriers that even a BBU\nhardware\n> caching raid controller cannot guarantee your data.\n \nThat seems at odds with this:\n \nhttp://oss.sgi.com/projects/xfs/faq.html#wcache_persistent\n \nWhat evidence to you have that the SGI XFS team is wrong?\n \nIt does seem fairly bizarre to me that we can't configure our system\nto enforce write barriers within the OS and file system without having\nit enforced all the way past the BBU RAID cache onto the hard drives\nthemselves. We consider that once it hits the battery-backed cache,\nit is persisted. Reality has only contradicted that once so far (with\na RAID controller failure), and our backups have gotten us past that\nwith no sweat.\n \n-Kevin\n", "msg_date": "Thu, 06 Nov 2008 16:33:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "On Thu, Nov 6, 2008 at 3:33 PM, Kevin Grittner\n<[email protected]> wrote:\n>>>> \"Scott Marlowe\" <[email protected]> wrote:\n>> I am pretty sure that with no write barriers that even a BBU\n> hardware\n>> caching raid controller cannot guarantee your data.\n>\n> That seems at odds with this:\n>\n> http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent\n>\n> What evidence to you have that the SGI XFS team is wrong?\n\nLogic? Without write barriers in my file system an fsync request will\nbe immediately returned true, correct? 
That means that writes can\nhappen out of order, and a system crash could corrupt the file system.\n Just seems kind of obvious to me.\n", "msg_date": "Thu, 6 Nov 2008 15:45:57 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": ">>> \"Scott Marlowe\" <[email protected]> wrote: \n> On Thu, Nov 6, 2008 at 3:33 PM, Kevin Grittner\n> <[email protected]> wrote:\n>>>>> \"Scott Marlowe\" <[email protected]> wrote:\n>>> I am pretty sure that with no write barriers that even a BBU\n>> hardware\n>>> caching raid controller cannot guarantee your data.\n>>\n>> That seems at odds with this:\n>>\n>> http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent\n>>\n>> What evidence to you have that the SGI XFS team is wrong?\n> \n> Without write barriers in my file system an fsync request will\n> be immediately returned true, correct?\n \nNot as I understand it; although it will be pretty fast if it all fits\ninto the battery backed cache.\n \n-Kevin\n", "msg_date": "Thu, 06 Nov 2008 17:04:03 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "On Thu, Nov 6, 2008 at 4:04 PM, Kevin Grittner\n<[email protected]> wrote:\n>>>> \"Scott Marlowe\" <[email protected]> wrote:\n>> On Thu, Nov 6, 2008 at 3:33 PM, Kevin Grittner\n>> <[email protected]> wrote:\n>>>>>> \"Scott Marlowe\" <[email protected]> wrote:\n>>>> I am pretty sure that with no write barriers that even a BBU\n>>> hardware\n>>>> caching raid controller cannot guarantee your data.\n>>>\n>>> That seems at odds with this:\n>>>\n>>> http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent\n>>>\n>>> What evidence to you have that the SGI XFS team is wrong?\n>>\n>> Without write barriers in my file system an fsync request will\n>> be immediately returned true, correct?\n>\n> Not as I understand it; although it will be pretty fast if it all fits\n> into the battery backed cache.\n\nOK, thought exercise time. There's a limited size for the cache.\nLet's assume it's much smaller, say 16Megabytes. We turn off write\nbarriers. We start writing data to the RAID array faster than the\ndisks can write it. At some point, the data flowing into the cache is\nbacking up into the OS. Without write barriers, the second we call an\nfsync it returns true. But the data's not in the cache yet, or on the\ndisk. Machine crashes, data is incoherent.\n\nBut that's assuming write barriers work as I understand them.\n", "msg_date": "Thu, 6 Nov 2008 17:03:58 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "On Thu, Nov 6, 2008 at 4:03 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Nov 6, 2008 at 4:04 PM, Kevin Grittner <[email protected]> wrote:\n>> \"Scott Marlowe\" <[email protected]> wrote:\n>>> Without write barriers in my file system an fsync request will\n>>> be immediately returned true, correct?\n>>\n>> Not as I understand it; although it will be pretty fast if it all fits\n>> into the battery backed cache.\n>\n> OK, thought exercise time. There's a limited size for the cache.\n> Let's assume it's much smaller, say 16Megabytes. We turn off write\n> barriers. We start writing data to the RAID array faster than the\n> disks can write it. At some point, the data flowing into the cache is\n> backing up into the OS. 
Without write barriers, the second we call an\n> fsync it returns true. But the data's not in the cache yet, or on the\n> disk. Machine crashes, data is incoherent.\n>\n> But that's assuming write barriers work as I understand them.\n\nLet's try to clear up a couple things:\n\n1. We are talking about 3 different memory caches in order from low to high:\nDisk cache, Controller cache (BBU in this case) and OS cache.\n\n2. A write barrier instructs the lower level hardware that commands\nissued before the barrier must be written to disk before commands\nissued after the barrier. Write barriers are used to ensure that data\nwritten to disk is written in such a way as to maintain filesystem\nconsistency, without losing all the benefits of a write cache.\n\n3. A fsync call forces data to be synced to the controller.\n\nThis means that whenever you call fsync, at the very minimum, the data\nwill have made it to the controller. How much further down the line\nwill depend on whether or not the controller is in WriteBack or\nWriteThrough mode and whether or not the disk is also caching writes.\n\nSo in your example, if the OS is caching some writes and fsync is\ncalled, it won't be returned until at a minimum the controller has\naccepted all the data, regardless of whether or not write barriers are\nenabled.\n\nIn theory, it should be safe to disable write barriers if you have a\nBBU because the BBU should guarantee that all writes will eventually\nmake it to disk (or at least reduce the risk of that not happening to\nan acceptable level).\n\n-Dave\n", "msg_date": "Thu, 6 Nov 2008 18:21:59 -0800", "msg_from": "\"David Rees\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "On Thu, 6 Nov 2008, Scott Marlowe wrote:\n> Without write barriers, the second we call an fsync it returns true.\n>\n> But that's assuming write barriers work as I understand them.\n\nWrite barriers do not work as you understand them.\n\nCalling fsync always blocks until all the data has made it to safe \nstorage, and always has (barring broken systems). Write barriers are meant \nto be a way to speed up fsync-like operations. Before write barriers, all \nthe system could do was call fsync, and that would cause the operating \nsystem to wait for a response from the disc subsystem that the data had \nbeen written before it could start writing some more stuff. Write \nbarriers provide an extra way of telling the disc \"Write everything before \nthe barrier before you write anything after the barrier\", which allows the \noperating system to keep stuffing data into the disc queue without having \nto wait for a response.\n\nSo fsync should always work right, unless the system is horribly broken, \non all systems going back many years.\n\nMatthew\n\n-- \nI'd try being be a pessimist, but it probably wouldn't work anyway.\n", "msg_date": "Tue, 11 Nov 2008 14:52:50 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" }, { "msg_contents": "Seems like this didn't make it through to the list the first time...\n\n* Aidan Van Dyk <[email protected]> [081106 22:19]:\n> * David Rees <[email protected]> [081106 21:22]:\n> \n> > 2. A write barrier instructs the lower level hardware that commands\n> > issued before the barrier must be written to disk before commands\n> > issued after the barrier. 
Write barriers are used to ensure that data\n> > written to disk is written in such a way as to maintain filesystem\n> > consistency, without losing all the benefits of a write cache.\n> > \n> > 3. A fsync call forces data to be synced to the controller.\n> > \n> > This means that whenever you call fsync, at the very minimum, the data\n> > will have made it to the controller. How much further down the line\n> > will depend on whether or not the controller is in WriteBack or\n> > WriteThrough mode and whether or not the disk is also caching writes.\n> > \n> > So in your example, if the OS is caching some writes and fsync is\n> > called, it won't be returned until at a minimum the controller has\n> > accepted all the data, regardless of whether or not write barriers are\n> > enabled.\n> > \n> > In theory, it should be safe to disable write barriers if you have a\n> > BBU because the BBU should guarantee that all writes will eventually\n> > make it to disk (or at least reduce the risk of that not happening to\n> > an acceptable level).\n> \n> All that's \"correct\", but note that fsync doesn't guarentee *coherent*\n> filesystem state has been made to controller. And fsync *can* carry \"later\"\n> writes to the controller.\n> \n> I belive the particular case the prompted the write-barriers to become default\n> was ext3 + journals, where in certain (rare) cases, upon recovery, things were\n> out of sync. What was happening was that ext3 was syncing the journal, but\n> \"extra\" writes were getting carried to the controller during the sync\n> operation, and if something crashed at the right time, \"new\" data was on the\n> disk where the \"old journal\" (because the new journal hadn't finished making\n> it to the controller) didn't expect it.\n> \n> The write barriers give the FS the symantics to say \"all previous queue\n> writes\" [BARRIER] flush to controller [BARRIER] \"any new writes\", and thus\n> guarentee the ordering of certian operations to disk, and guarentee coherency\n> of the FS at all times.\n> \n> Of course, that guarenteed FS consistency comes at a cost. As to it's\n> necessity with the way PG uses the FS w/ WAL.... or it's necessity with\n> xfs...\n> \n> a.\n> \n> -- \n> Aidan Van Dyk Create like a god,\n> [email protected] command like a king,\n> http://www.highrise.ca/ work like a slave.\n\n\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Tue, 11 Nov 2008 10:07:53 -0500", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create and drop temp table in 8.3.4" } ]
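The practical upshot of the exchange above is that dropping barriers is only defensible when the controller cache is battery backed and the WAL is still being flushed through to it. As a quick sanity check of the server-side settings involved, something like the following can be run from psql (these are standard 8.x GUCs; the comments summarise the usual reasoning, not a recommendation for any particular hardware):

SHOW fsync;              -- normally kept 'on' so WAL flushes actually reach the controller
SHOW wal_sync_method;    -- platform dependent: fdatasync, fsync, open_datasync, ...
SHOW full_page_writes;   -- guards against torn pages after a crash

Whether the filesystem barriers themselves can then be disabled remains a hardware- and filesystem-specific call, as the posts above spell out.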
[ { "msg_contents": "So, we had a query run accidentally without going through the right checks\nto ensure that it had the right limits in a where clause for our table\npartitioning, resulting in an attempt to scan TB's of data.\n\nObviously, we fixed the query, but the curious result is this explain plan\n(shortened, in full form its ~3500 lines long). A true cost estimate of ~ 4\nmillion isn't a big deal on this server. It is plainly wrong... wouldn't a\nnested loop of this sort estimate at least 128266*4100644 for the cost? Or\nsomething on that order of magnitude?\nCertainly, a cost estimate that is ... LESS than one of the sub sections of\nthe query is wrong. This was one hell of a broken query, but it at least\nshould have taken an approach that was not a nested loop, and I'm curious if\nthat choice was due to a bad estimate here.\n\nNested Loop IN Join (cost=0.00..3850831.86 rows=128266 width=8)\n Join Filter: ((log.p_p_logs.s_id)::text = (log.s_r_logs.s_id)::text)\n -> Append (cost=0.00..6078.99 rows=128266 width=46)\n -> Seq Scan on p_p_logs (cost=0.00..1.01 rows=1 width=14)\n Filter: ((date >= '2008-10-27'::date) AND (sector = 12))\n -> Seq Scan on p_p_logs_012_2008_10_27 p_p_logs\n(cost=0.00..718.22 rows=15148 width=46)\n Filter: ((date >= '2008-10-27'::date) AND (sector = 12))\n [ Snipped ~ 10 more tables]\n\n -> Append (cost=0.00..4100644.78 rows=29850181 width=118)\n -> Seq Scan on s_r_logs (cost=0.00..1.01 rows=1 width=14)\n Filter: log.s_r_logs.source\n -> Seq Scan on s_r_logs_002_2008_10_01 s_r_logs (cost=0.00..91.00\nrows=1050 width=33)\n Filter: p_log.s_r_logs.source\n -> Seq Scan on s_r_logs_002_2008_10_02 s_r_logs (cost=0.00..65.00\nrows=750 width=33)\n [ Snipped ~1500 tables of various sizes ]\n\nSo, we had a query run accidentally without going through the right checks to ensure that it had the right limits in a where clause for our table partitioning, resulting in an attempt to scan TB's of data.Obviously, we fixed the query, but the curious result is this explain plan (shortened, in full form its ~3500 lines long).  A true cost estimate of ~ 4 million isn't a big deal on this server.  It is plainly wrong...  wouldn't a nested loop of this sort estimate at least 128266*4100644 for the cost?  Or something on that order of magnitude?  \nCertainly, a cost estimate that is ... LESS than one of the sub sections of the query is wrong.   This was one hell of a broken query, but it at least should have taken an approach that was not a nested loop, and I'm curious if that choice was due to a bad estimate here.  
\nNested Loop IN Join  (cost=0.00..3850831.86 rows=128266 width=8)   Join Filter: ((log.p_p_logs.s_id)::text = (log.s_r_logs.s_id)::text)   ->  Append  (cost=0.00..6078.99 rows=128266 width=46)         ->  Seq Scan on p_p_logs  (cost=0.00..1.01 rows=1 width=14)\n               Filter: ((date >= '2008-10-27'::date) AND (sector = 12))         ->  Seq Scan on p_p_logs_012_2008_10_27 p_p_logs  (cost=0.00..718.22 rows=15148 width=46)               Filter: ((date >= '2008-10-27'::date) AND (sector = 12))\n      [ Snipped ~ 10 more tables]   ->  Append  (cost=0.00..4100644.78 rows=29850181 width=118)         ->  Seq Scan on s_r_logs  (cost=0.00..1.01 rows=1 width=14)               Filter: log.s_r_logs.source\n         ->  Seq Scan on s_r_logs_002_2008_10_01 s_r_logs  (cost=0.00..91.00 rows=1050 width=33)               Filter: p_log.s_r_logs.source         ->  Seq Scan on s_r_logs_002_2008_10_02 s_r_logs  (cost=0.00..65.00 rows=750 width=33)\n      [ Snipped ~1500 tables of various sizes ]", "msg_date": "Wed, 5 Nov 2008 12:00:38 -0800", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner cost estimate less than the sum of its parts?" }, { "msg_contents": "\n\"Scott Carey\" <[email protected]> writes:\n\n> Certainly, a cost estimate that is ... LESS than one of the sub sections of\n> the query is wrong. This was one hell of a broken query, but it at least\n> should have taken an approach that was not a nested loop, and I'm curious if\n> that choice was due to a bad estimate here.\n>\n> Nested Loop IN Join (cost=0.00..3850831.86 rows=128266 width=8)\n\nBecause it's an IN join it doesn't have to run the inner join to completion.\nOnce it finds a match it can return the outer tuple and continue to the next\nouter tuple.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Wed, 05 Nov 2008 21:22:09 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner cost estimate less than the sum of its parts?" }, { "msg_contents": "I'll have to think a bit about that given that the query had run for 20\nhours of 250MB/sec-ish disk reads and wasn't done. Luckily, thats not even\n35% disk utilization on this system, and the 'right' query with fewer tables\ndoes things properly with a hash and takes seconds rather than hours\n(days?).\n\nIf it can short-circuit the search, then its probably extremely\nunderestimating how much data it has to look through before finding a match,\nwhich I'd expect out of a partitioned table query since the planner\nassumptions around those are generally bad to really bad (as in, the\naggregate statistics on a list of tables is essentially not used or\ncalculated/estimated wrong). I suppose the real problem is there, its going\nto have to look through most of this data to find a match, on every loop,\nand the planner has no clue.\nIf the nested loop was the other way around it would not have even pinned\nthe disk and have been all in memory on the matching. 
If it had hashed all\nof the estimated 128K values in the top -- which at 1GB for work_mem it\nshould but does not -- it could have scanned once for matches and thrown out\nthose in the hash that did not have a match.\n\nAnyhow this isn't causing a problem at the moment, and it looks like the\nusual culprit with poor planner choices on partition tables and not a new\none.\n\nOn Wed, Nov 5, 2008 at 1:22 PM, Gregory Stark <[email protected]>wrote:\n\n>\n> \"Scott Carey\" <[email protected]> writes:\n>\n> > Certainly, a cost estimate that is ... LESS than one of the sub sections\n> of\n> > the query is wrong. This was one hell of a broken query, but it at\n> least\n> > should have taken an approach that was not a nested loop, and I'm curious\n> if\n> > that choice was due to a bad estimate here.\n> >\n> > Nested Loop IN Join (cost=0.00..3850831.86 rows=128266 width=8)\n>\n> Because it's an IN join it doesn't have to run the inner join to\n> completion.\n> Once it finds a match it can return the outer tuple and continue to the\n> next\n> outer tuple.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's 24x7 Postgres support!\n>\n\nI'll have to think a bit about that given that the query had run for 20 hours of 250MB/sec-ish disk reads and wasn't done.  Luckily, thats not even 35% disk utilization on this system, and the 'right' query with fewer tables does things properly with a hash and takes seconds rather than hours (days?).\nIf it can short-circuit the search, then its probably extremely underestimating how much data it has to look through before finding a match, which I'd expect out of a partitioned table query since the planner assumptions around those are generally bad to really bad (as in, the aggregate statistics on a list of tables is essentially not used or calculated/estimated wrong).  I suppose the real problem is there, its going to have to look through most of this data to find a match, on every loop, and the planner has no clue.\nIf the nested loop was the other way around it would not have even pinned the disk and have been all in memory on the matching.  If it had hashed all of the estimated 128K values in the top -- which at 1GB for work_mem it should but does not -- it could have scanned once for matches and thrown out those in the hash that did not have a match.\nAnyhow this isn't causing a problem at the moment, and it looks like the usual culprit with poor planner choices on partition tables and not a new one.On Wed, Nov 5, 2008 at 1:22 PM, Gregory Stark <[email protected]> wrote:\n\n\"Scott Carey\" <[email protected]> writes:\n\n> Certainly, a cost estimate that is ... LESS than one of the sub sections of\n> the query is wrong.   This was one hell of a broken query, but it at least\n> should have taken an approach that was not a nested loop, and I'm curious if\n> that choice was due to a bad estimate here.\n>\n> Nested Loop IN Join  (cost=0.00..3850831.86 rows=128266 width=8)\n\nBecause it's an IN join it doesn't have to run the inner join to completion.\nOnce it finds a match it can return the outer tuple and continue to the next\nouter tuple.\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's 24x7 Postgres support!", "msg_date": "Wed, 5 Nov 2008 14:19:32 -0800", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner cost estimate less than the sum of its parts?" } ]
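The fix hinted at above amounts to restoring the partition-pruning predicates on the huge inner side of the IN. A rough sketch of the corrected shape, with schema, table and column names guessed from the quoted plan, and the date restriction on s_r_logs assumed to line up with its partitioning scheme:

SET constraint_exclusion = on;   -- needed for child-table pruning on 8.2/8.3

SELECT p.*
FROM   log.p_p_logs p
WHERE  p.date >= '2008-10-27'
AND    p.sector = 12
AND    p.s_id IN (SELECT s.s_id
                  FROM   log.s_r_logs s
                  WHERE  s.source                    -- boolean filter seen in the plan
                  AND    s.date >= '2008-10-27');    -- assumed partition column

With both appends pruned to a handful of children, the planner has a realistic chance of picking the hash-style IN join that makes the "right" query finish in seconds rather than looping over terabytes.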
[ { "msg_contents": "Hi all\n \nMy database server db01 is on linux environment and size of base folder increasing very fast unexpectedly(creating renamed files of 1 GB in base folder like 1667234568.10)\ndetails as below\n \npath of the table space/base file\n \n/opt/appl/pgsql82/data/base/453447624/\n \n[postgres@wcatlsatdb01 453447624]$ ls -ltrh 1662209326*\n-rw-------  1 postgres postgres 1.0G Aug 27 06:32 1662209326\n-rw-------  1 postgres postgres 1.0G Aug 30 02:16 1662209326.1\n-rw-------  1 postgres postgres 1.0G Aug 30 17:39 1662209326.2\n-rw-------  1 postgres postgres 1.0G Sep  1 05:42 1662209326.3\n-rw-------  1 postgres postgres 1.0G Sep  2 09:28 1662209326.4\n-rw-------  1 postgres postgres 1.0G Sep  4 10:08 1662209326.5\n-rw-------  1 postgres postgres 1.0G Sep  6 02:17 1662209326.6\n-rw-------  1 postgres postgres 1.0G Sep  8 03:43 1662209326.7\n-rw-------  1 postgres postgres 1.0G Sep  8 08:16 1662209326.8\n-rw-------  1 postgres postgres 1.0G Sep 10 10:53 1662209326.9\n-rw-------  1 postgres postgres 1.0G Sep 11 04:59 1662209326.10\n-rw-------  1 postgres postgres 1.0G Sep 13 05:22 1662209326.11\n-rw-------  1 postgres postgres 1.0G Sep 15 01:43 1662209326.12\n-rw-------  1 postgres postgres 1.0G Sep 16 08:32 1662209326.13\n-rw-------  1 postgres postgres 1.0G Sep 17 14:30 1662209326.14\n-rw-------  1 postgres postgres 1.0G Sep 19 07:24 1662209326.15\n-rw-------  1 postgres postgres 1.0G Sep 21 02:55 1662209326.16\n-rw-------  1 postgres postgres 1.0G Sep 22 09:59 1662209326.17\n-rw-------  1 postgres postgres 1.0G Sep 23 06:22 1662209326.18\n-rw-------  1 postgres postgres 1.0G Sep 25 08:42 1662209326.19\n-rw-------  1 postgres postgres 1.0G Sep 26 04:13 1662209326.20\n-rw-------  1 postgres postgres 1.0G Sep 29 03:58 1662209326.21\n-rw-------  1 postgres postgres 1.0G Sep 29 08:06 1662209326.22\n-rw-------  1 postgres postgres 1.0G Oct  2 02:23 1662209326.23\n-rw-------  1 postgres postgres 1.0G Oct  2 06:45 1662209326.24\n-rw-------  1 postgres postgres 1.0G Oct  5 03:17 1662209326.25\n-rw-------  1 postgres postgres 1.0G Oct  5 14:39 1662209326.26\n-rw-------  1 postgres postgres 1.0G Oct  7 08:46 1662209326.27\n-rw-------  1 postgres postgres 1.0G Oct  8 18:24 1662209326.28\n-rw-------  1 postgres postgres 1.0G Oct 10 08:24 1662209326.29\n-rw-------  1 postgres postgres 1.0G Oct 12 03:24 1662209326.30\n-rw-------  1 postgres postgres 1.0G Oct 14 02:31 1662209326.31\n-rw-------  1 postgres postgres 1.0G Oct 16 02:36 1662209326.32\n-rw-------  1 postgres postgres 1.0G Oct 17 02:37 1662209326.33\n-rw-------  1 postgres postgres 1.0G Oct 19 02:37 1662209326.34\n-rw-------  1 postgres postgres 1.0G Oct 20 04:45 1662209326.35\n-rw-------  1 postgres postgres 1.0G Oct 22 02:37 1662209326.36\n-rw-------  1 postgres postgres 1.0G Oct 23 02:37 1662209326.37\n-rw-------  1 postgres postgres 1.0G Oct 25 02:38 1662209326.38\n-rw-------  1 postgres postgres 1.0G Oct 26 02:40 1662209326.39\n-rw-------  1 postgres postgres 1.0G Oct 28 02:40 1662209326.40\n-rw-------  1 postgres postgres 1.0G Oct 29 02:39 1662209326.41\n-rw-------  1 postgres postgres 1.0G Oct 31 05:11 1662209326.42\n-rw-------  1 postgres postgres 1.0G Nov  1 02:43 1662209326.43\n-rw-------  1 postgres postgres 1.0G Nov  3 03:31 1662209326.44\n-rw-------  1 postgres postgres 1.0G Nov  4 03:18 1662209326.45\n-rw-------  1 postgres postgres 1.0G Nov  6 02:37 1662209326.46\n-rw-------  1 postgres postgres 1.0G Nov  6 02:38 1662209326.47\n-rw-------  1 postgres postgres 249M Nov  6 02:39 
1662209326.48\n[postgres@wcatlsatdb01 453447624]$\n\nwhat is significance of these files and how can i avoid it.can i delete these renamed files from base folder or any thing else. Please  help\n \nwith regards!\n\n \nBrahma Prakash Tiwari\nDBA \niBoss Tech Solutions \nSec-63 Noida\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe contents of this email, including the attachments, are PRIVILEGED AND CONFIDENTIAL to the intended recipient at the email address to which it has been addressed. \n \n\n\n Get rid of Add-Ons in your email ID. Get [email protected]. Sign up now! http://in.promos.yahoo.com/address\nHi all\n \nMy database server db01 is on linux environment and size of base folder increasing very fast unexpectedly(creating renamed files of 1 GB in base folder like 1667234568.10)\ndetails as below\n \npath of the table space/base file\n \n/opt/appl/pgsql82/data/base/453447624/\n \n[postgres@wcatlsatdb01 453447624]$ ls -ltrh 1662209326*-rw-------  1 postgres postgres 1.0G Aug 27 06:32 1662209326-rw-------  1 postgres postgres 1.0G Aug 30 02:16 1662209326.1-rw-------  1 postgres postgres 1.0G Aug 30 17:39 1662209326.2-rw-------  1 postgres postgres 1.0G Sep  1 05:42 1662209326.3-rw-------  1 postgres postgres 1.0G Sep  2 09:28 1662209326.4-rw-------  1 postgres postgres 1.0G Sep  4 10:08 1662209326.5-rw-------  1 postgres postgres 1.0G Sep  6 02:17 1662209326.6-rw-------  1 postgres postgres 1.0G Sep  8 03:43 1662209326.7-rw-------  1 postgres postgres 1.0G Sep  8 08:16 1662209326.8-rw-------  1 postgres postgres 1.0G Sep 10 10:53 1662209326.9-rw-------  1 postgres postgres 1.0G Sep 11 04:59 1662209326.10-rw-------  1 postgres postgres 1.0G Sep 13 05:22 1662209326.11-rw-------  1\n postgres postgres 1.0G Sep 15 01:43 1662209326.12-rw-------  1 postgres postgres 1.0G Sep 16 08:32 1662209326.13-rw-------  1 postgres postgres 1.0G Sep 17 14:30 1662209326.14-rw-------  1 postgres postgres 1.0G Sep 19 07:24 1662209326.15-rw-------  1 postgres postgres 1.0G Sep 21 02:55 1662209326.16-rw-------  1 postgres postgres 1.0G Sep 22 09:59 1662209326.17-rw-------  1 postgres postgres 1.0G Sep 23 06:22 1662209326.18-rw-------  1 postgres postgres 1.0G Sep 25 08:42 1662209326.19-rw-------  1 postgres postgres 1.0G Sep 26 04:13 1662209326.20-rw-------  1 postgres postgres 1.0G Sep 29 03:58 1662209326.21-rw-------  1 postgres postgres 1.0G Sep 29 08:06 1662209326.22-rw-------  1 postgres postgres 1.0G Oct  2 02:23 1662209326.23-rw-------  1 postgres postgres 1.0G Oct  2 06:45 1662209326.24-rw-------  1 postgres postgres\n 1.0G Oct  5 03:17 1662209326.25-rw-------  1 postgres postgres 1.0G Oct  5 14:39 1662209326.26-rw-------  1 postgres postgres 1.0G Oct  7 08:46 1662209326.27-rw-------  1 postgres postgres 1.0G Oct  8 18:24 1662209326.28-rw-------  1 postgres postgres 1.0G Oct 10 08:24 1662209326.29-rw-------  1 postgres postgres 1.0G Oct 12 03:24 1662209326.30-rw-------  1 postgres postgres 1.0G Oct 14 02:31 1662209326.31-rw-------  1 postgres postgres 1.0G Oct 16 02:36 1662209326.32-rw-------  1 postgres postgres 1.0G Oct 17 02:37 1662209326.33-rw-------  1 postgres postgres 1.0G Oct 19 02:37 1662209326.34-rw-------  1 postgres postgres 1.0G Oct 20 04:45 1662209326.35-rw-------  1 postgres postgres 1.0G Oct 22 02:37 1662209326.36-rw-------  1 postgres postgres 1.0G Oct 23 02:37 1662209326.37-rw-------  1 postgres postgres 1.0G Oct 25\n 02:38 1662209326.38-rw-------  1 postgres postgres 1.0G Oct 26 02:40 1662209326.39-rw-------  1 postgres postgres 1.0G 
Oct 28 02:40 1662209326.40-rw-------  1 postgres postgres 1.0G Oct 29 02:39 1662209326.41-rw-------  1 postgres postgres 1.0G Oct 31 05:11 1662209326.42-rw-------  1 postgres postgres 1.0G Nov  1 02:43 1662209326.43-rw-------  1 postgres postgres 1.0G Nov  3 03:31 1662209326.44-rw-------  1 postgres postgres 1.0G Nov  4 03:18 1662209326.45-rw-------  1 postgres postgres 1.0G Nov  6 02:37 1662209326.46-rw-------  1 postgres postgres 1.0G Nov  6 02:38 1662209326.47-rw-------  1 postgres postgres 249M Nov  6 02:39 1662209326.48[postgres@wcatlsatdb01 453447624]$\nwhat is significance of these files and how can i avoid it.can i delete these renamed files from base folder or any thing else. Please  help\n \nwith regards!\n \nBrahma Prakash TiwariDBA iBoss Tech Solutions Sec-63 Noida\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~The contents of this email, including the attachments, are PRIVILEGED AND CONFIDENTIAL to the intended recipient at the email address to which it has been addressed. \n \n Get rid of Add-Ons in your email ID. Get [email protected]. Sign up now!", "msg_date": "Thu, 6 Nov 2008 16:15:02 +0530 (IST)", "msg_from": "brahma tiwari <[email protected]>", "msg_from_op": true, "msg_subject": "server space increasing very fast but transaction are very low" }, { "msg_contents": "Hi,\n\nOn Thu, Nov 06, 2008 at 04:15:02PM +0530, brahma tiwari wrote:\n\n> My database server db01�is on linux environment and size of base folder increasing very fast unexpectedly(creating renamed files of 1�GB in base folder like 1667234568.10)\n\nThis sounds like your max_fsm_pages setting is too low, so the server\ncannot track free pages any more\n\nBump it up (there used to be a rule of thumb, IIRC it was 65536/1 GB of\ndata/disk space - please correct me, if I'm wrong).\nIIRC, you need to restart PostgresSQL after altering that setting. A\nVACUUM is prbably a good thing as well, maybe even VACUUM FULL. Do not\ndelete anything!\n\nHTH,\n\nTino.\n\n-- \n\"What we nourish flourishes.\" - \"Was wir n�hren erbl�ht.\"\n\nwww.lichtkreis-chemnitz.de\nwww.craniosacralzentrum.de\n", "msg_date": "Thu, 6 Nov 2008 11:57:27 +0100", "msg_from": "Tino Schwarze <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server space increasing very fast but transaction are very low" }, { "msg_contents": "1. Don't email people directly to start a new thread (unless you have a\nsupport contract with them of course).\n\n2. Not much point in sending to two mailing lists simultaneously. You'll\njust split your responses.\n\n\nbrahma tiwari wrote:\n> Hi all\n> \n> My database server db01 is on linux environment and size of base\n> folder increasing very fast unexpectedly(creating renamed files of 1\n> GB in base folder like 1667234568.10) details as below\n\nThese are the files containing your tables / indexes.\nWhen a file gets larger than 1GB the file gets split and you get .1, .2 etc\non the end)\n\n> what is significance of these files and how can i avoid it.can i\n> delete these renamed files from base folder or any thing else. Please\n> help\n\nNEVER delete any files in .../data/base.\n\nSince these files all seem to have the same number they are all the same\nobject (table or index). 
You can see which by looking in pg_class.\nYou'll want to use the number 1662209326 of course.\n\n=> SELECT relname,relpages,reltuples,relfilenode FROM pg_class WHERE\nrelfilenode=2336591;\n relname | relpages | reltuples | relfilenode\n---------+----------+-----------+-------------\n outputs | 3 | 220 | 2336591\n(1 row)\n\nThis is the table outputs on mine which occupies 3 pages on disk and has\nabout 220 rows. You can find out the reverse (size of any table by name)\nwith some useful functions:\n select pg_size_pretty(pg_total_relation_size('my_table_name'));\n\nI'm guessing what you've got is a table that's not being vacuumed\nbecause you've had a transaction that's been open for weeks.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 06 Nov 2008 11:04:30 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server space increasing very fast but transaction are\n very low" }, { "msg_contents": "Richard Huxton <dev 'at' archonet.com> writes:\n\n> I'm guessing what you've got is a table that's not being vacuumed\n> because you've had a transaction that's been open for weeks.\n\nOr because no vacuuming at all is performed on this table (no\nautovacuum and no explicit VACUUM on database or table).\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Thu, 06 Nov 2008 12:57:09 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server space increasing very fast but transaction are very low" } ]
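Two follow-up queries are usually worth running in this situation. The first ranks relations by total on-disk size so the runaway table or index stands out; the second lists backend activity, since a transaction left open for weeks (often visible as '<IDLE> in transaction') is the classic reason plain VACUUM cannot reclaim anything. Column names below are the 8.x ones (procpid, current_query):

SELECT relname, relkind,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM   pg_class
WHERE  relkind IN ('r', 'i', 't')
ORDER  BY pg_total_relation_size(oid) DESC
LIMIT  10;

SELECT procpid, usename, current_query, query_start
FROM   pg_stat_activity
ORDER  BY query_start;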
[ { "msg_contents": "Dear List,\n\nI would like to improve seq scan performance. :-)\n\nI have many cols in a table. I use only 1 col for search on it. It is \nindexed with btree with text_pattern_ops. The search method is: r like \n'%aaa%'\nWhen I make another table with only this col values, the search time is \nbetter when the data is cached. But wronger when the data isn't in cache.\n\nI think the following:\n- When there is a big table with many cols, the seq search is read all \ncols not only searched.\n- If we use an index with full values of a col, it is possible to seq \nscan over the index is make better performance (lower io with smaller data).\n\nIt is possible to make an index on the table, and make a seq index scan \non this values?\n\nThanks for helping.\n\nBest Regards,\n Ferenc\n", "msg_date": "Mon, 10 Nov 2008 07:50:43 +0100", "msg_from": "=?ISO-8859-2?Q?Lutisch=E1n_Ferenc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Improve Seq scan performance" }, { "msg_contents": "Lutischďż˝n Ferenc wrote:\n\n> It is possible to make an index on the table, and make a seq index scan \n> on this values?\n\nMy understanding is that this isn't possible in PostgreSQL, because \nindexes do not contain information about tuple visibility. Data read \nfrom the index might refer to tuples that've been deleted as far as your \ntransaction is concerned, or to tuples that were created after your \nsnapshot was taken.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Nov 2008 15:55:55 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": "> Lutischán Ferenc wrote:\n>\n> It is possible to make an index on the table, and make a seq index scan on\n>> this values?\n>>\n>\n> My understanding is that this isn't possible in PostgreSQL, because indexes\n> do not contain information about tuple visibility. Data read from the index\n> might refer to tuples that've been deleted as far as your transaction is\n> concerned, or to tuples that were created after your snapshot was taken.\n>\n> My understanding is even though indices do not contain information on tuple\nvisibility, index could be used to filter out records that is known to make\nno match. Since btree index stores exact values, PostgreSQL could scan\nthrough the index and skip those entries that do not contain '%aaa%'. That\nwill dramatically improve cases where the criteria has good selectivity,\nsince index has much more compact structure than table.\n\nAs far as I understand, it is discouraged to implement/suggest patches\nduring Commitfest, however, I would love to treat the following case as a\n\"performance bug\" and add it to the \"TODO\" list:\n\n\ncreate table seq_test\n as select cast(i as text) i, repeat('*', 500) padding from\ngenerate_series(1,10000) as s(i);\n\ncreate index i_ix on seq_test(i);\n\nvacuum analyze verbose seq_test;\n-- index \"i_ix\" now contains 10000 row versions in *30 *pages\n-- \"seq_test\": found 0 removable, 10000 nonremovable row versions in *667 *\npages\n\nexplain analyze select * from seq_test where i like '%123%';\n-- Seq Scan reads 667 pages (as expected)\nSeq Scan on seq_test (cost=0.00..792.00 rows=356 width=508) (actual\ntime=0.129..9.071 rows=20 loops=1 read_shared=*667*(667) read_local=0(0)\nflush=0 local_flush=0 file_read=0 file_write=0)\n Filter: (i ~~ '%123%'::text)\nTotal runtime: 9.188 ms\n\nset enable_seqscan=off\n-- Index Scan reads 2529 pages for some reason. 
I would expect *30 *(index\nsize) + *20 *(number of matching entries) = 50 pages maximum, that is 10\ntimes better than with seq scan.\nIndex Scan using i_ix on seq_test (cost=0.00..1643.74 rows=356 width=508)\n(actual time=0.334..16.746 rows=*20 *loops=1 read_shared=2529(2529)\nread_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n Filter: (i ~~ '%123%'::text)\nTotal runtime: 16.863 ms\n\nHopefully, there will be a clear distinction between filtering via index and\nfiltering via table access.\n\n\nRegards,\nVladimir Sitnikov\n\nLutischán Ferenc wrote:\n\n\nIt is possible to make an index on the table, and make a seq index scan on this values?\n\n\nMy understanding is that this isn't possible in PostgreSQL, because indexes do not contain information about tuple visibility. Data read from the index might refer to tuples that've been deleted as far as your transaction is concerned, or to tuples that were created after your snapshot was taken.\nMy understanding is even though indices do not contain information on tuple visibility, index could be used to filter out records that is known to make no match. Since btree index stores exact values, PostgreSQL could scan through the index and skip those entries that do not contain '%aaa%'. That will dramatically improve cases where the criteria has good selectivity, since index has much more compact structure than table.\nAs far as I understand, it is discouraged to implement/suggest patches during Commitfest, however, I would love to treat the following case as a \"performance bug\" and add it to the \"TODO\" list:\ncreate table seq_test as select cast(i as text) i, repeat('*', 500) padding from generate_series(1,10000) as s(i);\ncreate index i_ix on seq_test(i);\nvacuum analyze verbose seq_test;-- index \"i_ix\" now contains 10000 row versions in 30 pages\n-- \"seq_test\": found 0 removable, 10000 nonremovable row versions in 667 pages\nexplain analyze select * from seq_test where i like '%123%';\n-- Seq Scan reads 667 pages (as expected)Seq Scan on seq_test  (cost=0.00..792.00 rows=356 width=508) (actual time=0.129..9.071 rows=20 loops=1 read_shared=667(667) read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n  Filter: (i ~~ '%123%'::text)Total runtime: 9.188 ms\nset enable_seqscan=off-- Index Scan reads 2529 pages for some reason. I would expect 30 (index size) + 20 (number of matching entries) = 50 pages maximum, that is 10 times better than with seq scan.\nIndex Scan using i_ix on seq_test  (cost=0.00..1643.74 rows=356 width=508) (actual time=0.334..16.746 rows=20 loops=1 read_shared=2529(2529) read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n  Filter: (i ~~ '%123%'::text)Total runtime: 16.863 ms\nHopefully, there will be a clear distinction between filtering via index and filtering via table access.Regards,Vladimir Sitnikov", "msg_date": "Sun, 9 Nov 2008 23:37:00 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": "Vladimir Sitnikov wrote:\n>> Lutisch�n Ferenc wrote:\n>>\n>> It is possible to make an index on the table, and make a seq index scan on\n>>> this values?\n>>>\n>> My understanding is that this isn't possible in PostgreSQL, because indexes\n>> do not contain information about tuple visibility. 
Data read from the index\n>> might refer to tuples that've been deleted as far as your transaction is\n>> concerned, or to tuples that were created after your snapshot was taken.\n>>\n> My understanding is even though indices do not contain information on tuple\n> visibility, index could be used to filter out records that is known to make\n> no match.\n\nYes, that's what an index is for. As far as I know, if it's worth doing \nthat, it's worth doing an index scan instead of a seq scan.\n\nMaybe there's some hybrid type possible where you can scan the index to \nfind large table regions that are known /not/ to contain tuples of \ninterest and seek over them in your scan. I wouldn't know, really, but \nit sounds like it'd probably be more I/O than a pure seq scan (given the \nreading of the index too) unless the table had the values of interest \nrather neatly clustered. It'd also surely use more memory and CPU time \nprocessing the whole index to find table regions without values of interest.\n\nIs that what you meant, though?\n\n> create table seq_test\n> as select cast(i as text) i, repeat('*', 500) padding from\n> generate_series(1,10000) as s(i);\n> \n> create index i_ix on seq_test(i);\n> \n> vacuum analyze verbose seq_test;\n> -- index \"i_ix\" now contains 10000 row versions in *30 *pages\n> -- \"seq_test\": found 0 removable, 10000 nonremovable row versions in *667 *\n> pages\n> \n> explain analyze select * from seq_test where i like '%123%';\n\nA b-tree index cannot be used on a LIKE query with a leading wildcard. \nSee the FAQ.\n\nIn addition, if your database is not in the C locale you can't use an \nordinary index for LIKE queries. See the FAQ. You need to create a \ntext_pattern_ops index instead:\n\ncreate index i_ix_txt on seq_test(i text_pattern_ops);\n\n> set enable_seqscan=off\n> -- Index Scan reads 2529 pages for some reason. I would expect *30 *(index\n> size) + *20 *(number of matching entries) = 50 pages maximum, that is 10\n> times better than with seq scan.\n> Index Scan using i_ix on seq_test (cost=0.00..1643.74 rows=356 width=508)\n> (actual time=0.334..16.746 rows=*20 *loops=1 read_shared=2529(2529)\n> read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n> Filter: (i ~~ '%123%'::text)\n> Total runtime: 16.863 ms\n\nI think it's reading the whole index, because it can't do a prefix \nsearch if there's a leading wildcard. I'm a bit confused, though, since \nI thought in this case it couldn't actually execute the query w/o a \nsequential scan, and would just use one irrespective of the \nenable_seqscan param. 
That's what happens here.\n\nHere's what I get:\n\ntest=# create table seq_test as select cast(i as text) AS i, repeat('*', \n500) AS padding from generate_series(1,10000) as s(i);\nSELECT\ntest=# create index i_ix on seq_test(i);\nCREATE INDEX\ntest=# create index i_ix_txt on seq_test(i text_pattern_ops);\nCREATE INDEX\ntest=# vacuum analyze verbose seq_test;\n-- blah blah\nINFO: \"seq_test\": scanned 667 of 667 pages, containing 10000 live rows \nand 0 dead rows; 3000 rows in sample, 10000 estimated total rows\n\ntest=# explain analyze select * from seq_test where i like '%123%';\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------\n Seq Scan on seq_test (cost=0.00..792.00 rows=400 width=508) (actual \ntime=0.081..5.239 rows=20 loops=1)\n Filter: (i ~~ '%123%'::text)\n Total runtime: 5.281 ms\n(3 rows)\n\n-- Now, note lack of leading wildcard:\ntest=# explain analyze select * from seq_test where i like '123%';\n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on seq_test (cost=4.35..40.81 rows=10 width=508) \n(actual time=0.062..0.081 rows=11 loops=1)\n Filter: (i ~~ '123%'::text)\n -> Bitmap Index Scan on i_ix_txt (cost=0.00..4.35 rows=10 width=0) \n(actual time=0.048..0.048 rows=11 loops=1)\n Index Cond: ((i ~>=~ '123'::text) AND (i ~<~ '124'::text))\n Total runtime: 0.121 ms\n(5 rows)\n\ntest=# set enable_seqscan=off;\nSET\ntest=# explain analyze select * from seq_test where i like '%123%';\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------\n Seq Scan on seq_test (cost=100000000.00..100000792.00 rows=400 \nwidth=508) (actual time=0.088..5.666 rows=20 loops=1)\n Filter: (i ~~ '%123%'::text)\n Total runtime: 5.702 ms\n(3 rows)\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Nov 2008 16:59:45 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": ">\n> Maybe there's some hybrid type possible where you can scan the index to\n> find large table regions that are known /not/ to contain tuples of interest\n> and seek over them in your scan. I wouldn't know, really, but it sounds like\n> it'd probably be more I/O than a pure seq scan (given the reading of the\n> index too) unless the table had the values of interest rather neatly\n> clustered. It'd also surely use more memory and CPU time processing the\n> whole index to find table regions without values of interest.\n\n\n>\n> Is that what you meant, though?\n\nNot exactly. I mean the following: there are cases when index scan even\nover non-clustered values is a complete win (basically, it is a win when the\nnumber of returned values is relatively small no matter is it due to\nselectivity or due to limit clause).\nThe test case that I have provided creates a 667 pages long table and 30\npages long index thus a complete scan of the index is 22 times faster in\nterms of I/O.\n\nSuppose you want to find all the values that contain '%123%'. Currently\nPostgreSQL will do a sec scan, while the better option might be (and it is)\nto loop through all the items in the index (it will cost 30 I/O), find\nrecords that truly contain %123% (it will find 20 of them) and do 20 I/O to\ncheck tuple visiblity. 
That is 50 I/O versus 667 for seq scan.\n\n\n\n> A b-tree index cannot be used on a LIKE query with a leading wildcard. See\n> the FAQ.\n\nUnfortunately it is true. I would love to improve that particular case.\n\nIn addition, if your database is not in the C locale you can't use an\n> ordinary index for LIKE queries. See the FAQ. You need to create a\n> text_pattern_ops index instead:\n>\n> create index i_ix_txt on seq_test(i text_pattern_ops);\n\nGood catch. However, that does not change the results. PostgresSQL does the\nsame amount of 2529 I/O for index scan on '%123%' for some unknown reason.\n\n\n>\n>\n> set enable_seqscan=off\n>> -- Index Scan reads 2529 pages for some reason. I would expect *30 *(index\n>> size) + *20 *(number of matching entries) = 50 pages maximum, that is 10\n>> times better than with seq scan.\n>> Index Scan using i_ix on seq_test (cost=0.00..1643.74 rows=356 width=508)\n>> (actual time=0.334..16.746 rows=*20 *loops=1 read_shared=2529(2529)\n>> read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n>> Filter: (i ~~ '%123%'::text)\n>> Total runtime: 16.863 ms\n>>\n>\n> I think it's reading the whole index, because it can't do a prefix search\n> if there's a leading wildcard. I'm a bit confused, though, since I thought\n> in this case it couldn't actually execute the query w/o a sequential scan,\n> and would just use one irrespective of the enable_seqscan param. That's what\n> happens here.\n\nPlease, follow the case carefully: the index is only 30 pages long. Why is\nPostgreSQL doing 2529 I/O? It drives me crazy.\n\n\nRegards,\nVladimir Sitnikov\n\n\nMaybe there's some hybrid type possible where you can scan the index to find large table regions that are known /not/ to contain tuples of interest and seek over them in your scan. I wouldn't know, really, but it sounds like it'd probably be more I/O than a pure seq scan (given the reading of the index too) unless the table had the values of interest rather neatly clustered. It'd also surely use more memory and CPU time processing the whole index to find table regions without values of interest.\n\n\nIs that what you meant, though?Not exactly.  I mean the following:  there are cases when index scan even over non-clustered values is a complete win (basically, it is a win when the number of returned values is relatively small no matter is it due to selectivity or due to limit clause).\nThe test case that I have provided creates a 667 pages long table and 30 pages long index thus a complete scan of the index is 22 times faster in terms of I/O.Suppose you want to find all the values that contain '%123%'. Currently PostgreSQL will do a sec scan, while the better option might be (and it is) to loop through all the items in the index (it will cost 30 I/O), find records that truly contain %123% (it will find 20 of them) and do 20 I/O to check tuple visiblity. That is 50 I/O versus 667 for seq scan.\n A b-tree index cannot be used on a LIKE query with a leading wildcard. See the FAQ.\nUnfortunately it is true. I would love to improve that particular case. \nIn addition, if your database is not in the C locale you can't use an ordinary index for LIKE queries. See the FAQ. You need to create a text_pattern_ops index instead:\n\ncreate index i_ix_txt on seq_test(i text_pattern_ops);Good catch. However, that does not change the results. PostgresSQL does the same amount of 2529 I/O for index scan on '%123%' for some unknown reason.\n \n\n\nset enable_seqscan=off\n-- Index Scan reads 2529 pages for some reason. 
I would expect *30 *(index\nsize) + *20 *(number of matching entries) = 50 pages maximum, that is 10\ntimes better than with seq scan.\nIndex Scan using i_ix on seq_test  (cost=0.00..1643.74 rows=356 width=508)\n(actual time=0.334..16.746 rows=*20 *loops=1 read_shared=2529(2529)\nread_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n  Filter: (i ~~ '%123%'::text)\nTotal runtime: 16.863 ms\n\n\nI think it's reading the whole index, because it can't do a prefix search if there's a leading wildcard. I'm a bit confused, though, since I thought in this case it couldn't actually execute the query w/o a sequential scan, and would just use one irrespective of the enable_seqscan param. That's what happens here.\nPlease, follow the case carefully:  the index is only 30 pages long. Why is PostgreSQL doing 2529 I/O? It drives me crazy.Regards,Vladimir Sitnikov", "msg_date": "Mon, 10 Nov 2008 00:18:12 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": "Vladimir Sitnikov wrote:\n\n> Suppose you want to find all the values that contain '%123%'. Currently\n> PostgreSQL will do a sec scan, while the better option might be (and it is)\n> to loop through all the items in the index (it will cost 30 I/O), find\n> records that truly contain %123% (it will find 20 of them) and do 20 I/O to\n> check tuple visiblity. That is 50 I/O versus 667 for seq scan.\n\nThat does make sense. The 20 visibility checks/tuple reads have a higher \ncost than you've accounted for given that they require seeks. Assuming \nPg's random_page_cost assumption is right and that every tuple of \ninterest is on a different page it'd cost the equivalent of 80 \nsequential page reads, which still brings the total to only 110.\n\nAnyway, sorry I've bothered you about this. I misunderstood the point \nyou were at in investigating this and hadn't realised you were very \nfamiliar with Pg and its innards, so I tried to bring up some points \nthat might help someone who's facing typical issues like \"why doesn't it \nuse an index for %thing%\".\n\n> Please, follow the case carefully: the index is only 30 pages long. Why is\n> PostgreSQL doing 2529 I/O? It drives me crazy.\n\nI certainly can't help you there, though I'm interested myself...\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Nov 2008 17:41:03 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": "Dear Vladimir,\n\nThanks for clear description of the problem. 
:-)\nPlease report it to the bug list.\nI hope it will be accepted as a \"performance bug\" and will be solved.\n\nBest Regards,\n Ferenc\n\nVladimir Sitnikov wrotte:\n>\n> As far as I understand, it is discouraged to implement/suggest patches \n> during Commitfest, however, I would love to treat the following case \n> as a \"performance bug\" and add it to the \"TODO\" list:\n>\n>\n> create table seq_test\n> as select cast(i as text) i, repeat('*', 500) padding from \n> generate_series(1,10000) as s(i);\n>\n> create index i_ix on seq_test(i);\n>\n> vacuum analyze verbose seq_test;\n> -- index \"i_ix\" now contains 10000 row versions in *30 *pages\n> -- \"seq_test\": found 0 removable, 10000 nonremovable row versions in \n> *667 *pages\n>\n> explain analyze select * from seq_test where i like '%123%';\n> -- Seq Scan reads 667 pages (as expected)\n> Seq Scan on seq_test (cost=0.00..792.00 rows=356 width=508) (actual \n> time=0.129..9.071 rows=20 loops=1 read_shared=*667*(667) \n> read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)\n> Filter: (i ~~ '%123%'::text)\n> Total runtime: 9.188 ms\n>\n> set enable_seqscan=off\n> -- Index Scan reads 2529 pages for some reason. I would expect *30 \n> *(index size) + *20 *(number of matching entries) = 50 pages maximum, \n> that is 10 times better than with seq scan.\n> Index Scan using i_ix on seq_test (cost=0.00..1643.74 rows=356 \n> width=508) (actual time=0.334..16.746 rows=*20 *loops=1 \n> read_shared=2529(2529) read_local=0(0) flush=0 local_flush=0 \n> file_read=0 file_write=0)\n> Filter: (i ~~ '%123%'::text)\n> Total runtime: 16.863 ms\n>\n> Hopefully, there will be a clear distinction between filtering via \n> index and filtering via table access.\n\n\n", "msg_date": "Mon, 10 Nov 2008 17:19:05 +0100", "msg_from": "=?ISO-8859-1?Q?Lutisch=E1n_Ferenc?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": "\n> Dear List,\n>\n> I would like to improve seq scan performance. :-)\n>\n> I have many cols in a table. I use only 1 col for search on it. It is \n> indexed with btree with text_pattern_ops. The search method is: r like \n> '%aaa%'\n> When I make another table with only this col values, the search time is \n> better when the data is cached. But wronger when the data isn't in cache.\n>\n> I think the following:\n> - When there is a big table with many cols, the seq search is read all \n> cols not only searched.\n> - If we use an index with full values of a col, it is possible to seq \n> scan over the index is make better performance (lower io with smaller \n> data).\n>\n> It is possible to make an index on the table, and make a seq index scan \n> on this values?\n\n\tYou can fake this (as a test) by creating a separate table with just your \ncolumn of interest and the row id (primary key), updated via triggers, and \nseq scan this table. Seq scanning this small table should be fast. Of \ncourse if you have several column conditions it will get more complex.\n\n\tNote that btrees cannot optimize '%aaa%'. You could use trigrams.\n", "msg_date": "Sun, 16 Nov 2008 15:54:08 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" }, { "msg_contents": "\nOK, I see your problem. Try this :\n\nread this : http://www.postgresql.org/docs/current/static/pgtrgm.html\nlocate and \\i the pg_trgm.sql file\n\nCREATE TABLE dict( s TEXT );\n\nI loaded the english - german dictionary in a test table. 
I didn't parse \nit, so it's just a bunch of 418552 strings, english and german mixed.\n\ntest=> EXPLAIN ANALYZE SELECT * FROM dict WHERE s LIKE '%tation%';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Seq Scan on dict (cost=0.00..7445.90 rows=133 width=13) (actual \ntime=0.102..217.155 rows=802 loops=1)\n Filter: (s ~~ '%tation%'::text)\n Total runtime: 217.451 ms\n(3 lignes)\n\nTemps : 217,846 ms\n\nSince this data does not change very often, let's use a gin index.\n\nCREATE INDEX trgm_idx ON dict USING gin (s gin_trgm_ops);\n\nWith trigrams we can search by similarity. So, we can issue this :\n\nEXPLAIN ANALYZE SELECT s, similarity(s, 'tation') AS sml FROM dict WHERE s \n% 'tation' ORDER BY sml DESC, s;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1114.44..1115.49 rows=419 width=13) (actual \ntime=190.778..190.980 rows=500 loops=1)\n Sort Key: (similarity(s, 'tation'::text)), s\n Sort Method: quicksort Memory: 37kB\n -> Bitmap Heap Scan on dict (cost=35.80..1096.19 rows=419 width=13) \n(actual time=113.486..188.825 rows=500 loops=1)\n Filter: (s % 'tation'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..35.69 rows=419 \nwidth=0) (actual time=112.011..112.011 rows=15891 loops=1)\n Index Cond: (s % 'tation'::text)\n Total runtime: 191.189 ms\n\nIt is not much faster than the seq scan, but it can give you useful \nresults, correct spelling errors, etc.\nPerhaps it's faster when data is not cached.\nSample of returns :\n\n taxation | 0.6\n station | 0.5\n tabulation | 0.5\n taction | 0.5\n Taxation {f} | 0.5\n Taxation {f} | 0.5\n\nIf you do not want to correct for spelling errors, you can do like this :\n\nEXPLAIN ANALYZE SELECT s FROM dict WHERE s LIKE '%tation%' AND s % \n'tation';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on dict (cost=35.70..1096.09 rows=1 width=13) (actual \ntime=66.583..80.980 rows=306 loops=1)\n Filter: ((s ~~ '%tation%'::text) AND (s % 'tation'::text))\n -> Bitmap Index Scan on trgm_idx (cost=0.00..35.69 rows=419 width=0) \n(actual time=65.799..65.799 rows=15891 loops=1)\n Index Cond: (s % 'tation'::text)\n Total runtime: 81.140 ms\n(5 lignes)\n\nTemps : 81,652 ms\n\nIn this case the trigram index is used to narrow the search, and the LIKE \nto get only exact matches.\n\nCareful though, it might not always match, for instance if you search \n\"rat\" you won't find \"consideration\", because the search string is too \nsmall.\n\nAnyway, I would suggest to change your strategy.\n\nYou could try preloading everything into an in-memory array of strings. \nThis would be much faster.\nYou could also try to build a list of unique words from your dictionary, \nwhich contains lots of expressions. Then, when the user enters a query, \nget the words that contain the entered text, and use a full-text index to \nsearch your dictionary.\n\n> I tested first only some words. And later with '%a%', '%b% etc. When I \n> re-query the table with the used term (e.g. 1.'%a%' -slow, 2. '%b%'- \n> slow, '%a%' - fast), it is faster than the old method.\n\nWhen the user enters a very short string like 'a' or 'is', I don't think \nit is relevant to display all entries that contain this, because that \ncould be most of your dictionary. 
Instead, why not display all unique \nwords which start with this string? Far fewer results, faster, and \nprobably more useful too. Then the user can select a longer word and use \nthis.\n\nAlso, pagination is overrated. If there are 50 pages of results, the user \nwill never click on them anyway. They are more likely to refine their \nquery instead. So, just display the first 100 results and be done with it \n;)\n\n\n\n\n", "msg_date": "Mon, 17 Nov 2008 11:51:32 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Seq scan performance" } ]
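The "unique words" idea in that last suggestion can be sketched roughly as follows, reusing the dict(s) example from earlier in the thread. The table name, the word-splitting regex and the sample search are only illustrative, and regexp_split_to_table assumes 8.3 or later; the point is that an anchored prefix search over a small table of distinct words can use a plain btree (text_pattern_ops for non-C locales) instead of scanning every phrase:

CREATE TABLE dict_words AS
  SELECT DISTINCT word
  FROM  (SELECT lower(regexp_split_to_table(s, E'[^[:alnum:]]+')) AS word
         FROM dict) AS t
  WHERE word <> '';

CREATE INDEX dict_words_prefix_ix ON dict_words (word text_pattern_ops);

-- Anchored prefix searches can now use the btree index:
SELECT word FROM dict_words WHERE word LIKE 'tat%' LIMIT 100;

From the matching words, the user can then pick one to run against the full dictionary, optionally combined with the trigram filter shown above.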
[ { "msg_contents": "Why is this view 9x slower than the base table?\n\[email protected]=# explain analyze select count(*) from \nloan_tasks_committed;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------\n Aggregate (cost=994625.69..994625.70 rows=1 width=0) (actual \ntime=7432.306..7432.306 rows=1 loops=1)\n -> Seq Scan on loan_tasks_committed (cost=0.00..929345.35 \nrows=26112135 width=0) (actual time=0.012..5116.776 rows=26115689 \nloops=1)\n Total runtime: 7432.360 ms\n(3 rows)\n\nTime: 7432.858 ms\n\nloan_tasks effectively does SELECT * FROM loan_tasks_committed UNION \nALL SELECT * FROM loan_tasks_pending;. There's some lookup tables for \n_pending, but as this explain shows there's no actual data there \nright now.\n\[email protected]=# explain analyze select count(*) from \nloan_tasks;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------\n Aggregate (cost=1516929.75..1516929.76 rows=1 width=0) (actual \ntime=60396.081..60396.082 rows=1 loops=1)\n -> Append (cost=0.00..1190523.94 rows=26112465 width=240) \n(actual time=0.023..57902.470 rows=26115689 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1190466.70 \nrows=26112135 width=162) (actual time=0.023..54776.335 rows=26115689 \nloops=1)\n -> Seq Scan on loan_tasks_committed \n(cost=0.00..929345.35 rows=26112135 width=162) (actual \ntime=0.014..22531.902 rows=26115689 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=36.10..57.24 rows=330 \nwidth=240) (actual time=0.003..0.003 rows=0 loops=1)\n -> Hash Join (cost=36.10..53.94 rows=330 width=240) \n(actual time=0.002..0.002 rows=0 loops=1)\n Hash Cond: (ltp.loan_task_code_id = ltc.id)\n -> Seq Scan on loan_tasks_pending ltp \n(cost=0.00..13.30 rows=330 width=208) (actual time=0.001..0.001 \nrows=0 loops=1)\n -> Hash (cost=21.60..21.60 rows=1160 \nwidth=36) (never executed)\n -> Seq Scan on loan_task_codes ltc \n(cost=0.00..21.60 rows=1160 width=36) (never executed)\n Total runtime: 60396.174 ms\n(11 rows)\n\nTime: 60397.046 ms\n\n SELECT true AS \"committed\", loan_tasks_committed.id, ..., \nloan_tasks_committed.task_amount\n FROM loan_tasks_committed\nUNION ALL\n SELECT false AS \"committed\", ltp.id, ..., NULL::\"unknown\" AS \ntask_amount\n FROM loan_tasks_pending ltp\n JOIN loan_task_codes ltc ON ltp.loan_task_code_id = ltc.id;\n\nThe stuff I omitted is just some fields and a few other NULLs. This \nis 8.2.9.\n--\nDecibel! [email protected] (512) 569-9461\n\n\n\n", "msg_date": "Mon, 10 Nov 2008 02:27:01 -0600", "msg_from": "Jim 'Decibel!' Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Oddity with view" }, { "msg_contents": "\"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n> loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION \n> ALL SELECT * FROM loan_tasks_pending;.\n\nYou seem to have neglected to mention a join or two.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Nov 2008 08:06:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view " }, { "msg_contents": "On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:\n> \"Jim 'Decibel!' 
Nasby\" <[email protected]> writes:\n>> loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION\n>> ALL SELECT * FROM loan_tasks_pending;.\n>\n> You seem to have neglected to mention a join or two.\n\n\nYeah, though I did show them at the end of the message...\n\n SELECT true AS \"committed\", loan_tasks_committed.id, ..., \nloan_tasks_committed.task_amount\n FROM loan_tasks_committed\nUNION ALL\n SELECT false AS \"committed\", ltp.id, ..., NULL::\"unknown\" AS \ntask_amount\n FROM loan_tasks_pending ltp\n JOIN loan_task_codes ltc ON ltp.loan_task_code_id = ltc.id;\n\nThing is, there's no data to be had on that side. All of the time is \ngoing into the seqscan of loan_tasks_committed. But here's what's \nreally disturbing...\n\n Aggregate (cost=994625.69..994625.70 rows=1 width=0) (actual \ntime=7432.306..7432.306 rows=1 loops=1)\n -> Seq Scan on loan_tasks_committed (cost=0.00..929345.35 \nrows=26112135 width=0) (actual time=0.012..5116.776 rows=26115689 \nloops=1)\n\nvs\n\n Aggregate (cost=1516929.75..1516929.76 rows=1 width=0) (actual \ntime=60396.081..60396.082 rows=1 loops=1)\n -> Append (cost=0.00..1190523.94 rows=26112465 width=240) \n(actual time=0.023..57902.470 rows=26115689 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1190466.70 \nrows=26112135 width=162) (actual time=0.023..54776.335 rows=26115689 \nloops=1)\n -> Seq Scan on loan_tasks_committed \n(cost=0.00..929345.35 rows=26112135 width=162) (actual \ntime=0.014..22531.902 rows=26115689 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=36.10..57.24 rows=330 \nwidth=240) (actual time=0.003..0.003 rows=0 loops=1)\n\nHow on earth did the seqscan suddenly take 4x longer? And why is the \nsubquery scan then doubling the amount of time again?\n--\nDecibel! [email protected] (512) 569-9461\n\n\n\n", "msg_date": "Mon, 10 Nov 2008 12:16:36 -0600", "msg_from": "Jim 'Decibel!' Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddity with view " }, { "msg_contents": "Jim 'Decibel!' Nasby wrote:\n> On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:\n>> \"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n>>> loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION\n>>> ALL SELECT * FROM loan_tasks_pending;.\n>>\n>> You seem to have neglected to mention a join or two.\n> \n> \n> Yeah, though I did show them at the end of the message...\n> \n> SELECT true AS \"committed\", loan_tasks_committed.id, ...,\n> loan_tasks_committed.task_amount\n> FROM loan_tasks_committed\n> UNION ALL\n> SELECT false AS \"committed\", ltp.id, ..., NULL::\"unknown\" AS task_amount\n> FROM loan_tasks_pending ltp\n> JOIN loan_task_codes ltc ON ltp.loan_task_code_id = ltc.id;\n> \n> Thing is, there's no data to be had on that side. All of the time is\n> going into the seqscan of loan_tasks_committed. But here's what's really\n> disturbing...\n\n> -> Seq Scan on loan_tasks_committed (cost=0.00..929345.35\n> rows=26112135 width=0) (actual time=0.012..5116.776 rows=26115689 loops=1)\n\n> -> Seq Scan on loan_tasks_committed \n> (cost=0.00..929345.35 rows=26112135 width=162) (actual\n> time=0.014..22531.902 rows=26115689 loops=1)\n\nIt's the width - the view is fetching all the rows. Is the \"true as\ncommitted\" bit confusing it?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 10 Nov 2008 18:21:27 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view" }, { "msg_contents": "\"Jim 'Decibel!' 
Nasby\" <[email protected]> writes:\n> How on earth did the seqscan suddenly take 4x longer? And why is the \n> subquery scan then doubling the amount of time again?\n\nMaybe the disk access is less sequential because of the need to fetch\nthe other table too?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Nov 2008 13:31:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view " }, { "msg_contents": "On Nov 10, 2008, at 12:21 PM, Richard Huxton wrote:\n> Jim 'Decibel!' Nasby wrote:\n>> On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:\n>>> \"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n>>>> loan_tasks effectively does SELECT * FROM loan_tasks_committed \n>>>> UNION\n>>>> ALL SELECT * FROM loan_tasks_pending;.\n>>>\n>>> You seem to have neglected to mention a join or two.\n>>\n>>\n>> Yeah, though I did show them at the end of the message...\n>>\n>> SELECT true AS \"committed\", loan_tasks_committed.id, ...,\n>> loan_tasks_committed.task_amount\n>> FROM loan_tasks_committed\n>> UNION ALL\n>> SELECT false AS \"committed\", ltp.id, ..., NULL::\"unknown\" AS \n>> task_amount\n>> FROM loan_tasks_pending ltp\n>> JOIN loan_task_codes ltc ON ltp.loan_task_code_id = ltc.id;\n>>\n>> Thing is, there's no data to be had on that side. All of the time is\n>> going into the seqscan of loan_tasks_committed. But here's what's \n>> really\n>> disturbing...\n>\n>> -> Seq Scan on loan_tasks_committed (cost=0.00..929345.35\n>> rows=26112135 width=0) (actual time=0.012..5116.776 rows=26115689 \n>> loops=1)\n>\n>> -> Seq Scan on loan_tasks_committed\n>> (cost=0.00..929345.35 rows=26112135 width=162) (actual\n>> time=0.014..22531.902 rows=26115689 loops=1)\n>\n> It's the width - the view is fetching all the rows. Is the \"true as\n> committed\" bit confusing it?\n\nTurns out, no. 
I was just writing up a stand-alone test case and \nforgot to include that, but there's still a big difference (note what \nI'm pasting is now from HEAD as of a bit ago, but I see the effect on \n8.2 as well):\n\[email protected]=# explain analyze select count(*) from a;\n QUERY PLAN\n------------------------------------------------------------------------ \n---------------------------------------------\n Aggregate (cost=137164.57..137164.58 rows=1 width=0) (actual \ntime=4320.986..4320.986 rows=1 loops=1)\n -> Seq Scan on a (cost=0.00..120542.65 rows=6648765 width=0) \n(actual time=0.188..2707.433 rows=9999999 loops=1)\n Total runtime: 4321.039 ms\n(3 rows)\n\nTime: 4344.158 ms\[email protected]=# explain analyze select count(*) from v;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------\n Aggregate (cost=270286.52..270286.53 rows=1 width=0) (actual \ntime=14766.630..14766.630 rows=1 loops=1)\n -> Append (cost=0.00..187150.20 rows=6650905 width=36) (actual \ntime=0.039..12810.073 rows=9999999 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..187030.30 \nrows=6648765 width=36) (actual time=0.039..10581.367 rows=9999999 \nloops=1)\n -> Seq Scan on a (cost=0.00..120542.65 rows=6648765 \nwidth=36) (actual time=0.038..5731.748 rows=9999999 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=37.67..119.90 \nrows=2140 width=40) (actual time=0.002..0.002 rows=0 loops=1)\n -> Hash Join (cost=37.67..98.50 rows=2140 width=40) \n(actual time=0.002..0.002 rows=0 loops=1)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.000..0.000 rows=0 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 \nwidth=36) (never executed)\n -> Seq Scan on c (cost=0.00..22.30 \nrows=1230 width=36) (never executed)\n Total runtime: 14766.784 ms\n(11 rows)\n\nTime: 14767.550 ms\n\nIn 8.2, it took 20 seconds to go through the view:\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------\n Aggregate (cost=303960.98..303960.99 rows=1 width=0) (actual \ntime=20268.877..20268.877 rows=1 loops=1)\n -> Append (cost=0.00..211578.98 rows=7390560 width=40) (actual \ntime=0.038..17112.190 rows=9999999 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..211467.40 \nrows=7388620 width=36) (actual time=0.038..13973.782 rows=9999999 \nloops=1)\n -> Seq Scan on a (cost=0.00..137581.20 rows=7388620 \nwidth=36) (actual time=0.037..8280.204 rows=9999999 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=36.10..111.58 \nrows=1940 width=40) (actual time=0.003..0.003 rows=0 loops=1)\n -> Hash Join (cost=36.10..92.18 rows=1940 width=40) \n(actual time=0.002..0.002 rows=0 loops=1)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..29.40 rows=1940 \nwidth=8) (actual time=0.000..0.000 rows=0 loops=1)\n -> Hash (cost=21.60..21.60 rows=1160 \nwidth=36) (never executed)\n -> Seq Scan on c (cost=0.00..21.60 \nrows=1160 width=36) (never executed)\n Total runtime: 20269.333 ms\n(11 rows)\n\nThe results for 8.3 are similar to HEAD.\n\nHere's the commands to generate the test case:\n\ncreate table a(a int, b text default 'test text');\ncreate table c(c_id serial primary key, c_text text);\ninsert into c(c_text) values('a'),('b'),('c');\ncreate table b(a int, c_id int references c(c_id));\ncreate view v as select a, b, null as c_id, null as c_text from a \nunion all select a, null, b.c_id, 
c_text from b join c on (b.c_id= \nc.c_id);\n\\timing\ninsert into a(a) select generate_series(1,9999999);\nselect count(*) from a;\nselect count(*) from v;\nexplain analyze select count(*) from a;\nexplain analyze select count(*) from v;\n--\nDecibel! [email protected] (512) 569-9461\n\n\n\n", "msg_date": "Mon, 10 Nov 2008 12:36:12 -0600", "msg_from": "Jim 'Decibel!' Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddity with view (now with test case)" }, { "msg_contents": "\"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n> Here's the commands to generate the test case:\n\n> create table a(a int, b text default 'test text');\n> create table c(c_id serial primary key, c_text text);\n> insert into c(c_text) values('a'),('b'),('c');\n> create table b(a int, c_id int references c(c_id));\n> create view v as select a, b, null as c_id, null as c_text from a \n> union all select a, null, b.c_id, c_text from b join c on (b.c_id= \n> c.c_id);\n> \\timing\n> insert into a(a) select generate_series(1,9999999);\n> select count(*) from a;\n> select count(*) from v;\n> explain analyze select count(*) from a;\n> explain analyze select count(*) from v;\n\nI think what you're looking at is projection overhead and per-plan-node\noverhead (the EXPLAIN ANALYZE in itself contributes quite a lot of the\nlatter). One thing you could do is be more careful about making the\nunion input types match up so that no subquery scan nodes are required:\n\ncreate view v2 as select a, b, null::int as c_id, null::text as c_text from a \nunion all select a, null::text, b.c_id, c_text from b join c on (b.c_id=c.c_id); \n\nOn my machine this runs about twice as fast as the original view.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Nov 2008 14:31:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view (now with test case) " }, { "msg_contents": "On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:\n> \"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n>> Here's the commands to generate the test case:\n>\n>> create table a(a int, b text default 'test text');\n>> create table c(c_id serial primary key, c_text text);\n>> insert into c(c_text) values('a'),('b'),('c');\n>> create table b(a int, c_id int references c(c_id));\n>> create view v as select a, b, null as c_id, null as c_text from a\n>> union all select a, null, b.c_id, c_text from b join c on (b.c_id=\n>> c.c_id);\n>> \\timing\n>> insert into a(a) select generate_series(1,9999999);\n>> select count(*) from a;\n>> select count(*) from v;\n>> explain analyze select count(*) from a;\n>> explain analyze select count(*) from v;\n>\n> I think what you're looking at is projection overhead and per-plan- \n> node\n> overhead (the EXPLAIN ANALYZE in itself contributes quite a lot of the\n> latter).\n\nTrue... under HEAD explain took 13 seconds while a plain count took \n10. Still not very good considering the count from the raw table took \nabout 4 seconds (with or without explain).\n\n> One thing you could do is be more careful about making the\n> union input types match up so that no subquery scan nodes are \n> required:\n>\n> create view v2 as select a, b, null::int as c_id, null::text as \n> c_text from a\n> union all select a, null::text, b.c_id, c_text from b join c on \n> (b.c_id=c.c_id);\n>\n> On my machine this runs about twice as fast as the original view.\n\nAm I missing some magic? 
I'm still getting the subquery scan.\n\[email protected]=# explain select count(*) from v2;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------\n Aggregate (cost=279184.19..279184.20 rows=1 width=0)\n -> Append (cost=0.00..254178.40 rows=10002315 width=0)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..254058.50 \nrows=10000175 width=0)\n -> Seq Scan on a (cost=0.00..154056.75 \nrows=10000175 width=14)\n -> Subquery Scan \"*SELECT* 2\" (cost=37.67..119.90 \nrows=2140 width=0)\n -> Hash Join (cost=37.67..98.50 rows=2140 width=40)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..31.40 rows=2140 \nwidth=8)\n -> Hash (cost=22.30..22.30 rows=1230 width=36)\n -> Seq Scan on c (cost=0.00..22.30 \nrows=1230 width=36)\n(10 rows)\n\nTime: 0.735 ms\[email protected]=# \\d v2\n View \"public.v2\"\n Column | Type | Modifiers\n--------+---------+-----------\n a | integer |\n b | text |\n c_id | integer |\n c_text | text |\nView definition:\n SELECT a.a, a.b, NULL::integer AS c_id, NULL::text AS c_text\n FROM a\nUNION ALL\n SELECT b.a, NULL::text AS b, b.c_id, c.c_text\n FROM b\n JOIN c ON b.c_id = c.c_id;\n\nThat's on HEAD, btw.\n--\nDecibel! [email protected] (512) 569-9461\n\n\n\n", "msg_date": "Mon, 10 Nov 2008 15:06:36 -0600", "msg_from": "Jim 'Decibel!' Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddity with view (now with test case) " }, { "msg_contents": "\"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n> On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:\n>> On my machine this runs about twice as fast as the original view.\n\n> Am I missing some magic? I'm still getting the subquery scan.\n\nHmm, I'm getting a core dump :-( ... this seems to be busted in HEAD.\n8.3 gets it right though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Nov 2008 22:20:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view (now with test case) " }, { "msg_contents": "On Nov 10, 2008, at 9:20 PM, Tom Lane wrote:\n> \"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n>> On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:\n>>> On my machine this runs about twice as fast as the original view.\n>\n>> Am I missing some magic? I'm still getting the subquery scan.\n>\n> Hmm, I'm getting a core dump :-( ... this seems to be busted in HEAD.\n> 8.3 gets it right though.\n\nDoesn't seem to for me... :/\n\[email protected]=# select version();\n \nversion\n------------------------------------------------------------------------ \n-----------------------------------------------------------------\n PostgreSQL 8.3.5 on i386-apple-darwin8.11.1, compiled by GCC i686- \napple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. 
build 5370)\n(1 row)\n\nTime: 0.250 ms\[email protected]=# explain select count(*) from v2;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------\n Aggregate (cost=279184.19..279184.20 rows=1 width=0)\n -> Append (cost=0.00..254178.40 rows=10002315 width=0)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..254058.50 \nrows=10000175 width=0)\n -> Seq Scan on a (cost=0.00..154056.75 \nrows=10000175 width=14)\n -> Subquery Scan \"*SELECT* 2\" (cost=37.67..119.90 \nrows=2140 width=0)\n -> Hash Join (cost=37.67..98.50 rows=2140 width=40)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..31.40 rows=2140 \nwidth=8)\n -> Hash (cost=22.30..22.30 rows=1230 width=36)\n -> Seq Scan on c (cost=0.00..22.30 \nrows=1230 width=36)\n(10 rows)\n\nTime: 0.923 ms\[email protected]=# \\d v2\n View \"public.v2\"\n Column | Type | Modifiers\n--------+---------+-----------\n a | integer |\n b | text |\n c_id | integer |\n c_text | text |\nView definition:\n SELECT a.a, a.b, NULL::integer AS c_id, NULL::text AS c_text\n FROM a\nUNION ALL\n SELECT b.a, NULL::text AS b, b.c_id, c.c_text\n FROM b\n JOIN c ON b.c_id = c.c_id;\n\[email protected]=#\n--\nDecibel! [email protected] (512) 569-9461\n\n\n\n", "msg_date": "Tue, 11 Nov 2008 12:08:24 -0600", "msg_from": "Jim 'Decibel!' Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddity with view (now with test case) " }, { "msg_contents": "\"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n> On Nov 10, 2008, at 9:20 PM, Tom Lane wrote:\n>> 8.3 gets it right though.\n\n> Doesn't seem to for me... :/\n\nOh, I was looking at \"select * from v2\" not \"select count(*) from v2\".\nHEAD is a bit smarter about the latter than 8.3 is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Nov 2008 14:15:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view (now with test case) " }, { "msg_contents": "On Nov 11, 2008, at 1:15 PM, Tom Lane wrote:\n> \"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n>> On Nov 10, 2008, at 9:20 PM, Tom Lane wrote:\n>>> 8.3 gets it right though.\n>\n>> Doesn't seem to for me... :/\n>\n> Oh, I was looking at \"select * from v2\" not \"select count(*) from v2\".\n> HEAD is a bit smarter about the latter than 8.3 is.\n\nSo here's something odd... 
in both 8.3 and HEAD from a while ago it \ngives a better plan for SELECT * than for SELECT count(*):\n\[email protected]=# explain analyze select * from v2;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------\n Result (cost=0.00..254178.40 rows=10002315 width=72) (actual \ntime=0.049..8452.152 rows=9999999 loops=1)\n -> Append (cost=0.00..254178.40 rows=10002315 width=72) (actual \ntime=0.048..5887.025 rows=9999999 loops=1)\n -> Seq Scan on a (cost=0.00..154056.75 rows=10000175 \nwidth=14) (actual time=0.048..4207.482 rows=9999999 loops=1)\n -> Hash Join (cost=37.67..98.50 rows=2140 width=40) \n(actual time=0.002..0.002 rows=0 loops=1)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.000..0.000 rows=0 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 width=36) \n(never executed)\n -> Seq Scan on c (cost=0.00..22.30 rows=1230 \nwidth=36) (never executed)\n Total runtime: 9494.162 ms\n(9 rows)\n\[email protected]=# explain analyze select count(*) from v2;\n QUERY \nPLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------\n Aggregate (cost=279184.19..279184.20 rows=1 width=0) (actual \ntime=13155.524..13155.524 rows=1 loops=1)\n -> Append (cost=0.00..254178.40 rows=10002315 width=0) (actual \ntime=0.045..11042.562 rows=9999999 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..254058.50 \nrows=10000175 width=0) (actual time=0.045..8976.352 rows=9999999 \nloops=1)\n -> Seq Scan on a (cost=0.00..154056.75 \nrows=10000175 width=14) (actual time=0.045..5936.930 rows=9999999 \nloops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=37.67..119.90 \nrows=2140 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n -> Hash Join (cost=37.67..98.50 rows=2140 width=40) \n(actual time=0.002..0.002 rows=0 loops=1)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.001..0.001 rows=0 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 \nwidth=36) (never executed)\n -> Seq Scan on c (cost=0.00..22.30 \nrows=1230 width=36) (never executed)\n Total runtime: 13155.642 ms\n(11 rows)\n\[email protected]=# explain analyze select count(*) from (select \n* from v2 offset 0) a;\n QUERY \nPLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------\n Aggregate (cost=379207.34..379207.35 rows=1 width=0) (actual \ntime=12592.273..12592.274 rows=1 loops=1)\n -> Limit (cost=0.00..254178.40 rows=10002315 width=72) (actual \ntime=0.173..11057.717 rows=9999999 loops=1)\n -> Result (cost=0.00..254178.40 rows=10002315 width=72) \n(actual time=0.172..9213.524 rows=9999999 loops=1)\n -> Append (cost=0.00..254178.40 rows=10002315 \nwidth=72) (actual time=0.172..6608.656 rows=9999999 loops=1)\n -> Seq Scan on a (cost=0.00..154056.75 \nrows=10000175 width=14) (actual time=0.171..4793.116 rows=9999999 \nloops=1)\n -> Hash Join (cost=37.67..98.50 rows=2140 \nwidth=40) (actual time=0.002..0.002 rows=0 loops=1)\n Hash Cond: (b.c_id = c.c_id)\n -> Seq Scan on b (cost=0.00..31.40 \nrows=2140 width=8) (actual time=0.001..0.001 rows=0 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 \nwidth=36) (never executed)\n -> Seq Scan on c \n(cost=0.00..22.30 rows=1230 width=36) (never executed)\n Total runtime: 12592.442 ms\n(11 rows)\n\nAnd yes, explain overhead is huge...\n\[email 
protected]=# \\timing\nTiming is on.\[email protected]=# select count(*) from v2;\n count\n---------\n 9999999\n(1 row)\n\nTime: 6217.624 ms\[email protected]=#\n\n--\nDecibel! [email protected] (512) 569-9461\n\n\n\n", "msg_date": "Tue, 11 Nov 2008 16:42:57 -0600", "msg_from": "Jim 'Decibel!' Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oddity with view (now with test case) " }, { "msg_contents": "\"Jim 'Decibel!' Nasby\" <[email protected]> writes:\n> So here's something odd... in both 8.3 and HEAD from a while ago it \n> gives a better plan for SELECT * than for SELECT count(*):\n\nThe short answer is that the Subquery Scan nodes can be dropped out\nwhen they are no-ops, which is to say producing the same set of columns\ntheir input produces (and not testing any filter conditions, but that's\nnot relevant here). SELECT count(*) doesn't want to know about any\ncolumns so the output of the UNION arm doesn't match ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Nov 2008 21:34:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oddity with view (now with test case) " } ]
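To recap the fix that emerged in this thread, as a sketch against the same test tables a, b and c: casting the placeholder NULLs explicitly gives both UNION ALL arms identical column types, so the planner can drop the no-op Subquery Scan nodes instead of re-projecting every row of the large arm.

-- untyped NULLs (the original view v) force a projection step per arm;
-- typed NULLs make each arm's output match the append relation exactly
CREATE VIEW v2 AS
  SELECT a, b, NULL::int AS c_id, NULL::text AS c_text
    FROM a
  UNION ALL
  SELECT a, NULL::text, b.c_id, c_text
    FROM b JOIN c ON (b.c_id = c.c_id);

EXPLAIN ANALYZE SELECT * FROM v2;   -- the Subquery Scan nodes should be gone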
[ { "msg_contents": "I found that simple IN query on indexed tables takes too much time.\n\ndok and rid have both indexes on int dokumnr columnr and dokumnr is not \nnull.\nPostgreSql can use index on dok or event on rid so it can executed fast.\n\nHow to make this query to run fast ?\n\nAndrus.\n\n\n\nnote: list contain a lot of integers, output below is abbreviated in this\npart.\n\nexplain analyze select\n sum(rid.kogus)\n from dok JOIN rid USING(dokumnr)\n where dok.dokumnr in\n(869906,869907,869910,869911,869914,869915,869916,869917,869918,869921,869925,869926,869928,869929,869934,869935,869936,...)\n\n\"Aggregate (cost=327569.15..327569.16 rows=1 width=9) (actual\ntime=39749.842..39749.846 rows=1 loops=1)\"\n\" -> Hash Join (cost=83872.74..327537.74 rows=12563 width=9) (actual\ntime=25221.702..39697.249 rows=11857 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Seq Scan on rid (cost=0.00..195342.35 rows=3213135 width=13)\n(actual time=0.046..26347.959 rows=3243468 loops=1)\"\n\" -> Hash (cost=83860.76..83860.76 rows=4792 width=4) (actual\ntime=128.366..128.366 rows=4801 loops=1)\"\n\" -> Bitmap Heap Scan on dok (cost=9618.80..83860.76\nrows=4792 width=4) (actual time=58.667..108.611 rows=4801 loops=1)\"\n\" Recheck Cond: ((dokumnr = 869906) OR (dokumnr = 869907)\nOR (dokumnr = 869910) OR (dokumnr = 869911) OR (dokumnr = 869914) OR\n(dokumnr = 869915) OR (dokumnr = 869916) OR (dokumnr = 869917) OR (dokumnr =\n869918) OR (dokumnr = 869921) OR (dokumnr = 869925) OR (dokumnr = 869926) OR\n(dokumnr = 869928) OR (dokumnr = 869929) OR (dokumnr = 869934) OR (dokumnr =\n869935) OR (dokumnr = 869936) OR (dokumnr = 869937) OR (dokumnr = 869940) OR\n(dokumnr = 869941) OR (dokumnr = 869945) OR (dokumnr = 869951) OR (dokumnr =\n869964) OR (dokumnr = 869966) OR (dokumnr = 869969) OR (dokumnr = 869974) OR\n(dokumnr = 869979) OR (dokumnr = 869986) OR (dokumnr = 869992) OR (dokumnr =\n869993) OR (dokumnr = 869995) OR (dokumnr = 869997) OR (dokumnr = 870007) OR\n(dokumnr = 870018) OR (dokumnr = 870021) OR (dokumnr = 870023) OR (dokumnr =\n870025) OR (dokumnr = 870033) OR (dokumnr = 870034) OR (dokumnr = 870036) OR\n(dokumnr = 870038) OR (dokumnr = 870043) OR (dokumnr = 870044) OR (dokumnr =\n870046) OR (dokumnr = 870050) OR (dokumnr = 870051) OR (dokumnr = 870053) OR\n(dokumnr = 870054) OR (dokumnr = 870055) OR (dokumnr = 870064) OR (dokumnr =\n870066) OR (dokumnr = 870069) OR (dokumnr = 870077) OR (dokumnr = 870079) OR\n(dokumnr = 870081) OR (dokumnr = 870084) OR (dokumnr = 870085) OR (dokumnr =\n870090) OR (dokumnr = 870096) OR (dokumnr = 870110) OR (dokumnr = 870111) OR\n(dokumnr = 870117) OR (dokumnr = 870120) OR (dokumnr = 870124) OR (dokumnr =\n870130)\n...\nOR (dokumnr = 890907) OR (dokumnr = 890908))\"\n\" -> BitmapOr (cost=9618.80..9618.80 rows=4801 width=0)\n(actual time=58.248..58.248 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on dok_dokumnr_idx\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.052..0.052 rows=3 loops=1)\"\n\" Index Cond: (dokumnr = 869906)\"\n\" -> Bitmap Index Scan on dok_dokumnr_idx\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.011..0.011 rows=3 loops=1)\"\n\" Index Cond: (dokumnr = 869907)\"\n\" -> Bitmap Index Scan on dok_dokumnr_idx\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.020..0.020 rows=3 loops=1)\"\n\" Index Cond: (dokumnr = 869910)\"\n\" -> Bitmap Index Scan on dok_dokumnr_idx\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.010..0.010 rows=3 loops=1)\"\n\" Index Cond: (dokumnr = 869911)\"\n\" -> Bitmap Index Scan on 
dok_dokumnr_idx\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.008..0.008 rows=3 loops=1)\"\n\" Index Cond: (dokumnr = 869914)\"\n...\n\" -> Bitmap Index Scan on dok_dokumnr_idx\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)\"\n\" Index Cond: (dokumnr = 890908)\"\n\"Total runtime: 39771.385 ms\"\n\n\"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC \ni686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0, \npie-8.7.9)\" \n\n", "msg_date": "Mon, 10 Nov 2008 18:25:00 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Simple indexed IN query takes 40 seconds" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> How to make this query to run fast ?\n\nUsing something newer than 8.1 would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Nov 2008 11:29:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple indexed IN query takes 40 seconds " }, { "msg_contents": "Obviously, most of the total cost (cost=327569, time=39749ms) comes from\ntwo operations in the execution plan:\n\n(a) sequential scan on the 'rid' table (cost=195342, time=26347ms) that\nproduces almost 3.200.000 rows\n(b) hash join of the two subresults (cost=240000, time=14000ms)\n\nHow many rows are there in the 'rid' table? If the 'IN' clause selects\nmore than a few percent of the table, the index won't be used as the\nsequential scan of the whole table will be faster than random access\n(causing a lot of seeks).\n\nTry to:\n\n(a) analyze the table - might help if the stats are too old and don't\nreflect current state\n(b) increase the statistics target of the table (will give more precise\nstats, allowing to select a better plan)\n(c) tune the 'cost' parameters of the planner - the default values are\nquite conservative, so if you have fast disks (regarding seeks) the\nsequential scan may be chosen too early, you may even 'turn off' the\nsequential scan\n\nregards\nTomas\n\n> I found that simple IN query on indexed tables takes too much time.\n>\n> dok and rid have both indexes on int dokumnr columnr and dokumnr is not\n> null.\n> PostgreSql can use index on dok or event on rid so it can executed fast.\n>\n> How to make this query to run fast ?\n>\n> Andrus.\n>\n>\n>\n> note: list contain a lot of integers, output below is abbreviated in this\n> part.\n>\n> explain analyze select\n> sum(rid.kogus)\n> from dok JOIN rid USING(dokumnr)\n> where dok.dokumnr in\n> (869906,869907,869910,869911,869914,869915,869916,869917,869918,869921,869925,869926,869928,869929,869934,869935,869936,...)\n>\n> \"Aggregate (cost=327569.15..327569.16 rows=1 width=9) (actual\n> time=39749.842..39749.846 rows=1 loops=1)\"\n> \" -> Hash Join (cost=83872.74..327537.74 rows=12563 width=9) (actual\n> time=25221.702..39697.249 rows=11857 loops=1)\"\n> \" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n> \" -> Seq Scan on rid (cost=0.00..195342.35 rows=3213135 width=13)\n> (actual time=0.046..26347.959 rows=3243468 loops=1)\"\n> \" -> Hash (cost=83860.76..83860.76 rows=4792 width=4) (actual\n> time=128.366..128.366 rows=4801 loops=1)\"\n> \" -> Bitmap Heap Scan on dok (cost=9618.80..83860.76\n> rows=4792 width=4) (actual time=58.667..108.611 rows=4801 loops=1)\"\n> \" Recheck Cond: ((dokumnr = 869906) OR (dokumnr =\n> 869907)\n> OR (dokumnr = 869910) OR (dokumnr = 869911) OR (dokumnr = 869914) OR\n> (dokumnr = 869915) OR (dokumnr = 869916) OR (dokumnr = 869917) OR (dokumnr\n> =\n> 869918) OR (dokumnr = 
869921) OR (dokumnr = 869925) OR (dokumnr = 869926)\n> OR\n> (dokumnr = 869928) OR (dokumnr = 869929) OR (dokumnr = 869934) OR (dokumnr\n> =\n> 869935) OR (dokumnr = 869936) OR (dokumnr = 869937) OR (dokumnr = 869940)\n> OR\n> (dokumnr = 869941) OR (dokumnr = 869945) OR (dokumnr = 869951) OR (dokumnr\n> =\n> 869964) OR (dokumnr = 869966) OR (dokumnr = 869969) OR (dokumnr = 869974)\n> OR\n> (dokumnr = 869979) OR (dokumnr = 869986) OR (dokumnr = 869992) OR (dokumnr\n> =\n> 869993) OR (dokumnr = 869995) OR (dokumnr = 869997) OR (dokumnr = 870007)\n> OR\n> (dokumnr = 870018) OR (dokumnr = 870021) OR (dokumnr = 870023) OR (dokumnr\n> =\n> 870025) OR (dokumnr = 870033) OR (dokumnr = 870034) OR (dokumnr = 870036)\n> OR\n> (dokumnr = 870038) OR (dokumnr = 870043) OR (dokumnr = 870044) OR (dokumnr\n> =\n> 870046) OR (dokumnr = 870050) OR (dokumnr = 870051) OR (dokumnr = 870053)\n> OR\n> (dokumnr = 870054) OR (dokumnr = 870055) OR (dokumnr = 870064) OR (dokumnr\n> =\n> 870066) OR (dokumnr = 870069) OR (dokumnr = 870077) OR (dokumnr = 870079)\n> OR\n> (dokumnr = 870081) OR (dokumnr = 870084) OR (dokumnr = 870085) OR (dokumnr\n> =\n> 870090) OR (dokumnr = 870096) OR (dokumnr = 870110) OR (dokumnr = 870111)\n> OR\n> (dokumnr = 870117) OR (dokumnr = 870120) OR (dokumnr = 870124) OR (dokumnr\n> =\n> 870130)\n> ...\n> OR (dokumnr = 890907) OR (dokumnr = 890908))\"\n> \" -> BitmapOr (cost=9618.80..9618.80 rows=4801\n> width=0)\n> (actual time=58.248..58.248 rows=0 loops=1)\"\n> \" -> Bitmap Index Scan on dok_dokumnr_idx\n> (cost=0.00..2.00 rows=1 width=0) (actual time=0.052..0.052 rows=3\n> loops=1)\"\n> \" Index Cond: (dokumnr = 869906)\"\n> \" -> Bitmap Index Scan on dok_dokumnr_idx\n> (cost=0.00..2.00 rows=1 width=0) (actual time=0.011..0.011 rows=3\n> loops=1)\"\n> \" Index Cond: (dokumnr = 869907)\"\n> \" -> Bitmap Index Scan on dok_dokumnr_idx\n> (cost=0.00..2.00 rows=1 width=0) (actual time=0.020..0.020 rows=3\n> loops=1)\"\n> \" Index Cond: (dokumnr = 869910)\"\n> \" -> Bitmap Index Scan on dok_dokumnr_idx\n> (cost=0.00..2.00 rows=1 width=0) (actual time=0.010..0.010 rows=3\n> loops=1)\"\n> \" Index Cond: (dokumnr = 869911)\"\n> \" -> Bitmap Index Scan on dok_dokumnr_idx\n> (cost=0.00..2.00 rows=1 width=0) (actual time=0.008..0.008 rows=3\n> loops=1)\"\n> \" Index Cond: (dokumnr = 869914)\"\n> ...\n> \" -> Bitmap Index Scan on dok_dokumnr_idx\n> (cost=0.00..2.00 rows=1 width=0) (actual time=0.008..0.008 rows=1\n> loops=1)\"\n> \" Index Cond: (dokumnr = 890908)\"\n> \"Total runtime: 39771.385 ms\"\n>\n> \"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC\n> i686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,\n> pie-8.7.9)\"\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Mon, 10 Nov 2008 18:12:22 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Simple indexed IN query takes 40 seconds" }, { "msg_contents": "Tom,\n\n> Using something newer than 8.1 would help.\n\nThank you.\n\nIf \n\nCREATE TEMP TABLE ids ( id int ) ON COMMIT DROP;\n\nis created, ids are added to this table and \nids table is used in inner join insted of IN clause or\n\nIN clause is replaced with\n\n... 
dokumnr IN ( SELECT id FROM ids ) ...\n\n, will this fix the issue in 8.1.4 ?\n\nAndrus.\n", "msg_date": "Mon, 10 Nov 2008 20:23:41 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple indexed IN query takes 40 seconds " } ]
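Nobody benchmarked it in the thread, but the temp-table variant Andrus describes would look roughly like this on 8.1 (which has no multi-row VALUES, so the ids go in one INSERT at a time or via COPY); whether the planner then picks an index scan on dok instead of hashing all of rid still has to be verified with EXPLAIN ANALYZE:

BEGIN;
CREATE TEMP TABLE ids ( id int PRIMARY KEY ) ON COMMIT DROP;
INSERT INTO ids VALUES (869906);
INSERT INTO ids VALUES (869907);
-- ... one INSERT per document number, or COPY ids FROM STDIN ...
ANALYZE ids;   -- give the planner a row estimate for the temp table

SELECT sum(rid.kogus)
  FROM dok JOIN rid USING (dokumnr)
 WHERE dok.dokumnr IN (SELECT id FROM ids);
COMMIT;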
[ { "msg_contents": "Hi,\n\nI have table with cca 60.000 rows and\nwhen I run query as:\n Update table SET column=0;\nafter 10 minutes i must stop query, but it still running :(\n\nI've Postgres 8.1 with all default settings in postgres.conf\n\nWhere is the problem?\n\nThak you for any tips.\n\nbest regards.\nMarek Fiala\n\n\n\n\n\n\n", "msg_date": "Mon, 10 Nov 2008 17:30:28 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "slow full table update" }, { "msg_contents": "Sorry, but you have to provide much more information about the table. The\ninformation you've provided is really not sufficient - the rows might be\nlarge or small. I guess it's the second option, with a lots of dead rows.\n\nTry this:\n\nANALYZE table;\nSELECT relpages, reltuples FROM pg_class WHERE relname = 'table';\n\nAnyway, is the autovacuum running? What are the parameters? Try to execute\n\nVACUUM table;\n\nand then run the two commands above. That might 'clean' the table and\nimprove the update performance. Don't forget each such UPDATE will\nactually create a copy of all the modified rows (that's how PostgreSQL\nworks), so if you don't run VACUUM periodically or autovacuum demon, then\nthe table will bloat (occupy much more disk space than it should).\n\nIf it does not help, try do determine if the UPDATE is CPU or disk bound.\nI'd guess there are problems with I/O bottleneck (due to the bloating).\n\nregards\nTomas\n\n> Hi,\n>\n> I have table with cca 60.000 rows and\n> when I run query as:\n> Update table SET column=0;\n> after 10 minutes i must stop query, but it still running :(\n>\n> I've Postgres 8.1 with all default settings in postgres.conf\n>\n> Where is the problem?\n>\n> Thak you for any tips.\n>\n> best regards.\n> Marek Fiala\n>\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Mon, 10 Nov 2008 17:41:40 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "Hi, \n\nthank you for your reply.\n\nHere is some aditional information:\n\nthe problem is on every tables with small and large rows too.\nautovacuum is running.\n\nrelpages\treltuples\n6213 54743\n\ntables are almost write-only\nMunin Graphs shows that problems is with I/O bottleneck.\n\nI found out that\nUpdate 100 rows takes 0.3s\nbut update 1000 rows takes 50s\n\nIs this better information?\n\nThanks for any help.\n\nbest regards\nMarek Fiala\n______________________________________________________________\n> Od: [email protected]\n> Komu: [email protected]\n&gt; CC: [email protected]\n> Datum: 10.11.2008 17:42\n> Předmět: Re: [PERFORM] slow full table update\n>\n>Sorry, but you have to provide much more information about the table. The\n>information you've provided is really not sufficient - the rows might be\n>large or small. I guess it's the second option, with a lots of dead rows.\n>\n>Try this:\n>\n>ANALYZE table;\n>SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';\n>\n>Anyway, is the autovacuum running? What are the parameters? Try to execute\n>\n>VACUUM table;\n>\n>and then run the two commands above. That might 'clean' the table and\n>improve the update performance. 
Don't forget each such UPDATE will\n>actually create a copy of all the modified rows (that's how PostgreSQL\n>works), so if you don't run VACUUM periodically or autovacuum demon, then\n>the table will bloat (occupy much more disk space than it should).\n>\n>If it does not help, try do determine if the UPDATE is CPU or disk bound.\n>I'd guess there are problems with I/O bottleneck (due to the bloating).\n>\n>regards\n>Tomas\n>\n>> Hi,\n>>\n>> I have table with cca 60.000 rows and\n>> when I run query as:\n>> Update table SET column=0;\n>> after 10 minutes i must stop query, but it still running :(\n>>\n>> I've Postgres 8.1 with all default settings in postgres.conf\n>>\n>> Where is the problem?\n>>\n>> Thak you for any tips.\n>>\n>> best regards.\n>> Marek Fiala\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n", "msg_date": "Wed, 12 Nov 2008 17:25:49 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow full table update" }, { "msg_contents": "Hi,\n\nso the table occupies about 50 MB, i.e. each row has about 1 kB, right?\nUpdating 1000 rows should means about 1MB of data to be updated.\n\nThere might be a problem with execution plan of the updates - I guess the\n100 rows update uses index scan and the 1000 rows update might use seq\nscan.\n\nAnyway the table is not too big, so I wouldn't expect such I/O bottleneck\non a properly tuned system. Have you checked the postgresql.conf settings?\nWhat are the values for\n\n1) shared_buffers - 8kB pages used as a buffer (try to increase this a\nlittle, for example to 1000, i.e. 8MB, or even more)\n\n2) checkpoint_segments - number of 16MB checkpoint segments, aka\ntransaction logs, this usually improves the write / update performance a\nlot, so try to increase the default value (3) to at least 8\n\n3) wal_buffers - 8kB pages used to store WAL (minimal effect usually, but\ntry to increase it to 16 - 64, just to be sure)\n\nThere is a nicely annotated config, with recommendations on how to set the\nvalues based on usage etc. See this:\n\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\nhttp://www.powerpostgresql.com/PerfList\n\nregards\nTomas\n\n> Hi,\n>\n> thank you for your reply.\n>\n> Here is some aditional information:\n>\n> the problem is on every tables with small and large rows too.\n> autovacuum is running.\n>\n> relpages\treltuples\n> 6213 54743\n>\n> tables are almost write-only\n> Munin Graphs shows that problems is with I/O bottleneck.\n>\n> I found out that\n> Update 100 rows takes 0.3s\n> but update 1000 rows takes 50s\n>\n> Is this better information?\n>\n> Thanks for any help.\n>\n> best regards\n> Marek Fiala\n> ______________________________________________________________\n>> Od: [email protected]\n>> Komu: [email protected]\n> &gt; CC: [email protected]\n>> Datum: 10.11.2008 17:42\n>> Pďż˝&#65533;edmďż˝&#65533;t: Re: [PERFORM] slow full table update\n>>\n>>Sorry, but you have to provide much more information about the table. The\n>>information you've provided is really not sufficient - the rows might be\n>>large or small. 
I guess it's the second option, with a lots of dead rows.\n>>\n>>Try this:\n>>\n>>ANALYZE table;\n>>SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';\n>>\n>>Anyway, is the autovacuum running? What are the parameters? Try to\n>> execute\n>>\n>>VACUUM table;\n>>\n>>and then run the two commands above. That might 'clean' the table and\n>>improve the update performance. Don't forget each such UPDATE will\n>>actually create a copy of all the modified rows (that's how PostgreSQL\n>>works), so if you don't run VACUUM periodically or autovacuum demon, then\n>>the table will bloat (occupy much more disk space than it should).\n>>\n>>If it does not help, try do determine if the UPDATE is CPU or disk bound.\n>>I'd guess there are problems with I/O bottleneck (due to the bloating).\n>>\n>>regards\n>>Tomas\n>>\n>>> Hi,\n>>>\n>>> I have table with cca 60.000 rows and\n>>> when I run query as:\n>>> Update table SET column=0;\n>>> after 10 minutes i must stop query, but it still running :(\n>>>\n>>> I've Postgres 8.1 with all default settings in postgres.conf\n>>>\n>>> Where is the problem?\n>>>\n>>> Thak you for any tips.\n>>>\n>>> best regards.\n>>> Marek Fiala\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>>\n>>--\n>>Sent via pgsql-performance mailing list\n>> ([email protected])\n>>To make changes to your subscription:\n>>http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Wed, 12 Nov 2008 17:47:59 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "Hi,\n\nI've changed settings, \nbut with no effect on speed.\n\nI try explain query with this result\nfor 10.000 rows > update songs set views = 0 where sid > 20000 and sid < 30000\n\nBitmap Heap Scan on songs (cost=151.59..6814.29 rows=8931 width=526) (actual time=4.848..167.855 rows=8945 loops=1)\n\n Recheck Cond: ((sid > 20000) AND (sid < 30000))\n\n -> Bitmap Index Scan on pk_songs2 (cost=0.00..151.59 rows=8931 width=0) (actual time=4.071..4.071 rows=9579 loops=1)\n\n Index Cond: ((sid > 20000) AND (sid < 30000))\n\nIs there a way to run this query on sigle throughpass with no Recheck Cond?\n\nThank you.\n\nbest regards\nMarek Fiala\n\n______________________________________________________________\n> Od: [email protected]\n> Komu: [email protected]\n> Datum: 12.11.2008 17:48\n> Předmět: Re: [PERFORM] slow full table update\n>\n>Hi,\n>\n>so the table occupies about 50 MB, i.e. each row has about 1 kB, right?\n>Updating 1000 rows should means about 1MB of data to be updated.\n>\n>There might be a problem with execution plan of the updates - I guess the\n>100 rows update uses index scan and the 1000 rows update might use seq\n>scan.\n>\n>Anyway the table is not too big, so I wouldn't expect such I/O bottleneck\n>on a properly tuned system. Have you checked the postgresql.conf settings?\n>What are the values for\n>\n>1) shared_buffers - 8kB pages used as a buffer (try to increase this a\n>little, for example to 1000, i.e. 
8MB, or even more)\n>\n>2) checkpoint_segments - number of 16MB checkpoint segments, aka\n>transaction logs, this usually improves the write / update performance a\n>lot, so try to increase the default value (3) to at least 8\n>\n>3) wal_buffers - 8kB pages used to store WAL (minimal effect usually, but\n>try to increase it to 16 - 64, just to be sure)\n>\n>There is a nicely annotated config, with recommendations on how to set the\n>values based on usage etc. See this:\n>\n>http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n>http://www.powerpostgresql.com/PerfList\n>\n>regards\n>Tomas\n>\n>> Hi,\n>>\n>> thank you for your reply.\n>>\n>> Here is some aditional information:\n>>\n>> the problem is on every tables with small and large rows too.\n>> autovacuum is running.\n>>\n>> relpages\treltuples\n>> 6213 54743\n>>\n>> tables are almost write-only\n>> Munin Graphs shows that problems is with I/O bottleneck.\n>>\n>> I found out that\n>> Update 100 rows takes 0.3s\n>> but update 1000 rows takes 50s\n>>\n>> Is this better information?\n>>\n>> Thanks for any help.\n>>\n>> best regards\n>> Marek Fiala\n>> ______________________________________________________________\n>>> Od: [email protected]\n>>> Komu: [email protected]\n>> &gt; CC: [email protected]\n>>> Datum: 10.11.2008 17:42\n>>> PĹ&#65533;edmÄ&#65533;t: Re: [PERFORM] slow full table update\n>>>\n>>>Sorry, but you have to provide much more information about the table. The\n>>>information you've provided is really not sufficient - the rows might be\n>>>large or small. I guess it's the second option, with a lots of dead rows.\n>>>\n>>>Try this:\n>>>\n>>>ANALYZE table;\n>>>SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';\n>>>\n>>>Anyway, is the autovacuum running? What are the parameters? Try to\n>>> execute\n>>>\n>>>VACUUM table;\n>>>\n>>>and then run the two commands above. That might 'clean' the table and\n>>>improve the update performance. 
Don't forget each such UPDATE will\n>>>actually create a copy of all the modified rows (that's how PostgreSQL\n>>>works), so if you don't run VACUUM periodically or autovacuum demon, then\n>>>the table will bloat (occupy much more disk space than it should).\n>>>\n>>>If it does not help, try do determine if the UPDATE is CPU or disk bound.\n>>>I'd guess there are problems with I/O bottleneck (due to the bloating).\n>>>\n>>>regards\n>>>Tomas\n>>>\n>>>> Hi,\n>>>>\n>>>> I have table with cca 60.000 rows and\n>>>> when I run query as:\n>>>> Update table SET column=0;\n>>>> after 10 minutes i must stop query, but it still running :(\n>>>>\n>>>> I've Postgres 8.1 with all default settings in postgres.conf\n>>>>\n>>>> Where is the problem?\n>>>>\n>>>> Thak you for any tips.\n>>>>\n>>>> best regards.\n>>>> Marek Fiala\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list\n>>>> ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>\n>>>\n>>>\n>>>--\n>>>Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>>To make changes to your subscription:\n>>>http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n", "msg_date": "Wed, 12 Nov 2008 18:25:59 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow full table update" }, { "msg_contents": "[email protected] wrote:\n> Hi,\n> \n> I've changed settings, \n> but with no effect on speed.\n> \n> I try explain query with this result\n> for 10.000 rows > update songs set views = 0 where sid > 20000 and sid < 30000\n> \n> Bitmap Heap Scan on songs (cost=151.59..6814.29 rows=8931 width=526) (actual time=4.848..167.855 rows=8945 loops=1)\n\nThis query says t is taking 167 milli-seconds, not 10 minutes as your\nfirst message said. Is this query actually slow?\n\n> \n> Recheck Cond: ((sid > 20000) AND (sid < 30000))\n> \n> -> Bitmap Index Scan on pk_songs2 (cost=0.00..151.59 rows=8931 width=0) (actual time=4.071..4.071 rows=9579 loops=1)\n> \n> Index Cond: ((sid > 20000) AND (sid < 30000))\n> \n> Is there a way to run this query on sigle throughpass with no Recheck Cond?\n\nOnly a sequential scan.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 12 Nov 2008 17:45:55 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "> >\n> > Recheck Cond: ((sid > 20000) AND (sid < 30000))\n> >\n> > -> Bitmap Index Scan on pk_songs2 (cost=0.00..151.59 rows=8931\n> width=0) (actual time=4.071..4.071 rows=9579 loops=1)\n> >\n> > Index Cond: ((sid > 20000) AND (sid < 30000))\n> >\n> > Is there a way to run this query on sigle throughpass with no Recheck\n> Cond?\n>\n> \"Recheck Cond\" is somewhat misleading here.\n\nBitmap Index Scan has almost void \"recheck\" impact in case the whole bitmap\nfits in work_mem. That means bitmap scan degrades when the number of rows in\ntable (not the total number of returned rows) is greater than\nwork_mem*1024*8. 
60'000 rows bitmap scan will require 60'000/8=7'500 bytes ~\n8Kbytes of memory to run without additional recheck, thus I do not believe\nit hurts you in this particular case\n\n\nRegards,\nVladimir Sitnikov\n", "msg_date": "Wed, 12 Nov 2008 09:58:35 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> [email protected] wrote:\n>> I try explain query with this result\n>> for 10.000 rows > update songs set views = 0 where sid > 20000 and sid < 30000\n>> \n>> Bitmap Heap Scan on songs (cost=151.59..6814.29 rows=8931 width=526) (actual time=4.848..167.855 rows=8945 loops=1)\n\n> This query says t is taking 167 milli-seconds, not 10 minutes as your\n> first message said. Is this query actually slow?\n\nThe explain plan tree only shows the time to fetch/compute the new rows,\nnot to actually perform the update, update indexes, or fire triggers.\nIf there is a big discrepancy then the extra time must be going into\none of those steps.\n\n8.1 does show trigger execution time separately, so the most obvious\nproblem (unindexed foreign key reference) seems to be excluded, unless\nthe OP just snipped that part of the output ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Nov 2008 15:54:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update " }, { "msg_contents": "On Mon, Nov 10, 2008 at 9:30 AM, <[email protected]> wrote:\n> Hi,\n>\n> I have table with cca 60.000 rows and\n> when I run query as:\n> Update table SET column=0;\n> after 10 minutes i must stop query, but it still running :(\n\nWhat does\n\nvacuum verbose table;\n\nsay? 
I'm wondering if it's gotten overly bloated.\n\nHow long does\n\nselect count(*) from table;\n\ntake to run (use \\timing to time it)\n", "msg_date": "Wed, 12 Nov 2008 13:55:52 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "hi,\n\n select count(*) from songs;\n count\n-------\n 54909\n(1 row)\n\nTime: 58.182 ms\n\nupdate songs set views = 0;\nUPDATE 54909\nTime: 101907.837 ms\ntime is actually less than 10 minutes, but it is still very long :(\n\nvacuum said>\n\nVACUUM VERBOSE songs;\nINFO: vacuuming \"public.songs\"\nINFO: index \"pk_songs2\" now contains 54909 row versions in 595 pages\nDETAIL: 193 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.06 sec.\nINFO: index \"fk_albums_aid_index\" now contains 54909 row versions in 1330 pages\nDETAIL: 193 index row versions were removed.\n812 index pages have been deleted, 812 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.04 sec.\nINFO: index \"fk_artists_artid_index\" now contains 54910 row versions in 628 pages\nDETAIL: 193 index row versions were removed.\n114 index pages have been deleted, 114 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.10 sec.\nINFO: index \"fk_users_uid_karaoke_index\" now contains 54910 row versions in 2352 pages\nDETAIL: 193 index row versions were removed.\n2004 index pages have been deleted, 2004 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.95 sec.\nINFO: index \"datum_tag_indx\" now contains 54910 row versions in 2083 pages\nDETAIL: 193 index row versions were removed.\n1728 index pages have been deleted, 1728 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.47 sec.\nINFO: index \"datum_video_indx\" now contains 54910 row versions in 1261 pages\nDETAIL: 193 index row versions were removed.\n826 index pages have been deleted, 826 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.06 sec.\nINFO: \"songs\": removed 193 row versions in 164 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: \"songs\": found 193 removable, 54909 nonremovable row versions in 6213 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 132969 unused item pointers.\n0 pages are entirely empty.\nCPU 0.07s/0.04u sec elapsed 1.74 sec.\nINFO: vacuuming \"pg_toast.pg_toast_28178\"\nINFO: index \"pg_toast_28178_index\" now contains 2700 row versions in 13 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_28178\": found 0 removable, 2700 nonremovable row versions in 645 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 88 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nTime: 1750.460 ms\n\nbest regards\nMarek Fiala\n______________________________________________________________\n> Od: [email protected]\n> Komu: [email protected]\n&gt; CC: [email protected]\n> Datum: 12.11.2008 21:55\n> Předmět: Re: [PERFORM] slow full table update\n>\n>On Mon, Nov 10, 2008 at 9:30 AM, <[email protected]> wrote:\n>> Hi,\n>>\n>> I have table with cca 60.000 rows and\n>> when I run query as:\n>> Update table SET column=0;\n>> after 10 minutes i must stop query, but it still running :(\n>\n>What does\n>\n>vacuum verbose table;\n>\n>say? 
I'm wondering if it's gotten overly bloated.\n>\n>How long does\n>\n>select count(*) from table;\n>\n>take to run (use timing to time it)\n>\n\n", "msg_date": "Wed, 12 Nov 2008 23:47:20 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow full table update" }, { "msg_contents": "This is the critical point. You have this line:\n\nThere were 132969 unused item pointers.\n\nWhich says there's 132k or so dead rows in your table. Which means\nvacuum / autovacuum isn't keeping up. Did you try and stop the update\nseveral times? Each time it starts then gets killed it creates dead\nrows.\n\nTry doing a vacuum full followed by a reindex OR a cluster on this\ntable and see if that helps.\n", "msg_date": "Wed, 12 Nov 2008 15:58:49 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "> This is the critical point. You have this line:\n> \n> There were 132969 unused item pointers.\n> \n> Which says there's 132k or so dead rows in your table. Which means\n> vacuum / autovacuum isn't keeping up. Did you try and stop the update\n> several times? Each time it starts then gets killed it creates dead\n> rows.\n\nTry to run just ANALYZE on the table and then run the\n\nSELECT relpages, reltuples FROM pg_class WHERE relname = 'table'\n\nagain. It should report about 20k of pages, i.e. 160MB. That might slow \nthe things down ;-)\n\n> Try doing a vacuum full followed by a reindex OR a cluster on this\n> table and see if that helps.\n\nWell, maybe the vacuum will fix the problem - have you executed the \nquery that took 167ms (according to the explain analyze output posted by \nyou) over a clean table? But I doubt the growth from 6.000 to 20.000 \nalone might cause degradation from 170ms to several minutes ...\n\nregards\nTomas\n", "msg_date": "Thu, 13 Nov 2008 01:13:23 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "\n> The explain plan tree only shows the time to fetch/compute the new rows,\n> not to actually perform the update, update indexes, or fire triggers.\n> If there is a big discrepancy then the extra time must be going into\n> one of those steps.\n> \n> 8.1 does show trigger execution time separately, so the most obvious\n> problem (unindexed foreign key reference) seems to be excluded, unless\n> the OP just snipped that part of the output ...\n\nYeah, that quite frequent problem with updates. Try to separate create a \ncopy of the table, i.e.\n\nCREATE TABLE test_table AS SELECT * FROM table;\n\nand try to execute the query on it.\n\nWhat tables do reference the original table using a foreign key? Do they \nhave indexes on the foreign key column? How large are there referencing \ntables? Are these tables updated heavily and vacuumed properly (i.e. \naren't they bloated with dead rows)?\n\nI'm not sure if the FK constraints are checked always, or just in case \nthe referenced column is updated. 
I guess the FK check is performed only \nin case of DELETE or when the value in the FK column is modified (but \nupdate of the primary key is not very frequent I guess).\n\nAre there any triggers and / or rules on the table?\n\nregards\nTomas\n", "msg_date": "Thu, 13 Nov 2008 01:20:11 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" }, { "msg_contents": "\n> update songs set views = 0;\n> UPDATE 54909\n> Time: 101907.837 ms\n> time is actually less than 10 minutes, but it is still very long :(\n\n\tWow.\n\ntest=> CREATE TABLE test (id SERIAL PRIMARY KEY, value INTEGER);\ntest=> INSERT INTO test (value) SELECT n FROM generate_series( 1,100000 ) \nAS n;\nTemps : 1706,495 ms\ntest=> UPDATE test SET value=0;\nTemps : 1972,420 ms\n\n\tNote this is 8.3.3 on a desktop PC with the database and xlog on a Linux \nSoftware RAID1 of rather slow drives (about 50 MB/s).\n\tAnyway your 10 minutes are really wrong.\n\n\tFirst thing to check is if there is a problem with your IO subsystem, try \nthe example queries above, you should get timings in the same ballpark. If \nyou get 10x slower than that, you have a problem.\n\n\tAre the rows large ? I would believe so, because a \"songs\" table will \nprobably contain things like artist, title, comments, and lots of other \ninformation in strings that are too small to be TOAST'ed. Perhaps your \nproblem is in index updates, too.\n\n\tSo, make a copy of the songs table, without any indices, and no foreign \nkeys :\n\n\tCREATE TABLE songs2 AS SELECT * FROM songs;\n\n\tThen try your UPDATE on this. How slow is it ?\n\n\tNow drop this table, and recreate it with the foreign keys. Test the \nupdate again.\n\tNow drop this table, and recreate it with the foreign keys and indexes. \nTest the update again.\n\n\tThis will give you some meaningful information.\n\n\tYou will probably update the 'views' column quite often, it will even \nprobably be the most often updated column in your application. In this \ncase, you could try moving it to a separate table with just (song_id, \nview), that way you will update a very small table.\n", "msg_date": "Sun, 16 Nov 2008 15:50:28 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow full table update" } ]
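The thread above ends with PFC's suggestion to move the heavily updated counter into its own narrow table. A rough sketch of that idea follows; the song_id name and integer type are assumptions, since the full schema of songs was never posted:

CREATE TABLE song_views (
    song_id integer PRIMARY KEY,   -- assumed key column; match it to the real primary key of songs
    views   integer NOT NULL DEFAULT 0
);

-- seed the counters once from the existing data
INSERT INTO song_views (song_id, views)
SELECT song_id, views FROM songs;

-- the frequent bulk reset now rewrites only a narrow table with a single index
UPDATE song_views SET views = 0;

With the counter split out, the wide songs rows and their six indexes listed in the VACUUM VERBOSE output no longer have to be rewritten every time views changes.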
[ { "msg_contents": "Hi All,\nThank you all in advance for your answers.\n \nMy application does bulk inserts on a table (50,000 rows concurrently by 10\nthreads), There are 3 indexes on the table whose update takes a lot of time.\n(Deleting those indexes speeds up inserts).\nOne thing I have noticed in my task manager is paging. I have got enough of\nmemory available on the system. The system cache is high. However I still\nsee a lot of paging going on. (PF Delta column for postgres processes in the\nwindows task manager).\n I tried to tweak various postgres settings without any results. Could any\none suggest me how can I force postgres to use up the available memory?\nI have changed following settings already\n \nshared_buffers\neffective_cache_size\nwork_mem\nwal_buffers\ncommit_delay\ncheckpoint_segments\n \nThanks,\nrangde.\n\n\n________________________________________________________________________\nThis email has been scanned for all known viruses by the MessageLabs Email Security Service and the Macro 4 plc internal virus protection system.\n________________________________________________________________________\n\n\n\n\nHi \nAll,\nThank you all in \nadvance for your answers.\n \nMy application does bulk inserts on a table (50,000 rows \nconcurrently by 10 threads), There are 3 indexes on the table whose update takes \na lot of time. (Deleting those indexes speeds up inserts).\nOne thing I have \nnoticed in my task manager is paging. I have got enough of memory available on \nthe system. The system cache is high. However I still see a lot of paging going \non. (PF Delta column for postgres processes in the windows task \nmanager).\n I tried to \ntweak various postgres settings without any results. Could any one suggest me \nhow can I force postgres to use up the available memory?\nI have changed \nfollowing settings already\n \nshared_buffers\neffective_cache_size\nwork_mem\nwal_buffers\ncommit_delay\ncheckpoint_segments\n \nThanks,\nrangde.\n\n________________________________________________________________________\nThis email has been scanned for all known viruses by the MessageLabs Email Security Service and the Macro 4 plc internal virus protection system.\n________________________________________________________________________", "msg_date": "Mon, 10 Nov 2008 17:14:10 -0000", "msg_from": "Anshul Dutta <[email protected]>", "msg_from_op": true, "msg_subject": "paging on windows" } ]
[ { "msg_contents": "Index is not used for\n\n is null\n\ncondition:\n\ncreate index makse_dokumnr_idx on makse(dokumnr);\nexplain select\n sum( summa)\n from MAKSE\n where dokumnr is null\n\n\"Aggregate (cost=131927.95..131927.96 rows=1 width=10)\"\n\" -> Seq Scan on makse (cost=0.00..131927.94 rows=1 width=10)\"\n\" Filter: (dokumnr IS NULL)\"\n\n\n\nTable makse contains 1200000 rows and about 800 rows with dokumnr is null so \nusing index is much faster that seq scan.\nHow to fix ?\n\nAndrus.\n\n\"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC \ni686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0, \npie-8.7.9)\"\n\n", "msg_date": "Tue, 11 Nov 2008 21:55:55 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Using index for IS NULL query" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n> Index is not used for\n> is null\n\n> How to fix ?\n\nUpdate to something newer than 8.1 (specifically, you'll need 8.3).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Nov 2008 15:45:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query " }, { "msg_contents": "Andrus <[email protected]> schrieb:\n\n> Index is not used for\n>\n> is null\n>\n> condition:\n>\n> create index makse_dokumnr_idx on makse(dokumnr);\n> explain select\n> sum( summa)\n> from MAKSE\n> where dokumnr is null\n>\n> \"Aggregate (cost=131927.95..131927.96 rows=1 width=10)\"\n> \" -> Seq Scan on makse (cost=0.00..131927.94 rows=1 width=10)\"\n> \" Filter: (dokumnr IS NULL)\"\n>\n>\n>\n> Table makse contains 1200000 rows and about 800 rows with dokumnr is null \n> so using index is much faster that seq scan.\n> How to fix ?\n\nCreate a partial index like below:\n\ntest=# create table foo ( i float);\nCREATE TABLE\nZeit: 1,138 ms\ntest=*# insert into foo select random() from generate_series(1,1000000);\nINSERT 0 1000000\ntest=*# insert into foo values (NULL);\nINSERT 0 1\ntest=*# create index idx_foo on foo(i) where i is null;\nCREATE INDEX\ntest=*# explain analyse select * from foo where i is null;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=5.51..4690.89 rows=5000 width=8) (actual\ntime=0.037..0.038 rows=1 loops=1)\n Recheck Cond: (i IS NULL)\n -> Bitmap Index Scan on idx_foo (cost=0.00..4.26 rows=5000 width=0)\n(actual time=0.033..0.033 rows=1 loops=1)\n Index Cond: (i IS NULL)\n Total runtime: 0.068 ms\n(5 Zeilen)\n\n\nMaybe there are other solutions...\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Tue, 11 Nov 2008 21:47:03 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query" }, { "msg_contents": "Tom Lane <[email protected]> schrieb:\n\n> \"Andrus\" <[email protected]> writes:\n> > Index is not used for\n> > is null\n> \n> > How to fix ?\n> \n> Update to something newer than 8.1 (specifically, you'll need 8.3).\n\nRight. 
For my example in the other mail:\n\ntest=*# create index idx_foo on foo(i);\nCREATE INDEX\ntest=*# explain analyse select * from foo where i is null;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=95.11..4780.49 rows=5000 width=8) (actual time=0.052..0.053 rows=1 loops=1)\n Recheck Cond: (i IS NULL)\n -> Bitmap Index Scan on idx_foo (cost=0.00..93.86 rows=5000 width=0) (actual time=0.047..0.047 rows=1 loops=1)\n Index Cond: (i IS NULL)\n Total runtime: 0.076 ms\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Tue, 11 Nov 2008 21:50:49 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query" }, { "msg_contents": "> Index is not used for\n> \n> is null\n> \n> condition:\n> \n> create index makse_dokumnr_idx on makse(dokumnr);\n> explain select\n> sum( summa)\n> from MAKSE\n> where dokumnr is null\n> \n> \"Aggregate (cost=131927.95..131927.96 rows=1 width=10)\"\n> \" -> Seq Scan on makse (cost=0.00..131927.94 rows=1 width=10)\"\n> \" Filter: (dokumnr IS NULL)\"\n >\n >\n> Table makse contains 1200000 rows and about 800 rows with dokumnr is \n> null so using index is much faster that seq scan.\n> How to fix ?\n\nYes, NULL values are not stored in the index, but you may create \nfunctional index on\n\n(CASE WHEN dokumnr IS NULL THEN -1 ELSE dokumnr END)\n\nand then use the same expression in the WHERE clause. You may replace \nthe '-1' value by something that's not used in the dokumnr column.\n\nregards\nTomas\n", "msg_date": "Tue, 11 Nov 2008 22:01:58 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query" }, { "msg_contents": "> Yes, NULL values are not stored in the index, but you may create functional\n> index on\n>\nAre you sure NULL values are not stored? btree, gist and bitmap index and\nsearch for NULL values.\n\nselect amname, amindexnulls, amsearchnulls from pg_am;\n\n amname | amindexnulls | amsearchnulls\n--------+--------------+---------------\n btree | t | t\n hash | f | f\n gist | t | t\n gin | f | f\n bitmap | t | t\n(5 rows)\n\n\nSincerely yours,\nVladimir Sitnikov\n\nYes, NULL values are not stored in the index, but you may create functional index on\nAre you sure NULL values are not stored? btree, gist and bitmap index and search for NULL values.select amname, amindexnulls, amsearchnulls from pg_am;\n amname | amindexnulls | amsearchnulls --------+--------------+---------------\n btree  | t            | t\n hash   | f            | f gist   | t            | t\n gin    | f            | f bitmap | t            | t\n(5 rows)Sincerely yours,Vladimir Sitnikov", "msg_date": "Tue, 11 Nov 2008 15:00:13 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query" }, { "msg_contents": "On Tue, Nov 11, 2008 at 4:00 PM, Vladimir Sitnikov\n<[email protected]> wrote:\n>\n>> Yes, NULL values are not stored in the index, but you may create\n>> functional index on\n>\n> Are you sure NULL values are not stored? 
btree, gist and bitmap index and\n> search for NULL values.\n\nIt's not that they're not stored, it's that before 8.3 pg didn't know\nhow to compare to them I believe. The standard trick was to create a\npartial index with \"where x is null\" on the table / column. 8.3 knows\nhow to compare them and doesn't need the partial index.\n", "msg_date": "Tue, 11 Nov 2008 16:34:42 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query" }, { "msg_contents": "Hello,\n\nI am doing some performances testing on Postgres & I discovered the\nfollowing behavior, when using 2 different ways of writing selects (but\ndoing the same aggregations at the end):\n1. test case 1, using outer join:\ncreate table test2 as \nselect\nsoj_session_log_id, pv_timestamp, vi_pv_id,a.item_id,\ncoalesce(sum(case when (bid_date<pv_timestamp and bid_date>=pv_timestamp -\nINTERVAL '3 day') then 1 else 0 end)) as recent_sales_3d1,\ncoalesce(sum(case when (bid_date<pv_timestamp and bid_date>=pv_timestamp -\nINTERVAL '7 day') then 1 else 0 end)) as recent_sales_7d1,\ncoalesce(sum(case when (bid_date<pv_timestamp and bid_date>=pv_timestamp -\nINTERVAL '14 day') then 1 else 0 end)) as recent_sales_14d1,\ncoalesce(sum(case when (bid_date<pv_timestamp and bid_date>=pv_timestamp -\nINTERVAL '30 day') then 1 else 0 end)) as recent_sales_30d1,\ncoalesce(sum(case when (bid_date<pv_timestamp and bid_date>=pv_timestamp -\nINTERVAL '60 day') then 1 else 0 end)) as recent_sales_60d1\nfrom bm_us_views_main_1609 a\nleft outer join bm_us_bids b on (b.item_id=a.item_id and\nb.bid_date<a.pv_timestamp and (b.bid_date>=a.pv_timestamp - INTERVAL '60\nday'))\nwhere a.item_type in (7,9) and qty>1\ngroup by soj_session_log_id, pv_timestamp, vi_pv_id, a.item_id;;\n\nThis query doesn't use any index according to the explain plan:\n\"HashAggregate (cost=672109.07..683054.81 rows=182429 width=49)\"\n\" -> Merge Left Join (cost=646489.83..668004.42 rows=182429 width=49)\"\n\" Merge Cond: (a.item_id = b.item_id)\"\n\" Join Filter: ((b.bid_date < a.pv_timestamp) AND (b.bid_date >=\n(a.pv_timestamp - '60 days'::interval)))\"\n\" -> Sort (cost=331768.62..332224.69 rows=182429 width=41)\"\n\" Sort Key: a.item_id\"\n\" -> Seq Scan on bm_us_views_main_1609 a\n(cost=0.00..315827.08 rows=182429 width=41)\"\n\" Filter: ((item_type = ANY ('{7,9}'::numeric[])) AND\n(qty > 1))\"\n\" -> Sort (cost=314669.01..320949.52 rows=2512205 width=19)\"\n\" Sort Key: b.item_id\"\n\" -> Seq Scan on bm_us_bids b (cost=0.00..47615.05\nrows=2512205 width=19)\"\n\n2. 
Test case 2, using sub queries:\ncreate table test2 as \nselect\nsoj_session_log_id, pv_timestamp, vi_pv_id,item_id,\ncoalesce((select count(*) from bm_us_bids b where b.item_id=a.item_id and\nbid_date<pv_timestamp and bid_date>=pv_timestamp - INTERVAL '3 day' group by\nitem_id ),0) as recent_sales_3d,\ncoalesce((select count(*) from bm_us_bids b where b.item_id=a.item_id and\nbid_date<pv_timestamp and bid_date>=pv_timestamp - INTERVAL '7 day' group by\nitem_id ),0) as recent_sales_7d,\ncoalesce((select count(*) from bm_us_bids b where b.item_id=a.item_id and\nbid_date<pv_timestamp and bid_date>=pv_timestamp - INTERVAL '14 day' group\nby item_id ),0) as recent_sales_14d,\ncoalesce((select count(*) from bm_us_bids b where b.item_id=a.item_id and\nbid_date<pv_timestamp and bid_date>=pv_timestamp - INTERVAL '30 day' group\nby item_id ),0) as recent_sales_30d,\ncoalesce((select count(*) from bm_us_bids b where b.item_id=a.item_id and\nbid_date<pv_timestamp and bid_date>=pv_timestamp - INTERVAL '60 day' group\nby item_id ),0) as recent_sales_60d\nfrom bm_us_views_main_1609 a\nwhere item_type in (7,9) and qty>1;\n\nThis query uses indexes according to the explain plan:\n\"Seq Scan on bm_us_views_main_1609 a (cost=0.00..8720230.77 rows=182429\nwidth=41)\"\n\" Filter: ((item_type = ANY ('{7,9}'::numeric[])) AND (qty > 1))\"\n\" SubPlan\"\n\" -> GroupAggregate (cost=0.00..9.21 rows=1 width=11)\"\n\" -> Index Scan using bm_us_bids_item_ix on bm_us_bids b\n(cost=0.00..9.20 rows=1 width=11)\"\n\" Index Cond: ((item_id = $0) AND (bid_date < $1) AND\n(bid_date >= ($1 - '60 days'::interval)))\"\n\" -> GroupAggregate (cost=0.00..9.21 rows=1 width=11)\"\n\" -> Index Scan using bm_us_bids_item_ix on bm_us_bids b\n(cost=0.00..9.20 rows=1 width=11)\"\n\" Index Cond: ((item_id = $0) AND (bid_date < $1) AND\n(bid_date >= ($1 - '30 days'::interval)))\"\n\" -> GroupAggregate (cost=0.00..9.21 rows=1 width=11)\"\n\" -> Index Scan using bm_us_bids_item_ix on bm_us_bids b\n(cost=0.00..9.20 rows=1 width=11)\"\n\" Index Cond: ((item_id = $0) AND (bid_date < $1) AND\n(bid_date >= ($1 - '14 days'::interval)))\"\n\" -> GroupAggregate (cost=0.00..9.21 rows=1 width=11)\"\n\" -> Index Scan using bm_us_bids_item_ix on bm_us_bids b\n(cost=0.00..9.20 rows=1 width=11)\"\n\" Index Cond: ((item_id = $0) AND (bid_date < $1) AND\n(bid_date >= ($1 - '7 days'::interval)))\"\n\" -> GroupAggregate (cost=0.00..9.21 rows=1 width=11)\"\n\" -> Index Scan using bm_us_bids_item_ix on bm_us_bids b\n(cost=0.00..9.20 rows=1 width=11)\"\n\" Index Cond: ((item_id = $0) AND (bid_date < $1) AND\n(bid_date >= ($1 - '3 days'::interval)))\"\n\nThe index bm_us_bids_item_ix is on columns item_id, bidder_id, bid_date\n\n\nQUESTION: Why the planner choose seq scan in the first case & indexes scan\nin the second case? In a more general way, I observed that the planner has\ndifficulties to select index scans & does in almost all the cases seq scan,\nwhen doing join queries. After investigations, it looks like when you join\ntable a with table b on a column x and y and you have an index on column x\nonly, the planner is not able to choose the index scan. You have to build\nthe index corresponding exactly to the join statement btw the 2 tables\n \nFor example,by creating an new index on item_id and bid_date, the planner\nhas been able to choose this last index in both cases. 
Would it be possible\nthat the planner can choose in any case the closest index for queries having\nouter join\n\nLast thing, I am running Postgres 8.3.4 on a Windows laptop having 3.5Gb\nRAM, 161Gb disk and dual core 2.5Gz processor\n\nRegards,\nJulien Theulier", "msg_date": "Wed, 12 Nov 2008 14:22:47 +0100", "msg_from": "\"Julien Theulier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Index usage with sub select or inner joins" }, { "msg_contents": "On Tue, 11 Nov 2008, Tom Lane wrote:\n>> Index is not used for\n>> is null\n>> How to fix ?\n>\n> Update to something newer than 8.1 (specifically, you'll need 8.3).\n\nOooh, that's useful to know. We can get rid of all our extra nulls \nindexes. Thanks.\n\nMatthew\n\n-- \nAs you approach the airport, you see a sign saying \"Beware - low\nflying airplanes\". There's not a lot you can do about that. Take \nyour hat off? -- Michael Flanders\n", "msg_date": "Wed, 12 Nov 2008 13:29:31 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query " }, { "msg_contents": "On Wed, Nov 12, 2008 at 02:22:47PM +0100, Julien Theulier wrote:\n> QUESTION: Why the planner choose seq scan in the first case & indexes scan\n> in the second case? In a more general way, I observed that the planner has\n> difficulties to select index scans & does in almost all the cases seq scan,\n> when doing join queries. After investigations, it looks like when you join\n> table a with table b on a column x and y and you have an index on column x\n> only, the planner is not able to choose the index scan. You have to build\n> the index corresponding exactly to the join statement btw the 2 tables\n\nShort, general answer: index scans aren't always faster than sequential\nscans, and the planner is smart enough to know that. Googling \"Why isn't\npostgresql using my index\" provides more detailed results, but in short,\nif it scans an index, it has to read pages from the index, and for all\nthe tuples it finds in the index, it has to read once again from the\nheap, whereas a sequential scan requires reading once from the heap. If\nyour query will visit most of the rows of the table, pgsql will choose a\nsequential scan over an index scan.\n\n- Josh / eggyknap", "msg_date": "Wed, 12 Nov 2008 06:54:05 -0700", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage with sub select or inner joins" }, { "msg_contents": "Hello, Joshua,\n\nI did different test cases and here are the results (numbers in seconds),\nusing (case sub queries) or not (case join) the index:\nRows (main table)\tOuter Join\t\tSub queries\nsetting\n1396163 rows\t39.2\t\t\t19.6\nwork_mem=256Mb\n3347443 rows \t72.2\t\t\t203.1\nwork_mem=256Mb\n3347443 rows \t70.3\t\t\t31.1\nwork_mem=1024Mb\n4321072 rows \t115\t\t\t554.9\nwork_mem=256Mb\n4321072 rows \t111\t\t\t583\nwork_mem=1024Mb\nAll outer joins where done without index uses\n\nTo force the use of the index for the first case (outer join), I have change\nthe seq_scan cost (from 1 to 2.5), it takes now only 6.1s for the outer join\non 1.4M rows. 
New explain plan below:\n\"HashAggregate (cost=457881.84..460248.84 rows=39450 width=49)\"\n\" -> Nested Loop Left Join (cost=0.00..456994.22 rows=39450 width=49)\"\n\" -> Seq Scan on bm_us_views_main_2608 a (cost=0.00..223677.45\nrows=39450 width=41)\"\n\" Filter: ((item_type = ANY ('{7,9}'::numeric[])) AND (qty >\n1))\"\n\" -> Index Scan using bm_us_bids_item_ix on bm_us_bids b\n(cost=0.00..5.65 rows=13 width=19)\"\n\" Index Cond: ((b.item_id = a.item_id) AND (b.bid_date <\na.pv_timestamp) AND (b.bid_date >= (a.pv_timestamp - '60 days'::interval)))\"\n\nIndex bm_us_bids_item_ix is on item_id, bidder_id (not used in the\ncondition) & bid_date\n\nWhat can be the recommendations on tuning the different costs so it can\nbetter estimate the seq scan & index scans costs? I think the issue is\nthere. But didn't find any figures helping to choose the correct parameters\naccording to cpu & disks speed\n\nRegards,\nJulien Theulier\n\n-----Message d'origine-----\nDe : Joshua Tolley [mailto:[email protected]] \nEnvoyé : mercredi 12 novembre 2008 14:54\nÀ : Julien Theulier\nCc : [email protected]\nObjet : Re: [PERFORM] Index usage with sub select or inner joins\n\nOn Wed, Nov 12, 2008 at 02:22:47PM +0100, Julien Theulier wrote:\n> QUESTION: Why the planner choose seq scan in the first case & indexes \n> scan in the second case? In a more general way, I observed that the \n> planner has difficulties to select index scans & does in almost all \n> the cases seq scan, when doing join queries. After investigations, it \n> looks like when you join table a with table b on a column x and y and \n> you have an index on column x only, the planner is not able to choose \n> the index scan. You have to build the index corresponding exactly to \n> the join statement btw the 2 tables\n\nShort, general answer: index scans aren't always faster than sequential\nscans, and the planner is smart enough to know that. Googling \"Why isn't\npostgresql using my index\" provides more detailed results, but in short, if\nit scans an index, it has to read pages from the index, and for all the\ntuples it finds in the index, it has to read once again from the heap,\nwhereas a sequential scan requires reading once from the heap. If your query\nwill visit most of the rows of the table, pgsql will choose a sequential\nscan over an index scan.\n\n- Josh / eggyknap\n\n", "msg_date": "Wed, 12 Nov 2008 16:09:35 +0100", "msg_from": "\"Julien Theulier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage with sub select or outer joins" }, { "msg_contents": "Well, you're obviously right - I didn't know this. I guess I've found \nthat the index is not used for null values, and deduced somehow that \nNULL values are not stored in the index.\n\nThanks, it's nice to find out a 'bug' before it's too late :-)\n\nregards\nTomas\n\n> Are you sure NULL values are not stored? btree, gist and bitmap index \n> and search for NULL values.\n> \n> select amname, amindexnulls, amsearchnulls from pg_am;\n> \n> amname | amindexnulls | amsearchnulls\n> --------+--------------+---------------\n> btree | t | t\n> hash | f | f\n> gist | t | t\n> gin | f | f\n> bitmap | t | t\n> (5 rows)\n> \n> \n> Sincerely yours,\n> Vladimir Sitnikov\n\n", "msg_date": "Thu, 13 Nov 2008 01:27:31 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for IS NULL query" } ]
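Julien's closing question about which cost settings to adjust goes unanswered in the thread; the planner knobs usually involved are shown below with purely illustrative values, reusing the small foo table from Andreas' example rather than Julien's real schema:

-- per-session experiment; move values into postgresql.conf only after
-- EXPLAIN ANALYZE confirms they help across the real workload
SET random_page_cost = 2.0;            -- default 4.0; lower it when most of the data is cached
SET effective_cache_size = '2GB';      -- roughly shared_buffers plus the OS file cache
EXPLAIN ANALYZE SELECT * FROM foo WHERE i IS NULL;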
[ { "msg_contents": "I've been searching for performance metrics and tweaks for a few weeks now. I'm trying to determine if the length of time to process my queries is accurate or not and I'm having a difficult time determining that. I know postgres performance is very dependent on hardware and settings and I understand how difficult it is to tackle. However, I was wondering if I could get some feedback based on my results please.\n\nThe database is running on a dual-core 2GHz Opteron processor with 8GB of RAM. The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad for Postgres, but moving the database to another server didn't change performance at all). Some of the key parameters from postgresql.conf are:\n\nmax_connections = 100\nshared_buffers = 16MB\nwork_mem = 64MB\neverything else is set to the default\n\nOne of my tables has 660,000 records and doing a SELECT * from that table (without any joins or sorts) takes 72 seconds. Ordering the table based on 3 columns almost doubles that time to an average of 123 seconds. To me, those numbers are crazy slow and I don't understand why the queries are taking so long. The tables are UTF-8 encode and contain a mix of languages (English, Spanish, etc). I'm running the query from pgadmin3 on a remote host. The server has nothing else running on it except the database.\n\nAs a test I tried splitting up the data across a number of other tables. I ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join the results together. This was even slower, taking an average of 103 seconds to complete the generic select all query.\n\nI'm convinced something is wrong, I just can't pinpoint where it is. I can provide any other information necessary. If anyone has any suggestions it would be greatly appreciated. \n\n\n\n \nI've been searching for performance metrics and tweaks for a few weeks now. I'm trying to determine if the length of time to process my queries is accurate or not and I'm having a difficult time determining that. I know postgres performance is very dependent on hardware and settings and I understand how difficult it is to tackle. However, I was wondering if I could get some feedback based on my results please.The database is running on a dual-core 2GHz Opteron processor with 8GB of RAM. The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad for Postgres, but moving the database to another server didn't change performance at all). Some of the key parameters from postgresql.conf are:max_connections = 100shared_buffers = 16MBwork_mem = 64MBeverything else is set\n to the defaultOne of my tables has 660,000 records and doing a SELECT * from that table (without any joins or sorts) takes 72 seconds. Ordering the table based on 3 columns almost doubles that time to an average of 123 seconds. To me, those numbers are crazy slow and I don't understand why the queries are taking so long. The tables are UTF-8 encode and contain a mix of languages (English, Spanish, etc). I'm running the query from pgadmin3 on a remote host. The server has nothing else running on it except the database.As a test I tried splitting up the data across a number of other tables. I ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join the results together. This was even slower, taking an average of 103 seconds to complete the generic select all query.I'm convinced something is wrong, I just can't pinpoint where it is. I can provide any other information necessary. 
If anyone has any suggestions it\n would be greatly appreciated.", "msg_date": "Wed, 12 Nov 2008 08:27:46 -0800 (PST)", "msg_from": "- - <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Question" }, { "msg_contents": "There are a few things you didn't mention...\n\nFirst off, what is the context this database is being used in? Is it the\nbackend for a web server? Data warehouse? Etc?\n\nSecond, you didn't mention the use of indexes. Do you have any indexes on\nthe table in question, and if so, does EXPLAIN ANALYZE show the planner\nutilizing the index(es)?\n\nThird, you have 8 GB of RAM on a dedicated machine. Consider upping the\nmemory settings in postgresql.conf. For instance, on my data warehouse\nmachines (8 GB RAM each) I have shared_buffers set to almost 2 GB and\neffective_cache_size set to nearly 5.5 GB. (This is dependent on how you're\nutilizing this database, so don't blindly set these values!)\n\nLast, you didn't mention what RAID level the other server you tested this on\nwas running.\n\nOn Wed, Nov 12, 2008 at 10:27 AM, - - <[email protected]> wrote:\n\n> I've been searching for performance metrics and tweaks for a few weeks now.\n> I'm trying to determine if the length of time to process my queries is\n> accurate or not and I'm having a difficult time determining that. I know\n> postgres performance is very dependent on hardware and settings and I\n> understand how difficult it is to tackle. However, I was wondering if I\n> could get some feedback based on my results please.\n>\n> The database is running on a dual-core 2GHz Opteron processor with 8GB of\n> RAM. The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad\n> for Postgres, but moving the database to another server didn't change\n> performance at all). Some of the key parameters from postgresql.conf are:\n>\n> max_connections = 100\n> shared_buffers = 16MB\n> work_mem = 64MB\n> everything else is set to the default\n>\n> One of my tables has 660,000 records and doing a SELECT * from that table\n> (without any joins or sorts) takes 72 seconds. Ordering the table based on 3\n> columns almost doubles that time to an average of 123 seconds. To me, those\n> numbers are crazy slow and I don't understand why the queries are taking so\n> long. The tables are UTF-8 encode and contain a mix of languages (English,\n> Spanish, etc). I'm running the query from pgadmin3 on a remote host. The\n> server has nothing else running on it except the database.\n>\n> As a test I tried splitting up the data across a number of other tables. I\n> ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join\n> the results together. This was even slower, taking an average of 103 seconds\n> to complete the generic select all query.\n>\n> I'm convinced something is wrong, I just can't pinpoint where it is. I can\n> provide any other information necessary. If anyone has any suggestions it\n> would be greatly appreciated.\n>\n>\n\n\n-- \nComputers are like air conditioners...\nThey quit working when you open Windows.\n\nThere are a few things you didn't mention...First off, what is the context this database is being used in?  Is it the backend for a web server?  Data warehouse?  Etc?Second, you didn't mention the use of indexes.  Do you have any indexes on the table in question, and if so, does EXPLAIN ANALYZE show the planner utilizing the index(es)?\nThird, you have 8 GB of RAM on a dedicated machine.  Consider upping the memory settings in postgresql.conf.  
For instance, on my data warehouse machines (8 GB RAM each) I have shared_buffers set to almost 2 GB and effective_cache_size set to nearly 5.5 GB.  (This is dependent on how you're utilizing this database, so don't blindly set these values!)\nLast, you didn't mention what RAID level the other server you tested this on was running.On Wed, Nov 12, 2008 at 10:27 AM, - - <[email protected]> wrote:\nI've been searching for performance metrics and tweaks for a few weeks now. I'm trying to determine if the length of time to process my queries is accurate or not and I'm having a difficult time determining that. I know postgres performance is very dependent on hardware and settings and I understand how difficult it is to tackle. However, I was wondering if I could get some feedback based on my results please.\nThe database is running on a dual-core 2GHz Opteron processor with 8GB of RAM. The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad for Postgres, but moving the database to another server didn't change performance at all). Some of the key parameters from postgresql.conf are:\nmax_connections = 100shared_buffers = 16MBwork_mem = 64MBeverything else is set\n to the defaultOne of my tables has 660,000 records and doing a SELECT * from that table (without any joins or sorts) takes 72 seconds. Ordering the table based on 3 columns almost doubles that time to an average of 123 seconds. To me, those numbers are crazy slow and I don't understand why the queries are taking so long. The tables are UTF-8 encode and contain a mix of languages (English, Spanish, etc). I'm running the query from pgadmin3 on a remote host. The server has nothing else running on it except the database.\nAs a test I tried splitting up the data across a number of other tables. I ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join the results together. This was even slower, taking an average of 103 seconds to complete the generic select all query.\nI'm convinced something is wrong, I just can't pinpoint where it is. I can provide any other information necessary. If anyone has any suggestions it\n would be greatly appreciated. \n-- Computers are like air conditioners...They quit working when you open Windows.", "msg_date": "Wed, 12 Nov 2008 10:48:21 -0600", "msg_from": "\"J Sisson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question" }, { "msg_contents": "> max_connections = 100\n> shared_buffers = 16MB\n> work_mem = 64MB\n> everything else is set to the default\n\nOK, but what about effective_cache_size for example?\n\nAnyway, we need more information about the table itself - the number of\nrows is nice, but it does not say how large the table is. The rows might\nbe small (say 100B each) or large (say several kilobytes), affecting the\namount of data to be read.\n\nWe need to know the structure of the table, and the output of the\nfollowing commands:\n\nANALYZE table;\nSELECT relpages, reltuples FROM pg_class WHERE relname = 'table';\nEXPLAIN SELECT * FROM table;\n\n>\n> One of my tables has 660,000 records and doing a SELECT * from that table\n> (without any joins or sorts) takes 72 seconds. Ordering the table based on\n> 3 columns almost doubles that time to an average of 123 seconds. To me,\n> those numbers are crazy slow and I don't understand why the queries are\n> taking so long. The tables are UTF-8 encode and contain a mix of languages\n> (English, Spanish, etc). I'm running the query from pgadmin3 on a remote\n> host. 
The server has nothing else running on it except the database.\n>\n> As a test I tried splitting up the data across a number of other tables. I\n> ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join\n> the results together. This was even slower, taking an average of 103\n> seconds to complete the generic select all query.\n\nWell, splitting the tables just to read all of them won't help. It will\nmake the problem even worse, due to the necessary processing (UNION ALL).\n\nregards\nTomas\n\n", "msg_date": "Wed, 12 Nov 2008 17:56:29 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance Question" }, { "msg_contents": "Incrementing shared_buffers to 1024MB and set effective_cache_size to 6000MB\nand test again.\nTo speed up sort operations, increase work_mem till you notice an\nimprovement.\nPlay with those settings with different values.\n \n\n\n _____ \n\nDe: [email protected]\n[mailto:[email protected]] En nombre de - -\nEnviado el: Miércoles, 12 de Noviembre de 2008 14:28\nPara: [email protected]\nAsunto: [PERFORM] Performance Question\n\n\nI've been searching for performance metrics and tweaks for a few weeks now.\nI'm trying to determine if the length of time to process my queries is\naccurate or not and I'm having a difficult time determining that. I know\npostgres performance is very dependent on hardware and settings and I\nunderstand how difficult it is to tackle. However, I was wondering if I\ncould get some feedback based on my results please.\n\nThe database is running on a dual-core 2GHz Opteron processor with 8GB of\nRAM. The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad\nfor Postgres, but moving the database to another server didn't change\nperformance at all). Some of the key parameters from postgresql.conf are:\n\nmax_connections = 100\nshared_buffers = 16MB\nwork_mem = 64MB\neverything else is set to the default\n\nOne of my tables has 660,000 records and doing a SELECT * from that table\n(without any joins or sorts) takes 72 seconds. Ordering the table based on 3\ncolumns almost doubles that time to an average of 123 seconds. To me, those\nnumbers are crazy slow and I don't understand why the queries are taking so\nlong. The tables are UTF-8 encode and contain a mix of languages (English,\nSpanish, etc). I'm running the query from pgadmin3 on a remote host. The\nserver has nothing else running on it except the database.\n\nAs a test I tried splitting up the data across a number of other tables. I\nran 10 queries (to correspond with the 10 tables) with a UNION ALL to join\nthe results together. This was even slower, taking an average of 103 seconds\nto complete the generic select all query.\n\nI'm convinced something is wrong, I just can't pinpoint where it is. I can\nprovide any other information necessary. If anyone has any suggestions it\nwould be greatly appreciated. \n\n\n\n\n\n\n\n\n\nIncrementing shared_buffers to 1024MB and \nset effective_cache_size to 6000MB and test again.\nTo speed up sort operations, increase work_mem till you \nnotice an improvement.\nPlay with those settings with different \nvalues.\n \n\n\n\nDe: [email protected] \n [mailto:[email protected]] En nombre de - \n -Enviado el: Miércoles, 12 de Noviembre de 2008 \n 14:28Para: [email protected]: \n [PERFORM] Performance Question\n\n\nI've been searching for performance metrics and tweaks for a few weeks \n now. 
I'm trying to determine if the length of time to process my queries is \n accurate or not and I'm having a difficult time determining that. I know \n postgres performance is very dependent on hardware and settings and I \n understand how difficult it is to tackle. However, I was wondering if I could \n get some feedback based on my results please.The database is running \n on a dual-core 2GHz Opteron processor with 8GB of RAM. The drives are 10K RPM \n 146GB drives in RAID 5 (I've read RAID 5 is bad for Postgres, but moving the \n database to another server didn't change performance at all). Some of the key \n parameters from postgresql.conf are:max_connections = \n 100shared_buffers = 16MBwork_mem = 64MBeverything else is set to \n the defaultOne of my tables has 660,000 records and doing a SELECT * \n from that table (without any joins or sorts) takes 72 seconds. Ordering the \n table based on 3 columns almost doubles that time to an average of 123 \n seconds. To me, those numbers are crazy slow and I don't understand why the \n queries are taking so long. The tables are UTF-8 encode and contain a mix of \n languages (English, Spanish, etc). I'm running the query from pgadmin3 on a \n remote host. The server has nothing else running on it except the \n database.As a test I tried splitting up the data across a number of \n other tables. I ran 10 queries (to correspond with the 10 tables) with a UNION \n ALL to join the results together. This was even slower, taking an average of \n 103 seconds to complete the generic select all query.I'm convinced \n something is wrong, I just can't pinpoint where it is. I can provide any other \n information necessary. If anyone has any suggestions it would be greatly \n appreciated.", "msg_date": "Wed, 12 Nov 2008 15:16:11 -0200", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question" }, { "msg_contents": "- - <[email protected]> writes:\n> One of my tables has 660,000 records and doing a SELECT * from that table (without any joins or sorts) takes 72 seconds. Ordering the table based on 3 columns almost doubles that time to an average of 123 seconds. To me, those numbers are crazy slow and I don't understand why the queries are taking so long. The tables are UTF-8 encode and contain a mix of languages (English, Spanish, etc). I'm running the query from pgadmin3 on a remote host. The server has nothing else running on it except the database.\n\npgadmin has got its own performance issues with large select results.\nAre you sure the bulk of the time isn't being spent on the client side?\nWatching top or vmstat on both machines would probably tell much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Nov 2008 15:57:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question " }, { "msg_contents": "On Wed, Nov 12, 2008 at 9:27 AM, - - <[email protected]> wrote:\n> I've been searching for performance metrics and tweaks for a few weeks now.\n> I'm trying to determine if the length of time to process my queries is\n> accurate or not and I'm having a difficult time determining that. I know\n> postgres performance is very dependent on hardware and settings and I\n> understand how difficult it is to tackle. However, I was wondering if I\n> could get some feedback based on my results please.\n>\n> The database is running on a dual-core 2GHz Opteron processor with 8GB of\n> RAM. 
The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad\n> for Postgres, but moving the database to another server didn't change\n> performance at all). Some of the key parameters from postgresql.conf are:\n\nI'm not sure what you mean. Did you move it to another server with a\nsingle drive? A 100 drive RAID-10 array with a battery backed caching\ncontroller? There's a lot of possibility in \"another server\".\n\n>\n> max_connections = 100\n> shared_buffers = 16MB\n\nWAY low. try 512M to 2G on a machine that big.\n\n> work_mem = 64MB\n\nacceptable. For 100 clients, if each did a sort you'd need 6.4Gig of\nfree ram, but since the chances of all 100 clients doing a sort that\nbig at the same time are small, you're probably safe.\n\n>\n> One of my tables has 660,000 records and doing a SELECT * from that table\n> (without any joins or sorts) takes 72 seconds. Ordering the table based on 3\n> columns almost doubles that time to an average of 123 seconds. To me, those\n\nHow wide is this table? IF it's got 300 columns, then it's gonna be a\nlot slower than if it has 10 columns.\n\nTry running your query like this:\n\n\\timing\nselect count(*) from (my big query goes here) as a;\n\nand see how long it takes. This will remove the network effect of\ntransferring the data. If that runs fast enough, then the real\nproblem is that your client is waiting til it gets all the data to\ndisplay it.\n", "msg_date": "Wed, 12 Nov 2008 19:26:02 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question" }, { "msg_contents": "On Wed, Nov 12, 2008 at 8:57 PM, Tom Lane <[email protected]> wrote:\n> - - <[email protected]> writes:\n>> One of my tables has 660,000 records and doing a SELECT * from that table (without any joins or sorts) takes 72 seconds. Ordering the table based on 3 columns almost doubles that time to an average of 123 seconds. To me, those numbers are crazy slow and I don't understand why the queries are taking so long. The tables are UTF-8 encode and contain a mix of languages (English, Spanish, etc). I'm running the query from pgadmin3 on a remote host. The server has nothing else running on it except the database.\n>\n> pgadmin has got its own performance issues with large select results.\n\nThey were fixed a couple of years ago. We're essentially at the mercy\nof libpq now.\n\n\n-- \nDave Page\nEnterpriseDB UK: http://www.enterprisedb.com\n", "msg_date": "Thu, 13 Nov 2008 08:55:04 +0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question" }, { "msg_contents": "On Wed, Nov 12, 2008 at 11:27 AM, - - <[email protected]> wrote:\n> I've been searching for performance metrics and tweaks for a few weeks now.\n> I'm trying to determine if the length of time to process my queries is\n> accurate or not and I'm having a difficult time determining that. I know\n> postgres performance is very dependent on hardware and settings and I\n> understand how difficult it is to tackle. However, I was wondering if I\n> could get some feedback based on my results please.\n>\n> The database is running on a dual-core 2GHz Opteron processor with 8GB of\n> RAM. The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad\n> for Postgres, but moving the database to another server didn't change\n> performance at all). 
Some of the key parameters from postgresql.conf are:\n>\n> max_connections = 100\n> shared_buffers = 16MB\n> work_mem = 64MB\n> everything else is set to the default\n>\n> One of my tables has 660,000 records and doing a SELECT * from that table\n> (without any joins or sorts) takes 72 seconds. Ordering the table based on 3\n> columns almost doubles that time to an average of 123 seconds. To me, those\n> numbers are crazy slow and I don't understand why the queries are taking so\n> long. The tables are UTF-8 encode and contain a mix of languages (English,\n> Spanish, etc). I'm running the query from pgadmin3 on a remote host. The\n> server has nothing else running on it except the database.\n>\n> As a test I tried splitting up the data across a number of other tables. I\n> ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join\n> the results together. This was even slower, taking an average of 103 seconds\n> to complete the generic select all query.\n>\n> I'm convinced something is wrong, I just can't pinpoint where it is. I can\n> provide any other information necessary. If anyone has any suggestions it\n> would be greatly appreciated.\n\nMaybe there is a lot of dead rows? Do a\nVACUUM VERBOSE;\n\nThat performance is quite slow unless the rows are really big (you\nhave huge text or bytea columns). What is the average row size in\nbytes? Try running the following command as a benchmark:\n\nselect generate_series(1,500000);\n\non my imac that takes about 600ms.\n\nmerlin\n", "msg_date": "Thu, 13 Nov 2008 08:05:55 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question" }, { "msg_contents": "\n> I've been searching for performance metrics and tweaks for a few weeks \n> now. I'm trying to determine if the length of time to process my queries \n> is accurate or not and I'm having a difficult time determining that. I \n> know postgres performance is very dependent on hardware and settings and \n> I understand how difficult it is to tackle. However, I was wondering if \n> I could get some feedback based on my results please.\n\n\tWell, the simplest thing is to measure the time it takes to process a \nquery, but :\n\n\t- EXPLAIN ANALYZE will always report a longer time than the reality, \nbecause instrumenting the query takes time. For instance, EXPLAIN ANALYZE \non a count(*) on a query could take more time to count how many times the \n\"count\" aggregate is called and how much time is spent in it, than to \nactually compute the aggregate... This is because it takes much longer to \nmeasure the time it takes to call \"count\" on a row (syscalls...) than it \ntakes to increment the count.\n\tThis is not a problem as long as you are aware of it, and the information \nprovided by EXPLAIN ANALYZE is very valuable.\n\n\t- Using \\timing in psql is also a good way to examine queries, but if \nyour query returns lots of results, the time it takes for the client to \nprocess those results will mess with your measurements. In this case a \nsimple : SELECT sum(1) FROM (your query) can provide less polluted \ntimings. Remember you are not that interested in client load : you can \nalways add more webservers, but adding more database servers is a lot more \ndifficult.\n\n\t- You can add some query logging in your application (always a good idea \nIMHO). For instance, the administrator (you) could see a list of queries \nat the bottom of the page with the time it takes to run them. 
In that \ncase, keep in mind that any load will add randomness to this measurements. \nFor instance, when you hit F5 in your browser, of the webserver and \ndatabase run on the same machine as the browser, the browser's CPU usage \ncan make one of your queries appear to take up to half a second... even if \nit takes, in reality, half a millisecond... So, average.\n\tYou could push the idea further. Sometimes I log the parameterized query \n(without args), the args separately, and the query time, so I can get \naverage timings for things like \"SELECT stuff FROM table WHERE column=$1\", \nnot get a zillion separate queries depending on the parameters. Such \nlogging can destroy your performance, though, use with care.\n\n\tOF COURSE YOU SHOULD MEASURE WHAT IS RELEVANT, that is, queries that your \napplication uses.\n\n> The database is running on a dual-core 2GHz Opteron processor with 8GB \n> of RAM.\n\n\t8GB. 64 bits I presume ?\n\n> The drives are 10K RPM 146GB drives in RAID 5 (I've read RAID 5 is bad \n> for Postgres, but moving the database to another server didn't change \n> performance at all).\n\n\tRAID5 = good for reads, and large writes.\n\tRAID5 = hell for small random writes.\n\tDepends on your load...\n\t\n> shared_buffers = 16MB\n\n\tThat's a bit small IMHO. (try 2 GB).\n\n> work_mem = 64MB\n> everything else is set to the default\n>\n> One of my tables has 660,000 records and doing a SELECT * from that \n> table (without any joins or sorts) takes 72 seconds.\n\n\tWell, sure, but why would you do such a thing ? I mean, I don't know your \nrow size, but say it is 2 KB, you just used 1.5 GB of RAM on the client \nand on the server. Plus of course transferring all this data over your \nnetwork connection. If client and server are on the same machine, you just \nzapped 3 GB of RAM. I hope you don't do too many of those concurrently...\n\tThis is never going to be fast and it is never going to be a good \nperformance metric.\n\n\tIf you need to pull 600.000 rows from a table, use a CURSOR, and pull \nthem in batches of say, 1000.\n\tThen you will use 600 times less RAM. I hope you have gigabit ethernet \nthough. Network and disk IO will be your main bottleneck.\n\n\tIf you don't need to pull 600.000 rows from a table, well then, don't do \nit.\n\n\tIf you're using a client app to display the results, well, how long does \nit take to display 600.000 rows in a GUI box ?...\n\n> Ordering the table based on 3 columns almost doubles that time to an \n> average of 123 seconds.\n\n\tSame as above, if your rows are small, say 100 bytes, you're sorting 66 \nmegabytes, which would easily be done in RAM, but you specified work_mem \ntoo small, so it is done on disk, with several passes. If your rows are \nlarge, well you're facing a multi gigabyte disksort with only 64 MB of \nworking memory, so it's really going to take lots of passes.\n\n\tIf you often need to pull 600.000 rows from a table in a specific order, \ncreate an index on the column, use a CURSOR, and pull them in batches of \nsay, 1000.\n\tIf you seldom need to, don't create an index but do use a CURSOR, and \npull them in batches of say, 1000.\n\tIf you don't need to pull 600.000 rows from a table in a specific order, \nwell then, don't do it.\n\n> To me, those numbers are crazy slow and I don't understand why the \n> queries are taking so long. The tables are UTF-8 encode and contain a \n> mix of languages (English, Spanish, etc). I'm running the query from \n> pgadmin3 on a remote host. 
The server has nothing else running on it \n> except the database.\n\n\tOK, I presume you are sorting UNICODE strings (which is also slower than \nbinary compare) so in this case you should really try to minimize the \nnumber of string comparisons which means using a much larger work_mem.\n\n> I'm convinced something is wrong, I just can't pinpoint where it is. I \n> can provide any other information necessary. If anyone has any \n> suggestions it would be greatly appreciated.\n\n\tWell, the big questions are :\n\n\t- do you need to run this query often ?\n\t- what do you use it for ?\n\t- how many bytes does it weigh ?\n\n\tUntil you answer that, it is difficult to help...\n\n\n\n", "msg_date": "Sun, 16 Nov 2008 16:20:04 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Question" } ]
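A concrete version of the cursor batching PFC recommends above; big_table and its columns are placeholders because the original poster never shared the table definition:

BEGIN;
DECLARE big_scan CURSOR FOR
    SELECT * FROM big_table ORDER BY col1, col2, col3;
FETCH FORWARD 1000 FROM big_scan;   -- repeat until a fetch returns no rows
FETCH FORWARD 1000 FROM big_scan;
CLOSE big_scan;
COMMIT;

Each FETCH keeps only 1000 rows on the client at a time, so neither side has to hold the whole 660,000-row result set in memory at once.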
[ { "msg_contents": "There are columns\n kuupaev date, cr char(10), db char(10)\n and regular indexes for all those fields.\nbilkaib table contains large number of rows.\n\nThe following query takes too much time.\nHow to make it faster ?\nI think PostgreSql should use multiple indexes as bitmaps to speed it.\n\nI can re-write this query in any way or split to multiple statements if this\nmakes it faster.\n\nAndrus.\n\nexplain analyze select max(kuupaev) from bilkaib where\nkuupaev<=date'2008-11-01' and (cr='00' or db='00')\n\n\"Result (cost=339.75..339.76 rows=1 width=0) (actual\ntime=52432.256..52432.260 rows=1 loops=1)\"\n\" InitPlan\"\n\" -> Limit (cost=0.00..339.75 rows=1 width=4) (actual\ntime=52432.232..52432.236 rows=1 loops=1)\"\n\" -> Index Scan Backward using bilkaib_kuupaev_idx on bilkaib\n(cost=0.00..1294464.73 rows=3810 width=4) (actual time=52432.222..52432.222\nrows=1 loops=1)\"\n\" Index Cond: (kuupaev <= '2008-11-01'::date)\"\n\" Filter: ((kuupaev IS NOT NULL) AND ((cr = '00'::bpchar) OR\n(db = '00'::bpchar)))\"\n\"Total runtime: 52432.923 ms\"\n\n\"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC\ni686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,\npie-8.7.9)\"\n\n", "msg_date": "Wed, 12 Nov 2008 19:02:10 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "Firstly, please upgrade to Postgres 8.3 if possible.\n\nOn Wed, 12 Nov 2008, Andrus wrote:\n> There are columns\n> kuupaev date, cr char(10), db char(10)\n> and regular indexes for all those fields.\n\nCreate a single index on (cr, db, datecol).\n\nMatthew\n\n-- \nThose who do not understand Unix are condemned to reinvent it, poorly.\n -- Henry Spencer\n", "msg_date": "Wed, 12 Nov 2008 17:26:14 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "On Wed, Nov 12, 2008 at 9:02 AM, Andrus <[email protected]> wrote:\n\n> There are columns\n> kuupaev date, cr char(10), db char(10)\n> and regular indexes for all those fields.\n> bilkaib table contains large number of rows.\n>\n> The following query takes too much time.\n> How to make it faster ?\n> I think PostgreSql should use multiple indexes as bitmaps to speed it.\n\nI am afraid I do not see a way to use bitmaps to get any improvement here:\nthe server will still need to read the whole indices to figure out the\nanswer.\n\nI suggest you to create two more indices:\n\ncreate index date_with_zero_cr on bilkaib(date) where cr='00';\ncreate index date_with_zero_db on bilkaib(date) where db='00';\n\nAnd rewrite query as follows:\nselect greatest(\n (select max(date) from bilkaib where datecol<=date'2008-11-01' and\ncr='00'),\n (select max(date) from bilkaib where datecol<=date'2008-11-01' and\ndb='00'))\n\n\nRegards,\nVladimir Sitnikov\n\nOn Wed, Nov 12, 2008 at 9:02 AM, Andrus <[email protected]> wrote:\nThere are columns\nkuupaev date,  cr char(10), db char(10)\nand regular indexes  for all those fields.\nbilkaib table contains large number of rows.\n\nThe following query takes too much time.\nHow to make it faster ?\nI think PostgreSql should use multiple indexes as bitmaps to speed it.I am afraid I do not see a way to use bitmaps to get any improvement here: the server will still need to read the 
whole indices to figure out the answer.\nI suggest you to create two more indices:create index date_with_zero_cr on bilkaib(date) where cr='00';\ncreate index date_with_zero_db on bilkaib(date) where db='00';And rewrite query as follows:select greatest(\n   (select max(date) from bilkaib where datecol<=date'2008-11-01' and cr='00'),    (select max(date) from bilkaib where datecol<=date'2008-11-01' and db='00'))\n\nRegards,Vladimir Sitnikov", "msg_date": "Wed, 12 Nov 2008 09:28:53 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "On Wed, 12 Nov 2008, Vladimir Sitnikov wrote:\n> And rewrite query as follows:\n> select greatest(\n>    (select max(date) from bilkaib where datecol<=date'2008-11-01' and cr='00'),\n>    (select max(date) from bilkaib where datecol<=date'2008-11-01' and db='00'))\n\nOops, yes, I missed the \"OR\" in the query. This rewrite is good - my \nsuggested index would not have helped.\n\n> I suggest you to create two more indices:\n> \n> create index date_with_zero_cr on bilkaib(date) where cr='00';\n> create index date_with_zero_db on bilkaib(date) where db='00';\n\nAlternatively if you create an index on (cr, bilkaib) and one on (db, \nbilkaib) then you will be able to use other values in the query too.\n\nMatthew\n\n-- \nContrary to popular belief, Unix is user friendly. It just happens to be\nvery selective about who its friends are. -- Kyle Hearn", "msg_date": "Wed, 12 Nov 2008 17:33:41 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "Matthew,\n\nThank you.\n\nbilkaib table contains GL transactions for every day.\n00 records are initial balance records and they appear only in start of year \nor start of month.\nThey may present or may be not present for some month if initial balance is \nnot calculated yet.\nIf 00 records are present, usuallly there are lot of them for single date \nfor db and cr columns.\nThis query finds initial balance date befeore given date.\nbilkaib table contains several year transactions so it is large.\n\n>Alternatively if you create an index on (cr, bilkaib) and one on (db, \n>bilkaib) then you will be able to use other values in the query too.\n\nI'm sorry I do'nt understand this.\nWhat does the (cr, bilkaib) syntax mean?\nShould I create two functions indexes and re-write query as Vladimir \nsuggests or is there better appoach ?\n\nAndrus. \n\n", "msg_date": "Wed, 12 Nov 2008 20:06:47 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": ">\n> This query finds initial balance date befeore given date.\n\nIf you are not interested in other balances except initial ones (the ones\nthat have '00') the best way is to create partial indices that I have\nsuggested.\nThat will keep size of indices small, while providing good performance\n(constant response time)\n\n\n> bilkaib table contains several year transactions so it is large.\n>\nThat is not a problem for the particular case. 
However, when you evaluate\nquery performance, it really makes sense giving number of rows in each table\n(is 100K rows a \"large\" table? what about 10M rows?) and other properties\nof the data stored in the table (like number of rows that have cr='00')\n\n\n> Alternatively if you create an index on (cr, bilkaib) and one on (db,\n> bilkaib) then you will be able to use other values in the query too.\n>\nThat means if you create one index on biklaib (cr, datecol) and another\nindex on (db, datecol) you will be able to improve queries like\nselect greatest(\n (select max(date) from bilkaib where datecol<=date'2008-11-01' and\ncr=XXX),\n (select max(date) from bilkaib where datecol<=date'2008-11-01' and\ndb=YYY)).\nwith arbitrary XXX and YYY. I am not sure if you really want this.\n\n\n> I'm sorry I do'nt understand this.\n> What does the (cr, bilkaib) syntax mean?\n\nI believe that should be read as (cr, datecol).\n\n\n\n> Should I create two functions indexes and re-write query as Vladimir\n> suggests or is there better appoach ?\n\nI am afraid PostgreSQL is not smart enough to rewrite query with \"or\" into\ntwo separate index scans. There is no way to improve the query significantly\nwithout rewriting it.\n\nNote: for this case indices on (datecol), (cr) and (db) are not very\nhelpful.\n\nRegards,\nVladimir Sitnikov\n\n\nThis query finds initial balance date befeore given date.If you are not interested in other balances except initial ones (the ones that have '00') the best way is to create partial indices that I have suggested.\nThat will keep size of indices small, while providing good performance (constant response time) \n\nbilkaib table contains several year transactions so it is large.That is not a problem for the particular case. However, when you evaluate query performance, it really makes sense giving number of rows in each table (is 100K rows a \"large\" table? what about 10M rows?)  and other properties of the data stored in the table (like number of rows that have cr='00')\n \n\nAlternatively if you create an index on (cr, bilkaib) and one on (db, bilkaib) then you will be able to use other values in the query too.\n\nThat means if you create one index on biklaib (cr, datecol) and another index on (db, datecol) you will be able to improve queries like select greatest(\n   (select max(date) from bilkaib where datecol<=date'2008-11-01' and cr=XXX),    (select max(date) from bilkaib where datecol<=date'2008-11-01' and db=YYY)). \nwith arbitrary XXX and YYY. I am not sure if you really want this.\n\nI'm sorry I do'nt understand this.\nWhat does the (cr, bilkaib) syntax mean?I believe that should be read as (cr, datecol).  \n\nShould I create two functions indexes and re-write query as Vladimir suggests or is there better appoach ?I am afraid PostgreSQL is not smart enough to rewrite query with \"or\" into two separate index scans. There is no way to improve the query significantly without rewriting it.\nNote:  for this case indices on (datecol), (cr) and (db) are not very helpful. 
Regards,Vladimir Sitnikov", "msg_date": "Wed, 12 Nov 2008 11:26:23 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "On Wed, Nov 12, 2008 at 07:02:10PM +0200, Andrus wrote:\n> explain analyze select max(kuupaev) from bilkaib where\n> kuupaev<=date'2008-11-01' and (cr='00' or db='00')\n\ndo you always have this: \"(cr='00' or db='00')\"? or do the values (00)\nchange?\nif they don't change, or *most* of the queries have \"(cr='00' or\ndb='00')\", than the biggest time difference you will get after creating\nthis index:\ncreate index test on bilkaib (kuupaev) where cr='00' or db='00';\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Wed, 12 Nov 2008 20:39:19 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "Vladimir,\n\n>I am afraid PostgreSQL is not smart enough to rewrite query with \"or\" into \n>two separate index scans. There is no way to improve the query \n>significantly without rewriting it.\n>Note: for this case indices on (datecol), (cr) and (db) are not very \n>helpful.\n\nThank you very much.\nI added you indexes to db and re-write query.\nNow it runs fast.\n\nAndrus.\n\n", "msg_date": "Wed, 12 Nov 2008 21:44:38 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing select max(datecol) from bilkaib where\n\tdatecol<=date'2008-11-01' and (cr='00' or db='00') speed" }, { "msg_contents": "Depesz,\n\n> do you always have this: \"(cr='00' or db='00')\"? or do the values (00)\n> change?\n> if they don't change, or *most* of the queries have \"(cr='00' or\n> db='00')\", than the biggest time difference you will get after creating\n> this index:\n> create index test on bilkaib (kuupaev) where cr='00' or db='00';\n\nI have always cr='00' or db='00' clause. Separate values are never tested.\nI changed by queries back to old values and created this single index.\nThis seems to be even better that Vladimir suggestion.\nThank you very much.\n\nAndrus.\n\n", "msg_date": "Wed, 12 Nov 2008 21:57:06 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing select max(datecol) from bilkaib\n\twheredatecol<=date'2008-11-01' and (cr='00' or db='00') speed" } ]
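Putting the suggestions in this thread together, a minimal sketch of the two alternatives (table and column names are the ones from the original post; the index names are invented for illustration, so this is a sketch rather than the exact DDL that was run):

    -- depesz's variant: one partial index whose predicate matches the query text
    CREATE INDEX bilkaib_kuupaev_00_idx
        ON bilkaib (kuupaev)
        WHERE cr = '00' OR db = '00';

    -- the original query can then be answered from that index alone
    SELECT max(kuupaev)
      FROM bilkaib
     WHERE kuupaev <= date '2008-11-01'
       AND (cr = '00' OR db = '00');

    -- Vladimir's variant: two partial indexes plus a rewritten query,
    -- useful when the two predicates are also queried separately
    CREATE INDEX bilkaib_kuupaev_cr00_idx ON bilkaib (kuupaev) WHERE cr = '00';
    CREATE INDEX bilkaib_kuupaev_db00_idx ON bilkaib (kuupaev) WHERE db = '00';

    SELECT greatest(
        (SELECT max(kuupaev) FROM bilkaib
          WHERE kuupaev <= date '2008-11-01' AND cr = '00'),
        (SELECT max(kuupaev) FROM bilkaib
          WHERE kuupaev <= date '2008-11-01' AND db = '00'));

With the single partial index the backward index scan only ever touches rows where cr or db is '00', instead of filtering every row before 2008-11-01, which is presumably why Andrus found it even faster than the greatest() rewrite.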
[ { "msg_contents": "Hi,\n\nI have to manage a database that is getting way too big for us.\nCurrently db size is 304 GB.\n\nOne table is accounting for a third of this space.\nThe table itself has 68.800.000 tuples, taking 28GB.\n\nThere are 39 indices on the table, and many of them use multiple\ncolumns. A lot of these indices share the same column(s).\nThe indices are taking 95GB.\n\nSo, here are my questions:\n\n- do these figures seem normal or is there likely a bigger problem ?\n\n- when indices share a column, is it worth creating several multi-column\nindices (as we do now), or would we get the same result (from a\nperformance point of view) by creating several single column indices\n(one for each column) ?\n\n- does the order in which a multi-column index is created matter ? That\nis, if I have a column A with less discriminating values and a column B\nwith more discriminating values, does it matter if I:\n 'CREATE INDEX myindex ON mytable USING (A,B) '\nor \n'CREATE INDEX myindex ON mytable USING (A,B) '\nIs the second solution likely to behave faster ?\nOr is it simply better to:\nCREATE INDEX myindexa ON mytable USING (A);\nCREATE INDEX myindexb ON mytable USING (B);\n\n- as we do many insert and very few update/delete, I thought REINDEX was\ngoing to be superfluous. But REINDEXing is often needed to keep the size\nof the db _relatively_ reasonable. Does it sound normal ?\n\nThanks for any tip,\nFranck\n\n\n", "msg_date": "Wed, 12 Nov 2008 18:02:31 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Disk usage question" }, { "msg_contents": "On Wed, Nov 12, 2008 at 10:02 AM, Franck Routier\n<[email protected]> wrote:\n> Hi,\n>\n> I have to manage a database that is getting way too big for us.\n> Currently db size is 304 GB.\n>\n> One table is accounting for a third of this space.\n> The table itself has 68.800.000 tuples, taking 28GB.\n>\n> There are 39 indices on the table, and many of them use multiple\n> columns. A lot of these indices share the same column(s).\n> The indices are taking 95GB.\n>\n> So, here are my questions:\n>\n> - do these figures seem normal or is there likely a bigger problem ?\n\nCan't really say. Is this a table with a single integer column? Then\nit's way too big. If it's got plenty of columns, some of which are\ntext or bytea then probably not. What does vacuum verbose say about\nyour tables / db?\n\n> - when indices share a column, is it worth creating several multi-column\n> indices (as we do now), or would we get the same result (from a\n> performance point of view) by creating several single column indices\n> (one for each column) ?\n\nNo, single field indexes are not as fast as multi-field indexes when\nthe where clause hits the fields starting from the left of the index.\nNote that an index on (a,b) will not help a where clause on only b.\n\n> - does the order in which a multi-column index is created matter ? That\n> is, if I have a column A with less discriminating values and a column B\n> with more discriminating values, does it matter if I:\n> 'CREATE INDEX myindex ON mytable USING (A,B) '\n> or\n> 'CREATE INDEX myindex ON mytable USING (A,B) '\n\nThose look the same, I assume you meant USING (B,A) for one.\n\nAssuming both fields are used by the query's where clause, the more\nselective one should be first. I think. 
testing will tell for sure.\n\n> Is the second solution likely to behave faster ?\n> Or is it simply better to:\n> CREATE INDEX myindexa ON mytable USING (A);\n> CREATE INDEX myindexb ON mytable USING (B);\n\nMaybe. If one is very selective then there's no great need for the\nother anyway.\n\n> - as we do many insert and very few update/delete, I thought REINDEX was\n> going to be superfluous. But REINDEXing is often needed to keep the size\n> of the db _relatively_ reasonable. Does it sound normal ?\n\nYes, if you have a lot of failed inserts. a failed insert = insert + delete.\n", "msg_date": "Wed, 12 Nov 2008 16:44:07 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk usage question" } ]
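To make the point about multi-column indexes and column order concrete, a small sketch; mytable, a and b are placeholder names for illustration, not columns from the original schema:

    CREATE INDEX mytable_a_b_idx ON mytable (a, b);

    -- these can use mytable_a_b_idx (the leftmost column a is constrained)
    SELECT * FROM mytable WHERE a = 1;
    SELECT * FROM mytable WHERE a = 1 AND b = 2;

    -- this cannot use it efficiently, because b alone is not a leftmost
    -- prefix of the index; it needs its own index
    SELECT * FROM mytable WHERE b = 2;
    CREATE INDEX mytable_b_idx ON mytable (b);

    -- checking how much space one index actually takes
    SELECT pg_size_pretty(pg_relation_size('mytable_a_b_idx'));

Comparing pg_relation_size() across the 39 indexes is probably the quickest way to see which of them account for most of the 95GB, and whether some of the overlapping multi-column ones can be dropped.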
[ { "msg_contents": "Hello !\n\nSorry for the subject, I didn't found a better one ! :-/\n\nI'm having a problem with this query (below) that takes betweend 14 and \n15 seconds to run, which is too long for the end-user.\n\nI've done a EXPLAIN ANALYZE (below below) but I'm having difficulties to \nsee which part of that query is taking so many times.\n\nIf the lines are too long, your mailreader may cut them and make the SQL \nquery and the query plan unreadable, so I've put a copy of them on \npastebin.com : <http://pastebin.com/m53ca365>\n\nCan you give me some tips to see which part of the query is guilty ?\n\nMany thanks in advance for any tips to solve that slowness !\n\n####################################\nSELECT pk_societe_id,\n denomination_commerciale,\n denomination_sociale,\n numero_client,\n COALESCE(stats_commandes.nombre, 0) AS societe_nbre_commandes,\n COALESCE(stats_adresses_livraison.nombre, 0) AS \nsociete_adresses_livraison_quantite,\n COALESCE(stats_adresses_facturation.nombre, 0) AS \nsociete_adresses_facturation_quantite,\n COALESCE(NULLIF(admin_email,''), NULLIF(admin_bis_email,''), \nNULLIF(admin_ter_email,''), 'n/a') AS email,\n COALESCE(NULLIF(admin_tel,''), NULLIF(admin_bis_tel,''), \nNULLIF(admin_ter_tel,''), 'n/a') AS telephone,\n remise_permanente,\n is_horeca\nFROM societes\nLEFT JOIN (\n SELECT societes.pk_societe_id AS societe_id,\n COUNT(commandes.pk_commande_id) AS nombre\n FROM commandes\n INNER JOIN clients ON commandes.fk_client_id = \nclients.pk_client_id\n INNER JOIN societes ON clients.fk_societe_id = \nsocietes.pk_societe_id\n GROUP BY societes.pk_societe_id\n ) AS stats_commandes ON stats_commandes.societe_id = \nsocietes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_livraison_id) AS nombre\n FROM societes_adresses_livraison\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_livraison ON \nstats_adresses_livraison.societe_id = societes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_facturation_id) AS nombre\n FROM societes_adresses_facturation\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_facturation ON \nstats_adresses_facturation.societe_id = societes.pk_societe_id\nWHERE societes.is_deleted = FALSE\nAND EXISTS (\n SELECT 1 FROM commandes\n INNER JOIN clients ON commandes.fk_client_id = \nclients.pk_client_id\n INNER JOIN societes AS societe_client ON \nclients.fk_societe_id = societe_client.pk_societe_id\n WHERE delivery_date_livraison BETWEEN (NOW() - '1 \nyear'::interval) AND NOW() AND societe_client.pk_societe_id = \nsocietes.pk_societe_id\n )\nORDER BY LOWER(denomination_commerciale);\n\n####################################\n\n\nHere's an EXPLAIN ANALYZE of that query :\n\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=189404.60..189405.63 rows=414 width=147) (actual \ntime=13614.677..13615.138 rows=285 loops=1)\n Sort Key: lower((societes.denomination_commerciale)::text)\n -> Hash Left Join (cost=695.29..189386.60 rows=414 width=147) \n(actual time=143.767..13612.052 rows=285 loops=1)\n Hash Cond: (\"outer\".pk_societe_id = \"inner\".societe_id)\n -> Merge Left Join (cost=640.55..189226.33 rows=414 \nwidth=139) (actual time=132.203..13598.267 rows=285 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \"inner\".societe_id)\n -> Merge Left Join 
(cost=549.82..189126.52 rows=414 \nwidth=131) (actual time=120.373..13581.980 rows=285 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \n\"inner\".societe_id)\n -> Index Scan using pkey_societe_id on societes \n(cost=0.00..188566.96 rows=414 width=123) (actual time=53.993..13511.770 \nrows=285 loops=1)\n Filter: ((NOT is_deleted) AND (subplan))\n SubPlan\n -> Nested Loop (cost=35.56..378.16 \nrows=2 width=0) (actual time=16.511..16.511 rows=0 loops=818)\n -> Nested Loop (cost=35.56..368.82 \nrows=2 width=8) (actual time=16.504..16.504 rows=0 loops=818)\n Join Filter: \n(\"inner\".fk_client_id = \"outer\".pk_client_id)\n -> Seq Scan on clients \n(cost=0.00..69.69 rows=1 width=16) (actual time=0.255..0.474 rows=1 \nloops=818)\n Filter: ($0 = fk_societe_id)\n -> Bitmap Heap Scan on \ncommandes (cost=35.56..264.64 rows=2759 width=8) (actual \ntime=6.119..10.385 rows=2252 loops=911)\n Recheck Cond: \n((delivery_date_livraison >= (now() - '1 year'::interval)) AND \n(delivery_date_livraison <= now()))\n -> Bitmap Index Scan on \nidx_date_livraison (cost=0.00..35.56 rows=2759 width=0) (actual \ntime=6.097..6.097 rows=3109 loops=911)\n Index Cond: \n((delivery_date_livraison >= (now() - '1 year'::interval)) AND \n(delivery_date_livraison <= now()))\n -> Index Scan using pkey_societe_id \non societes societe_client (cost=0.00..4.66 rows=1 width=8) (actual \ntime=0.006..0.006 rows=1 loops=285)\n Index Cond: (pk_societe_id = $0)\n -> Sort (cost=549.82..552.10 rows=911 width=16) \n(actual time=66.362..67.343 rows=562 loops=1)\n Sort Key: stats_commandes.societe_id\n -> Subquery Scan stats_commandes \n(cost=484.54..505.04 rows=911 width=16) (actual time=61.656..64.737 \nrows=563 loops=1)\n -> HashAggregate \n(cost=484.54..495.93 rows=911 width=16) (actual time=61.651..62.790 \nrows=563 loops=1)\n -> Hash Join \n(cost=135.22..457.01 rows=5506 width=16) (actual time=13.889..49.362 \nrows=5958 loops=1)\n Hash Cond: \n(\"outer\".fk_client_id = \"inner\".pk_client_id)\n -> Seq Scan on commandes \n (cost=0.00..233.50 rows=6650 width=16) (actual time=0.003..12.145 \nrows=5958 loops=1)\n -> Hash \n(cost=132.46..132.46 rows=1105 width=16) (actual time=13.855..13.855 \nrows=1082 loops=1)\n -> Hash Join \n(cost=48.39..132.46 rows=1105 width=16) (actual time=4.088..11.448 \nrows=1082 loops=1)\n Hash Cond: \n(\"outer\".fk_societe_id = \"inner\".pk_societe_id)\n -> Seq Scan \non clients (cost=0.00..66.35 rows=1335 width=16) (actual \ntime=0.004..2.644 rows=1308 loops=1)\n -> Hash \n(cost=46.11..46.11 rows=911 width=8) (actual time=4.051..4.051 rows=903 \nloops=1)\n -> Seq \nScan on societes (cost=0.00..46.11 rows=911 width=8) (actual \ntime=0.009..2.074 rows=903 loops=1)\n -> Sort (cost=90.72..92.83 rows=844 width=16) (actual \ntime=11.784..13.245 rows=883 loops=1)\n Sort Key: stats_adresses_livraison.societe_id\n -> Subquery Scan stats_adresses_livraison \n(cost=30.71..49.70 rows=844 width=16) (actual time=4.724..9.537 rows=885 \nloops=1)\n -> HashAggregate (cost=30.71..41.26 \nrows=844 width=16) (actual time=4.719..6.486 rows=885 loops=1)\n -> Seq Scan on \nsocietes_adresses_livraison (cost=0.00..25.90 rows=962 width=16) \n(actual time=0.010..2.328 rows=991 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=52.48..52.48 rows=903 width=16) (actual \ntime=11.507..11.507 rows=903 loops=1)\n -> Subquery Scan stats_adresses_facturation \n(cost=32.16..52.48 rows=903 width=16) (actual time=4.604..9.510 rows=903 \nloops=1)\n -> HashAggregate (cost=32.16..43.45 rows=903 \nwidth=16) (actual time=4.600..6.399 rows=903 
loops=1)\n -> Seq Scan on \nsocietes_adresses_facturation (cost=0.00..27.25 rows=983 width=16) \n(actual time=0.009..2.297 rows=943 loops=1)\n Filter: (NOT is_deleted)\n Total runtime: 13618.033 ms\n(47 lignes)\n\n\n####################################\n\nRegards,\n\n-- \nBruno Baguette\n", "msg_date": "Thu, 13 Nov 2008 12:02:58 +0100", "msg_from": "Bruno Baguette <[email protected]>", "msg_from_op": true, "msg_subject": "Slow SQL query (14-15 seconds)" }, { "msg_contents": "Bruno Baguette napisal 13.11.2008 12:02:\n> Hello !\n> \n> Sorry for the subject, I didn't found a better one ! :-/\n> \n> I'm having a problem with this query (below) that takes betweend 14 and \n> 15 seconds to run, which is too long for the end-user.\n> \n> I've done a EXPLAIN ANALYZE (below below) but I'm having difficulties to \n> see which part of that query is taking so many times.\n> \n> If the lines are too long, your mailreader may cut them and make the SQL \n> query and the query plan unreadable, so I've put a copy of them on \n> pastebin.com : <http://pastebin.com/m53ca365>\n> \n> Can you give me some tips to see which part of the query is guilty ?\n\n1. Your explain analyze points to a lot of loops in exists clause:\n\nFilter: ((NOT is_deleted) AND (subplan))\n16.5msec * 800loops = ~13sec.\n\nTry to replace exists() with in() or inner joins/distinct.\n\n2. Those 3 left joins can be replaced with subselects:\nselect (select count(*)... ) as societe_nbre_commandes\nfrom societes ...\n\n-- \nRegards,\nTomasz Myrta\n", "msg_date": "Thu, 13 Nov 2008 14:16:09 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow SQL query (14-15 seconds)" }, { "msg_contents": "On Thu, 13 Nov 2008, Bruno Baguette wrote:\n> I'm having a problem with this query (below) that takes between 14 and 15 \n> seconds to run, which is too long for the end-user.\n>\n> I've done a EXPLAIN ANALYZE (below below) but I'm having difficulties to see \n> which part of that query is taking so many times.\n\nAs a general tip, if you're trying to work out which part of a query is \ntaking time, and the query is fairly obviously made up of several parts, \nit would make sense to try them individually.\n\nIn any case, it appears that the time is being taken performing a full \nindex scan over the societe table, in one of the subqueries. Perhaps you \ncould run each of the subqueries individually, and send us the one that \ntakes loads of time as a simpler problem to solve.\n\nMatthew\n\n-- \nThose who do not understand Unix are condemned to reinvent it, poorly.\n -- Henry Spencer\n", "msg_date": "Thu, 13 Nov 2008 13:28:13 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow SQL query (14-15 seconds)" }, { "msg_contents": "Bruno Baguette <[email protected]> writes:\n> I'm having a problem with this query (below) that takes betweend 14 and \n> 15 seconds to run, which is too long for the end-user.\n> I've done a EXPLAIN ANALYZE (below below) but I'm having difficulties to \n> see which part of that query is taking so many times.\n\nIt's the repeatedly executed EXISTS subplan that's hurting you:\n\n> SubPlan\n> -> Nested Loop (cost=35.56..378.16 \n> rows=2 width=0) (actual time=16.511..16.511 rows=0 loops=818)\n\n16.511 * 818 = 13505.998, so this is all but about 100 msec of the\nruntime. Can't tell if there's any easy way to improve it. 
In\npre-8.4 releases trying to convert the EXISTS into an IN might help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2008 08:31:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow SQL query (14-15 seconds) " }, { "msg_contents": "Le 13/11/08 14:31, Tom Lane a �crit :\n> It's the repeatedly executed EXISTS subplan that's hurting you:\n> \n>> SubPlan\n>> -> Nested Loop (cost=35.56..378.16 \n>> rows=2 width=0) (actual time=16.511..16.511 rows=0 loops=818)\n> \n> 16.511 * 818 = 13505.998, so this is all but about 100 msec of the\n> runtime. Can't tell if there's any easy way to improve it. In\n> pre-8.4 releases trying to convert the EXISTS into an IN might help.\n\nHello Tom !\n\nIf I replace the EXISTS by a IN subquery, it falls from 14-15 seconds to \n5 seconds !\n\n####################################\nAND EXISTS (\n SELECT 1 FROM commandes\n INNER JOIN clients ON commandes.fk_client_id = \nclients.pk_client_id\n INNER JOIN societes AS societe_client ON \nclients.fk_societe_id = societe_client.pk_societe_id\n WHERE delivery_date_livraison BETWEEN (NOW() - '1 \nyear'::interval) AND NOW() AND societe_client.pk_societe_id = \nsocietes.pk_societe_id\n )\n####################################\n\nreplaced by a IN subquery\n\n####################################\nAND societes.pk_societe_id IN (\n SELECT societes.pk_societe_id\n FROM commandes\n INNER JOIN clients ON \ncommandes.fk_client_id = clients.pk_client_id\n INNER JOIN societes AS societe_client \nON clients.fk_societe_id = societe_client.pk_societe_id\n WHERE delivery_date_livraison BETWEEN \n(NOW() - '1 year'::interval) AND NOW()\n )\n####################################\n\nHeres's the EXPLAIN ANALYZE of the new SQL query :\n\n\n####################################\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=280995.27..280996.30 rows=414 width=147) (actual \ntime=5164.297..5165.638 rows=818 loops=1)\n Sort Key: lower((societes.denomination_commerciale)::text)\n -> Hash Left Join (cost=697.38..280977.27 rows=414 width=147) \n(actual time=110.093..5156.853 rows=818 loops=1)\n Hash Cond: (\"outer\".pk_societe_id = \"inner\".societe_id)\n -> Merge Left Join (cost=642.64..280817.00 rows=414 \nwidth=139) (actual time=98.886..5141.305 rows=818 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \"inner\".societe_id)\n -> Merge Left Join (cost=551.92..280717.18 rows=414 \nwidth=131) (actual time=87.278..5123.133 rows=818 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \n\"inner\".societe_id)\n -> Index Scan using pkey_societe_id on societes \n(cost=0.00..280155.54 rows=414 width=123) (actual time=21.748..5051.976 \nrows=818 loops=1)\n Filter: ((NOT is_deleted) AND (subplan))\n SubPlan\n -> Hash Join (cost=170.88..438.17 \nrows=2298 width=0) (actual time=6.165..6.165 rows=1 loops=818)\n Hash Cond: (\"outer\".fk_client_id = \n\"inner\".pk_client_id)\n -> Bitmap Heap Scan on commandes \n(cost=35.66..266.10 rows=2775 width=8) (actual time=6.144..6.144 rows=1 \nloops=818)\n Recheck Cond: \n((delivery_date_livraison >= (now() - '1 year'::interval)) AND \n(delivery_date_livraison <= now()))\n -> Bitmap Index Scan on \nidx_date_livraison (cost=0.00..35.66 rows=2775 width=0) (actual \ntime=6.121..6.121 rows=3109 loops=818)\n Index Cond: \n((delivery_date_livraison >= (now() - '1 year'::interval)) AND \n(delivery_date_livraison <= 
now()))\n -> Hash (cost=132.46..132.46 \nrows=1105 width=8) (actual time=13.573..13.573 rows=1082 loops=1)\n -> Hash Join \n(cost=48.39..132.46 rows=1105 width=8) (actual time=3.933..11.246 \nrows=1082 loops=1)\n Hash Cond: \n(\"outer\".fk_societe_id = \"inner\".pk_societe_id)\n -> Seq Scan on clients \n (cost=0.00..66.35 rows=1335 width=16) (actual time=0.004..2.623 \nrows=1308 loops=1)\n -> Hash \n(cost=46.11..46.11 rows=911 width=8) (actual time=3.900..3.900 rows=903 \nloops=1)\n -> Seq Scan on \nsocietes societe_client (cost=0.00..46.11 rows=911 width=8) (actual \ntime=0.004..1.947 rows=903 loops=1)\n -> Sort (cost=551.92..554.20 rows=911 width=16) \n(actual time=65.518..66.453 rows=563 loops=1)\n Sort Key: stats_commandes.societe_id\n -> Subquery Scan stats_commandes \n(cost=486.64..507.14 rows=911 width=16) (actual time=61.034..64.117 \nrows=563 loops=1)\n -> HashAggregate \n(cost=486.64..498.03 rows=911 width=16) (actual time=61.028..62.177 \nrows=563 loops=1)\n -> Hash Join \n(cost=135.22..458.94 rows=5539 width=16) (actual time=13.517..48.643 \nrows=5971 loops=1)\n Hash Cond: \n(\"outer\".fk_client_id = \"inner\".pk_client_id)\n -> Seq Scan on commandes \n (cost=0.00..234.90 rows=6690 width=16) (actual time=0.004..11.951 \nrows=5971 loops=1)\n -> Hash \n(cost=132.46..132.46 rows=1105 width=16) (actual time=13.486..13.486 \nrows=1082 loops=1)\n -> Hash Join \n(cost=48.39..132.46 rows=1105 width=16) (actual time=3.827..11.123 \nrows=1082 loops=1)\n Hash Cond: \n(\"outer\".fk_societe_id = \"inner\".pk_societe_id)\n -> Seq Scan \non clients (cost=0.00..66.35 rows=1335 width=16) (actual \ntime=0.003..2.566 rows=1308 loops=1)\n -> Hash \n(cost=46.11..46.11 rows=911 width=8) (actual time=3.802..3.802 rows=903 \nloops=1)\n -> Seq \nScan on societes (cost=0.00..46.11 rows=911 width=8) (actual \ntime=0.004..1.906 rows=903 loops=1)\n -> Sort (cost=90.72..92.83 rows=844 width=16) (actual \ntime=11.566..13.070 rows=885 loops=1)\n Sort Key: stats_adresses_livraison.societe_id\n -> Subquery Scan stats_adresses_livraison \n(cost=30.71..49.70 rows=844 width=16) (actual time=4.504..9.357 rows=885 \nloops=1)\n -> HashAggregate (cost=30.71..41.26 \nrows=844 width=16) (actual time=4.499..6.304 rows=885 loops=1)\n -> Seq Scan on \nsocietes_adresses_livraison (cost=0.00..25.90 rows=962 width=16) \n(actual time=0.005..2.221 rows=991 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=52.48..52.48 rows=903 width=16) (actual \ntime=11.164..11.164 rows=903 loops=1)\n -> Subquery Scan stats_adresses_facturation \n(cost=32.16..52.48 rows=903 width=16) (actual time=4.339..9.220 rows=903 \nloops=1)\n -> HashAggregate (cost=32.16..43.45 rows=903 \nwidth=16) (actual time=4.334..6.116 rows=903 loops=1)\n -> Seq Scan on \nsocietes_adresses_facturation (cost=0.00..27.25 rows=983 width=16) \n(actual time=0.006..2.128 rows=943 loops=1)\n Filter: (NOT is_deleted)\n Total runtime: 5167.896 ms\n(48 lignes)\n\n####################################\n\n\nMany thanks for the help, that's already better (3x time faster) !\n\nCan you explain why a IN is fastest than an EXISTS subquery ? Until now, \nI was thinking that IN would require PostgreSQL to scan all the table \n(from the beginning to the end) and that EXISTS would require to scan \nall the table (from the beginning until getting one match).\n\nDo you think I can improve again the performance of that query ? 
I \nexpected more speed since theses are little tables\n\ndelivery=> SELECT COUNT(*) FROM societes;\n count\n-------\n 903\n(1 ligne)\n\ndelivery=> SELECT COUNT(*) FROM clients;\n count\n-------\n 1308\n(1 ligne)\n\ndelivery=> SELECT COUNT(*) FROM commandes;\n count\n-------\n 5972\n(1 ligne)\n\n\nOne reader told me Gmail was guilty for cutting the lines, so I've put a \ncopy of the query plan on pastebin.com to keep it readable : \n<http://pastebin.com/m6434f639>\n\nThanks in advance for any tips !\n\nRegards,\n\n-- \nBruno Baguette\n", "msg_date": "Thu, 13 Nov 2008 15:19:42 +0100", "msg_from": "Bruno Baguette <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow SQL query (14-15 seconds)" }, { "msg_contents": "Could you please try this one:\n\nSELECT pk_societe_id,\n denomination_commerciale,\n denomination_sociale,\n numero_client,\n COALESCE(stats_commandes.nombre, 0) AS societe_nbre_commandes,\n COALESCE(stats_adresses_livraison.nombre, 0) AS\nsociete_adresses_livraison_quantite,\n COALESCE(stats_adresses_facturation.nombre, 0) AS\nsociete_adresses_facturation_quantite,\n COALESCE(NULLIF(admin_email,''), NULLIF(admin_bis_email,''),\nNULLIF(admin_ter_email,''), 'n/a') AS email,\n COALESCE(NULLIF(admin_tel,''), NULLIF(admin_bis_tel,''),\nNULLIF(admin_ter_tel,''), 'n/a') AS telephone,\n remise_permanente,\n is_horeca\nFROM societes\nLEFT JOIN (\n SELECT societes.pk_societe_id AS societe_id,\n COUNT(commandes.pk_commande_id) AS nombre,\n max(case when delivery_date_livraison BETWEEN (NOW() - '1\nyear'::interval) AND NOW() then 1 end) AS il_y_avait_un_commande\n FROM commandes\n INNER JOIN clients ON commandes.fk_client_id =\nclients.pk_client_id\n INNER JOIN societes ON clients.fk_societe_id =\nsocietes.pk_societe_id\n GROUP BY societes.pk_societe_id\n ) AS stats_commandes ON stats_commandes.societe_id =\nsocietes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_livraison_id) AS nombre,\n\n FROM societes_adresses_livraison\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_livraison ON\nstats_adresses_livraison.societe_id = societes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_facturation_id) AS nombre\n FROM societes_adresses_facturation\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_facturation ON\nstats_adresses_facturation.societe_id = societes.pk_societe_id\nWHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1\nORDER BY LOWER(denomination_commerciale);\n\nBien a vous,\nVladimir Sitnikov\n\nCould you please try this one:SELECT pk_societe_id,      denomination_commerciale,\n      denomination_sociale,      numero_client,\n      COALESCE(stats_commandes.nombre, 0) AS societe_nbre_commandes,      COALESCE(stats_adresses_livraison.nombre, 0) AS societe_adresses_livraison_quantite,\n      COALESCE(stats_adresses_facturation.nombre, 0) AS societe_adresses_facturation_quantite,      COALESCE(NULLIF(admin_email,''), NULLIF(admin_bis_email,''), NULLIF(admin_ter_email,''), 'n/a') AS email,\n      COALESCE(NULLIF(admin_tel,''), NULLIF(admin_bis_tel,''), NULLIF(admin_ter_tel,''), 'n/a') AS telephone,\n      remise_permanente,      is_horeca\nFROM societesLEFT JOIN (\n           SELECT societes.pk_societe_id AS societe_id,                  COUNT(commandes.pk_commande_id) AS nombre,\n                  max(case when delivery_date_livraison BETWEEN (NOW() - '1 year'::interval) AND NOW() then 1 end) AS il_y_avait_un_commande\n           FROM commandes      
     INNER JOIN clients ON commandes.fk_client_id = clients.pk_client_id\n           INNER JOIN societes ON clients.fk_societe_id = societes.pk_societe_id           GROUP BY societes.pk_societe_id\n         ) AS stats_commandes ON stats_commandes.societe_id = societes.pk_societe_idLEFT JOIN (\n           SELECT fk_societe_id AS societe_id,                  COUNT(pk_adresse_livraison_id) AS nombre,\n           FROM societes_adresses_livraison           WHERE is_deleted = FALSE\n           GROUP BY fk_societe_id         ) AS stats_adresses_livraison ON stats_adresses_livraison.societe_id = societes.pk_societe_id\nLEFT JOIN (           SELECT fk_societe_id AS societe_id,\n                  COUNT(pk_adresse_facturation_id) AS nombre           FROM societes_adresses_facturation\n           WHERE is_deleted = FALSE           GROUP BY fk_societe_id\n         ) AS stats_adresses_facturation ON stats_adresses_facturation.societe_id = societes.pk_societe_idWHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1\nORDER BY LOWER(denomination_commerciale);Bien a vous,Vladimir Sitnikov", "msg_date": "Thu, 13 Nov 2008 06:29:10 -0800", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow SQL query (14-15 seconds)" }, { "msg_contents": "Bruno Baguette <[email protected]> writes:\n> Le 13/11/08 14:31, Tom Lane a �crit :\n>> 16.511 * 818 = 13505.998, so this is all but about 100 msec of the\n>> runtime. Can't tell if there's any easy way to improve it. In\n>> pre-8.4 releases trying to convert the EXISTS into an IN might help.\n\n> Can you explain why a IN is fastest than an EXISTS subquery ?\n\nThe planner is smarter about IN than EXISTS --- it can usually convert\nthe former into a join plan instead of a subplan. (This situation will\nimprove in 8.4.)\n\n> Do you think I can improve again the performance of that query ?\n\nYou've still got a subplan in there, not quite sure why. Anyway,\nincreasing work_mem might get it to change to a hashed subplan,\nwhich'd likely be faster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Nov 2008 09:30:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow SQL query (14-15 seconds) " }, { "msg_contents": "Le 13/11/08 14:28, Matthew Wakeling a �crit :\n> On Thu, 13 Nov 2008, Bruno Baguette wrote:\n>> I'm having a problem with this query (below) that takes between 14 and \n>> 15 seconds to run, which is too long for the end-user.\n>>\n>> I've done a EXPLAIN ANALYZE (below below) but I'm having difficulties \n>> to see which part of that query is taking so many times.\n\nHello Matthew !\n\n> As a general tip, if you're trying to work out which part of a query is \n> taking time, and the query is fairly obviously made up of several parts, \n> it would make sense to try them individually.\n\nI did a try separately for each LEFT JOIN and EXISTS, but I didn't \nunderstood that the EXISTS was guilty, since it was fast to me (indeed, \nI did't saw that it was runned 818 times !).\n\nWith Tomasz, Tom and your suggest, I've changed the EXISTS subquery to a \nIN subquery (cf. my answer to Tom). The query time was going from 14-15 \nseconds to ~5 seconds.\n\nI just found an faster way by moving the \"AND societe.is_deleted = FALSE \n\"from the main query to the IN subquery. The query is now running in 165 \nms !!! 
:-)\n\nHere's the current SQL query :\n\n####################################\nSELECT pk_societe_id,\n denomination_commerciale,\n denomination_sociale,\n numero_client,\n COALESCE(stats_commandes.nombre, 0) AS societe_nbre_commandes,\n COALESCE(stats_adresses_livraison.nombre, 0) AS \nsociete_adresses_livraison_quantite,\n COALESCE(stats_adresses_facturation.nombre, 0) AS \nsociete_adresses_facturation_quantite,\n COALESCE(NULLIF(admin_email,''), NULLIF(admin_bis_email,''), \nNULLIF(admin_ter_email,''), 'n/a') AS email,\n COALESCE(NULLIF(admin_tel,''), NULLIF(admin_bis_tel,''), \nNULLIF(admin_ter_tel,''), 'n/a') AS telephone,\n remise_permanente,\n is_horeca\nFROM societes\nLEFT JOIN (\n SELECT societes.pk_societe_id AS societe_id,\n COUNT(commandes.pk_commande_id) AS nombre\n FROM commandes\n INNER JOIN clients ON commandes.fk_client_id = \nclients.pk_client_id\n INNER JOIN societes ON clients.fk_societe_id = \nsocietes.pk_societe_id\n GROUP BY societes.pk_societe_id\n ) AS stats_commandes ON stats_commandes.societe_id = \nsocietes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_livraison_id) AS nombre\n FROM societes_adresses_livraison\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_livraison ON \nstats_adresses_livraison.societe_id = societes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_facturation_id) AS nombre\n FROM societes_adresses_facturation\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_facturation ON \nstats_adresses_facturation.societe_id = societes.pk_societe_id\nWHERE societes.pk_societe_id IN (\n SELECT societe_client.pk_societe_id\n FROM commandes\n INNER JOIN clients ON \ncommandes.fk_client_id = clients.pk_client_id\n INNER JOIN societes AS societe_client \nON clients.fk_societe_id = societe_client.pk_societe_id\n WHERE delivery_date_livraison BETWEEN \n(NOW() - '1 year'::interval) AND NOW()\n AND societe_client.is_deleted = FALSE\n )\nORDER BY LOWER(denomination_commerciale);\n####################################\n\nand the query plan :\n\n####################################\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1311.74..1313.79 rows=821 width=147) (actual \ntime=162.924..163.400 rows=285 loops=1)\n Sort Key: lower((societes.denomination_commerciale)::text)\n -> Hash IN Join (cost=1196.61..1272.00 rows=821 width=147) (actual \ntime=137.164..160.354 rows=285 loops=1)\n Hash Cond: (\"outer\".pk_societe_id = \"inner\".pk_societe_id)\n -> Merge Left Join (cost=788.60..837.19 rows=903 width=147) \n(actual time=95.140..116.124 rows=903 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \"inner\".societe_id)\n -> Merge Left Join (cost=695.31..728.65 rows=903 \nwidth=139) (actual time=83.413..97.585 rows=903 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \n\"inner\".societe_id)\n -> Merge Left Join (cost=602.27..620.33 rows=903 \nwidth=131) (actual time=71.751..79.176 rows=903 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \n\"inner\".societe_id)\n -> Sort (cost=89.36..91.62 rows=903 \nwidth=123) (actual time=5.966..7.494 rows=903 loops=1)\n Sort Key: societes.pk_societe_id\n -> Seq Scan on societes \n(cost=0.00..45.03 rows=903 width=123) (actual time=0.007..2.775 rows=903 \nloops=1)\n -> Sort (cost=512.91..515.17 rows=903 \nwidth=16) (actual time=65.773..66.726 rows=563 
loops=1)\n Sort Key: stats_commandes.societe_id\n -> Subquery Scan stats_commandes \n(cost=448.26..468.58 rows=903 width=16) (actual time=61.278..64.345 \nrows=563 loops=1)\n -> HashAggregate \n(cost=448.26..459.55 rows=903 width=16) (actual time=61.273..62.413 \nrows=563 loops=1)\n -> Hash Join \n(cost=132.44..423.38 rows=4977 width=16) (actual time=13.740..48.912 \nrows=5972 loops=1)\n Hash Cond: \n(\"outer\".fk_client_id = \"inner\".pk_client_id)\n -> Seq Scan on \ncommandes (cost=0.00..211.11 rows=6011 width=16) (actual \ntime=0.004..11.882 rows=5972 loops=1)\n -> Hash \n(cost=129.74..129.74 rows=1083 width=16) (actual time=13.711..13.711 \nrows=1082 loops=1)\n -> Hash Join \n (cost=47.29..129.74 rows=1083 width=16) (actual time=3.882..11.315 \nrows=1082 loops=1)\n Hash \nCond: (\"outer\".fk_societe_id = \"inner\".pk_societe_id)\n -> Seq \nScan on clients (cost=0.00..65.08 rows=1308 width=16) (actual \ntime=0.003..2.652 rows=1308 loops=1)\n -> Hash \n (cost=45.03..45.03 rows=903 width=8) (actual time=3.846..3.846 \nrows=903 loops=1)\n -> \n Seq Scan on societes (cost=0.00..45.03 rows=903 width=8) (actual \ntime=0.004..1.897 rows=903 loops=1)\n -> Sort (cost=93.04..95.21 rows=868 width=16) \n(actual time=11.651..13.149 rows=885 loops=1)\n Sort Key: stats_adresses_livraison.societe_id\n -> Subquery Scan stats_adresses_livraison \n(cost=31.14..50.67 rows=868 width=16) (actual time=4.602..9.398 rows=885 \nloops=1)\n -> HashAggregate (cost=31.14..41.99 \nrows=868 width=16) (actual time=4.598..6.370 rows=885 loops=1)\n -> Seq Scan on \nsocietes_adresses_livraison (cost=0.00..26.19 rows=990 width=16) \n(actual time=0.006..2.225 rows=991 loops=1)\n Filter: (NOT is_deleted)\n -> Sort (cost=93.29..95.46 rows=866 width=16) (actual \ntime=11.718..13.221 rows=903 loops=1)\n Sort Key: stats_adresses_facturation.societe_id\n -> Subquery Scan stats_adresses_facturation \n(cost=31.55..51.04 rows=866 width=16) (actual time=4.502..9.424 rows=903 \nloops=1)\n -> HashAggregate (cost=31.55..42.38 \nrows=866 width=16) (actual time=4.498..6.311 rows=903 loops=1)\n -> Seq Scan on \nsocietes_adresses_facturation (cost=0.00..26.84 rows=943 width=16) \n(actual time=0.006..2.180 rows=943 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=403.31..403.31 rows=1877 width=16) (actual \ntime=41.623..41.623 rows=2677 loops=1)\n -> Hash Join (cost=164.98..403.31 rows=1877 width=16) \n(actual time=19.522..35.816 rows=2677 loops=1)\n Hash Cond: (\"outer\".fk_client_id = \n\"inner\".pk_client_id)\n -> Bitmap Heap Scan on commandes \n(cost=33.97..241.06 rows=2493 width=8) (actual time=6.043..11.625 \nrows=2774 loops=1)\n Recheck Cond: ((delivery_date_livraison >= \n(now() - '1 year'::interval)) AND (delivery_date_livraison <= now()))\n -> Bitmap Index Scan on idx_date_livraison \n (cost=0.00..33.97 rows=2493 width=0) (actual time=6.018..6.018 \nrows=2774 loops=1)\n Index Cond: ((delivery_date_livraison \n >= (now() - '1 year'::interval)) AND (delivery_date_livraison <= now()))\n -> Hash (cost=128.55..128.55 rows=985 width=24) \n(actual time=13.465..13.465 rows=1016 loops=1)\n -> Hash Join (cost=47.08..128.55 rows=985 \nwidth=24) (actual time=4.062..11.293 rows=1016 loops=1)\n Hash Cond: (\"outer\".fk_societe_id = \n\"inner\".pk_societe_id)\n -> Seq Scan on clients \n(cost=0.00..65.08 rows=1308 width=16) (actual time=0.003..2.635 \nrows=1308 loops=1)\n -> Hash (cost=45.03..45.03 rows=821 \nwidth=8) (actual time=4.002..4.002 rows=818 loops=1)\n -> Seq Scan on societes \nsociete_client (cost=0.00..45.03 rows=821 width=8) 
(actual \ntime=0.006..2.363 rows=818 loops=1)\n Filter: (NOT is_deleted)\n Total runtime: 164.639 ms\n(53 lignes)\n\n####################################\n\nTo keep the reading easy, I've put a copy on pastebin : \n<http://pastebin.com/m33388d93>\n\nMany thanks everybody for your help !\n\nKing regards,\n\n-- \nBruno Baguette\n", "msg_date": "Thu, 13 Nov 2008 15:39:20 +0100", "msg_from": "Bruno Baguette <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow SQL query (14-15 seconds)" }, { "msg_contents": "Le 13/11/08 15:29, Vladimir Sitnikov a �crit :\n> Could you please try this one:\n\nHello Vladimir !\n\nThanks for your suggest ! I've changed a small typo in your SQL query \nsuggestion (extra comma in the second LEFT JOIN).\nYour suggest is fast also (137 ms), but it returns less rows than mine \n(39 rows instead of 48). I'm looking to find why there is a difference \nbetween theses queries.\n\n####################################\nSELECT pk_societe_id,\n denomination_commerciale,\n denomination_sociale,\n numero_client,\n COALESCE(stats_commandes.nombre, 0) AS societe_nbre_commandes,\n COALESCE(stats_adresses_livraison.nombre, 0) AS \nsociete_adresses_livraison_quantite,\n COALESCE(stats_adresses_facturation.nombre, 0) AS \nsociete_adresses_facturation_quantite,\n COALESCE(NULLIF(admin_email,''), NULLIF(admin_bis_email,''), \nNULLIF(admin_ter_email,''), 'n/a') AS email,\n COALESCE(NULLIF(admin_tel,''), NULLIF(admin_bis_tel,''), \nNULLIF(admin_ter_tel,''), 'n/a') AS telephone,\n remise_permanente,\n is_horeca\nFROM societes\nLEFT JOIN (\n SELECT societes.pk_societe_id AS societe_id,\n COUNT(commandes.pk_commande_id) AS nombre,\n max(case when delivery_date_livraison BETWEEN (NOW() \n- '1 year'::interval) AND NOW() then 1 end) AS il_y_avait_un_commande\n FROM commandes\n INNER JOIN clients ON commandes.fk_client_id = \nclients.pk_client_id\n INNER JOIN societes ON clients.fk_societe_id = \nsocietes.pk_societe_id\n GROUP BY societes.pk_societe_id\n ) AS stats_commandes ON stats_commandes.societe_id = \nsocietes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_livraison_id) AS nombre\n FROM societes_adresses_livraison\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_livraison ON \nstats_adresses_livraison.societe_id = societes.pk_societe_id\nLEFT JOIN (\n SELECT fk_societe_id AS societe_id,\n COUNT(pk_adresse_facturation_id) AS nombre\n FROM societes_adresses_facturation\n WHERE is_deleted = FALSE\n GROUP BY fk_societe_id\n ) AS stats_adresses_facturation ON \nstats_adresses_facturation.societe_id = societes.pk_societe_id\nWHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1\nORDER BY LOWER(denomination_commerciale);\n####################################\n\n\nand the query plan :\n\n####################################\n\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=937.72..939.77 rows=821 width=147) (actual \ntime=136.103..136.586 rows=285 loops=1)\n Sort Key: lower((societes.denomination_commerciale)::text)\n -> Merge Left Join (cost=838.25..897.98 rows=821 width=147) \n(actual time=119.986..133.567 rows=285 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \"inner\".societe_id)\n -> Merge Left Join (cost=744.95..776.07 rows=821 width=139) \n(actual time=108.233..117.249 rows=285 loops=1)\n Merge Cond: (\"outer\".pk_societe_id 
= \"inner\".societe_id)\n -> Merge Join (cost=651.92..668.75 rows=821 width=131) \n(actual time=96.664..101.378 rows=285 loops=1)\n Merge Cond: (\"outer\".pk_societe_id = \n\"inner\".societe_id)\n -> Sort (cost=84.77..86.82 rows=821 width=123) \n(actual time=5.215..6.612 rows=816 loops=1)\n Sort Key: societes.pk_societe_id\n -> Seq Scan on societes (cost=0.00..45.03 \nrows=821 width=123) (actual time=0.009..2.569 rows=818 loops=1)\n Filter: (NOT is_deleted)\n -> Sort (cost=567.15..569.40 rows=903 width=16) \n(actual time=91.432..91.926 rows=290 loops=1)\n Sort Key: stats_commandes.societe_id\n -> Subquery Scan stats_commandes \n(cost=473.15..522.81 rows=903 width=16) (actual time=89.009..90.736 \nrows=290 loops=1)\n -> HashAggregate \n(cost=473.15..513.78 rows=903 width=20) (actual time=89.005..89.714 \nrows=290 loops=1)\n Filter: (max(CASE WHEN \n((delivery_date_livraison >= (now() - '1 year'::interval)) AND \n(delivery_date_livraison <= now())) THEN 1 ELSE NULL::integer END) = 1)\n -> Hash Join \n(cost=132.44..423.38 rows=4977 width=20) (actual time=13.531..51.192 \nrows=5972 loops=1)\n Hash Cond: \n(\"outer\".fk_client_id = \"inner\".pk_client_id)\n -> Seq Scan on commandes \n (cost=0.00..211.11 rows=6011 width=20) (actual time=0.004..12.644 \nrows=5972 loops=1)\n -> Hash \n(cost=129.74..129.74 rows=1083 width=16) (actual time=13.511..13.511 \nrows=1082 loops=1)\n -> Hash Join \n(cost=47.29..129.74 rows=1083 width=16) (actual time=3.661..11.094 \nrows=1082 loops=1)\n Hash Cond: \n(\"outer\".fk_societe_id = \"inner\".pk_societe_id)\n -> Seq Scan \non clients (cost=0.00..65.08 rows=1308 width=16) (actual \ntime=0.003..2.655 rows=1308 loops=1)\n -> Hash \n(cost=45.03..45.03 rows=903 width=8) (actual time=3.645..3.645 rows=903 \nloops=1)\n -> Seq \nScan on societes (cost=0.00..45.03 rows=903 width=8) (actual \ntime=0.003..1.847 rows=903 loops=1)\n -> Sort (cost=93.04..95.21 rows=868 width=16) (actual \ntime=11.525..13.049 rows=883 loops=1)\n Sort Key: stats_adresses_livraison.societe_id\n -> Subquery Scan stats_adresses_livraison \n(cost=31.14..50.67 rows=868 width=16) (actual time=4.627..9.393 rows=885 \nloops=1)\n -> HashAggregate (cost=31.14..41.99 \nrows=868 width=16) (actual time=4.622..6.366 rows=885 loops=1)\n -> Seq Scan on \nsocietes_adresses_livraison (cost=0.00..26.19 rows=990 width=16) \n(actual time=0.005..2.259 rows=991 loops=1)\n Filter: (NOT is_deleted)\n -> Sort (cost=93.29..95.46 rows=866 width=16) (actual \ntime=11.667..13.180 rows=901 loops=1)\n Sort Key: stats_adresses_facturation.societe_id\n -> Subquery Scan stats_adresses_facturation \n(cost=31.55..51.04 rows=866 width=16) (actual time=4.482..9.404 rows=903 \nloops=1)\n -> HashAggregate (cost=31.55..42.38 rows=866 \nwidth=16) (actual time=4.478..6.306 rows=903 loops=1)\n -> Seq Scan on \nsocietes_adresses_facturation (cost=0.00..26.84 rows=943 width=16) \n(actual time=0.006..2.174 rows=943 loops=1)\n Filter: (NOT is_deleted)\n Total runtime: 137.650 ms\n####################################\n\nAs usual, I've put a copy on pastebin : <http://pastebin.com/m7611d419>\n\nRegards,\n\n-- \nBruno Baguette\n\n", "msg_date": "Thu, 13 Nov 2008 16:22:47 +0100", "msg_from": "Bruno Baguette <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow SQL query (14-15 seconds)" } ]
[ { "msg_contents": "ok, I have an application that I am trying to speed up. Its a reporting\napplication that makes heavy use of the crosstab function.\n\nHere is some of the setup / configuration details:\nPostgres 8.3.3\nRedHat Enterprise 5.2 (2.6.18 kernel)\nsun x4600, 8 dual core opteron 8218 processors, 32BG, StorageTek SAN\n6 15k FC disks raid 10 for data,\n2 15k FC disks raid 1 for xlog,\n2 10k SAS disks raid 1 for OS\nThe table that I am querying has just under 600k records, 55 columns, 30\nindexes\nThe table is not static, there are several hundred inserts a day into it.\nThis is not the only application that uses postgres on this server. There\nare several other transactional apps as well\n\nhere is an example query\n select \"COL_HEAD\"[1] as site, \"COL_HEAD\"[2] as product_line_description,\n\"COL_HEAD\"[3] as report_sls, \"COL_HEAD\"[4] as fy_period, \"2006\" , \"2007\" ,\n\"2008\" , \"2009\" from public.crosstab('select\nARRAY[site::text,product_line_description::text,report_sls::text,fy_period::text]\nas COL_HEADER, fy_year, sum(invoice_value) from order_data_tbl where\nfy_year is not null group by\nsite::text,product_line_description::text,report_sls::text,fy_period::text,\nfy_year order by\nsite::text,product_line_description::text,report_sls::text,fy_period::text',\n'select fy_year from order_data_tbl where fy_year is not null group by\nfy_year order by fy_year') as order_data_tbl(\"COL_HEAD\" text[], \"2006\"\nnumeric(20,2) , \"2007\" numeric(20,2) , \"2008\" numeric(20,2) , \"2009\"\nnumeric(20,2) )\n\nThe crostab function is taking between 5 and 15 seconds to return. While the\nquery is running one of the cores will be close to 100%, but watching iostat\nmakes be believe that the entire table is cached and none of it is being\nread from disk. Depending on what report is being run the indexes may or may\nnot be of any assistance. In the above query the planner does not use an\nindex. Depending on what the user is looking for some indexes will be used\nbecause there is more specified in the where clause, at which point the\nquery time can be under two seconds. The problem is that most reports that\nget generated with this application don't have a where clause. Are there any\nchanges that can make to my config to speed up these huge aggregating\nqueries?\n\nHere is my postgresql.conf\n\nmax_connections = 1500\nshared_buffers = 8GB\nwork_mem = 2GB\nmaintenance_work_mem = 8GB\nmax_fsm_pages = 2048000\nwal_buffers = 1024kB\ncheckpoint_segments = 256\ncheckpoint_timeout = 10min\neffective_cache_size = 20GB\ndefault_statistics_target = 100\nlog_destination = 'stderr'\nlogging_collector = on\nlog_directory = 'pg_log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1d\nlog_rotation_size = 1GB\nlog_error_verbosity = default\nautovacuum = on\nautovacuum_max_workers = 9\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\nsynchronize_seqscans = on\nlog_min_duration_statement = 250\n\n\n-Jeremiah Elliott\n\nok, I have an application that I am trying to speed up. Its a reporting application that makes heavy use of the crosstab function. 
Here is some of the setup / configuration details:Postgres 8.3.3RedHat Enterprise 5.2 (2.6.18 kernel)\nsun x4600, 8 dual core opteron 8218 processors,  32BG, StorageTek SAN 6 15k FC disks raid 10 for data, 2 15k FC disks raid 1 for xlog, 2 10k SAS disks raid 1 for OS The table that I am querying has just under 600k records, 55 columns, 30 indexes\nThe table is not static, there are several hundred inserts a day into it.This is not the only application that uses postgres on this server. There are several other transactional apps as wellhere is an example query\n  select \"COL_HEAD\"[1] as site, \"COL_HEAD\"[2] as product_line_description, \"COL_HEAD\"[3] as report_sls, \"COL_HEAD\"[4] as fy_period, \"2006\"  , \"2007\"  , \"2008\"  , \"2009\"   from public.crosstab('select  ARRAY[site::text,product_line_description::text,report_sls::text,fy_period::text] as COL_HEADER, fy_year, sum(invoice_value) from order_data_tbl  where  fy_year is not null  group by site::text,product_line_description::text,report_sls::text,fy_period::text, fy_year order by site::text,product_line_description::text,report_sls::text,fy_period::text', 'select fy_year from order_data_tbl where   fy_year is not null   group by fy_year  order by fy_year') as order_data_tbl(\"COL_HEAD\" text[], \"2006\"  numeric(20,2) , \"2007\"  numeric(20,2) , \"2008\"  numeric(20,2) , \"2009\"  numeric(20,2) )\nThe crostab function is taking between 5 and 15 seconds to return. While the query is running one of the cores will be close to 100%, but watching iostat makes be believe that the entire table is cached and none of it is being read from disk. Depending on what report is being run the indexes may or may not be of any assistance. In the above query the planner does not use an index. Depending on what the user is looking for some indexes will be used because there is more specified in the where clause, at which point the query time can be under two seconds. The problem is that most reports that get generated with this application don't have a where clause. Are there any changes that can make to my config to speed up these huge aggregating queries? \nHere is my postgresql.confmax_connections = 1500    shared_buffers = 8GB        work_mem = 2GB            maintenance_work_mem = 8GB    max_fsm_pages = 2048000            wal_buffers = 1024kB            \ncheckpoint_segments = 256checkpoint_timeout = 10min        effective_cache_size = 20GBdefault_statistics_target = 100        log_destination = 'stderr'        logging_collector = on        \nlog_directory = 'pg_log'        log_truncate_on_rotation = onlog_rotation_age = 1d        log_rotation_size = 1GBlog_error_verbosity = defaultautovacuum = onautovacuum_max_workers = 9datestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'lc_monetary = 'en_US.UTF-8'    lc_numeric = 'en_US.UTF-8'    lc_time = 'en_US.UTF-8'        default_text_search_config = 'pg_catalog.english'\nsynchronize_seqscans = onlog_min_duration_statement = 250-Jeremiah Elliott", "msg_date": "Thu, 13 Nov 2008 14:42:32 -0600", "msg_from": "\"Jeremiah Elliott\" <[email protected]>", "msg_from_op": true, "msg_subject": "crosstab speed" }, { "msg_contents": "On Thu, Nov 13, 2008 at 1:42 PM, Jeremiah Elliott <[email protected]> wrote:\n> ok, I have an application that I am trying to speed up. 
Its a reporting\n> application that makes heavy use of the crosstab function.\n>\n> Here is some of the setup / configuration details:\n> Postgres 8.3.3\n> RedHat Enterprise 5.2 (2.6.18 kernel)\n> sun x4600, 8 dual core opteron 8218 processors, 32BG, StorageTek SAN\n> 6 15k FC disks raid 10 for data,\n> 2 15k FC disks raid 1 for xlog,\n> 2 10k SAS disks raid 1 for OS\n> The table that I am querying has just under 600k records, 55 columns, 30\n> indexes\n> The table is not static, there are several hundred inserts a day into it.\n> This is not the only application that uses postgres on this server. There\n> are several other transactional apps as well\n>\n> here is an example query\n> select \"COL_HEAD\"[1] as site, \"COL_HEAD\"[2] as product_line_description,\n> \"COL_HEAD\"[3] as report_sls, \"COL_HEAD\"[4] as fy_period, \"2006\" , \"2007\" ,\n> \"2008\" , \"2009\" from public.crosstab('select\n> ARRAY[site::text,product_line_description::text,report_sls::text,fy_period::text]\n> as COL_HEADER, fy_year, sum(invoice_value) from order_data_tbl where\n> fy_year is not null group by\n> site::text,product_line_description::text,report_sls::text,fy_period::text,\n> fy_year order by\n> site::text,product_line_description::text,report_sls::text,fy_period::text',\n> 'select fy_year from order_data_tbl where fy_year is not null group by\n> fy_year order by fy_year') as order_data_tbl(\"COL_HEAD\" text[], \"2006\"\n> numeric(20,2) , \"2007\" numeric(20,2) , \"2008\" numeric(20,2) , \"2009\"\n> numeric(20,2) )\n\nProviding explain analyze output form that would probably help.\n\n> The crostab function is taking between 5 and 15 seconds to return. While the\n> query is running one of the cores will be close to 100%, but watching iostat\n> makes be believe that the entire table is cached and none of it is being\n> read from disk. Depending on what report is being run the indexes may or may\n> not be of any assistance. In the above query the planner does not use an\n> index. Depending on what the user is looking for some indexes will be used\n> because there is more specified in the where clause, at which point the\n> query time can be under two seconds. The problem is that most reports that\n> get generated with this application don't have a where clause. Are there any\n> changes that can make to my config to speed up these huge aggregating\n> queries?\n\nEither get a faster CPU (incremental change at best) or rethink your\nqueries or pre-create the output ahead of time with either a\nmaterialized view or in a table your app knows to use. Most other\noptions won't help that much if you're running over a metric ton of\ndata at a shot.\n\n> Here is my postgresql.conf\n>\n> max_connections = 1500\n> work_mem = 2GB\n\nThese two settings are kind of incompatble. It means you expect to\nupwards of a thousand users, and each one can grab 8G for each sort\nthey run. If they're large datasets with multiple sorts required,\neven a handful of queries could put your machine in a swap storm and\nbasically your own queries would DOS the machine.\n\nIt's better, if you have a lot of users who don't need large work_mem\nto set it to something more sane, like 2 or 4 Meg, and then issue a\nset work_mem=xxxx when you run your single monstrous query.\n\n> maintenance_work_mem = 8GB\n> autovacuum_max_workers = 9\n\nThese two are also quite dangerous together, as you can have each\nthread grab 8Gigs at a time. 
(Someone correct me if I'm wrong, but\nI'm pretty sure maint_work_mem is per vacuum thread).\n\nGenerally you'll not see a big return after the first few hundreds of\nmegabytes. Same goes for work_mem.\n\nIf you repeat the same basic query, or parts of it over and over, it\nmay be faster to look into building some materialized views on top of\nthe tables to use for that. Jonathan Gardner wrote an excellent\ntutorial on how to \"roll your own\" that's located here:\n http://jonathangardner.net/tech/w/PostgreSQL/Materialized_Views\n", "msg_date": "Thu, 13 Nov 2008 14:41:07 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crosstab speed" }, { "msg_contents": "Jeremiah Elliott wrote:\n> ok, I have an application that I am trying to speed up. Its a reporting \n> application that makes heavy use of the crosstab function.\n\n<snip>\n\n> here is an example query\n\n> \n> The crostab function is taking between 5 and 15 seconds to return.\n\nPlease run the two embedded queries independently, i.e.\n\nselect\nARRAY[site::text,product_line_description::text,report_sls::text,fy_period::text] \nas COL_HEADER, fy_year, sum(invoice_value) from order_data_tbl \n where fy_year is not null group by \nsite::text,product_line_description::text,report_sls::text,fy_period::text, \nfy_year order by \nsite::text,product_line_description::text,report_sls::text,fy_period::text;\n\n-- and --\n\nselect fy_year from order_data_tbl\n where fy_year is not null\n group by fy_year\n order by fy_year;\n\nHow long does each take? crosstab cannot run any faster than the sum of \nthese two queries run on their own.\n\nIf the second one doesn't change often, can you pre-calculate it, \nperhaps once a day?\n\nJoe\n", "msg_date": "Thu, 13 Nov 2008 14:06:19 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: crosstab speed" } ]
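For readers who want to act on the advice in this thread, here is a minimal sketch of the two suggestions: raising work_mem only for the reporting session instead of globally, and precomputing the heavy aggregate into a summary table that the crosstab source query reads instead of scanning order_data_tbl every time. The summary table name (order_data_summary) and the 512MB value are illustrative assumptions; the column list is the one from the query in the thread.

  -- raise work_mem only for this session; the global setting can stay small
  SET work_mem = '512MB';

  -- hand-rolled materialized view (8.3 has no built-in materialized views):
  -- precompute the aggregate once, then refresh it after the daily inserts
  CREATE TABLE order_data_summary AS
  SELECT site::text, product_line_description::text, report_sls::text,
         fy_period::text, fy_year, sum(invoice_value) AS invoice_value
  FROM   order_data_tbl
  WHERE  fy_year IS NOT NULL
  GROUP  BY 1, 2, 3, 4, 5;

  -- nightly refresh: empty the summary and rerun the same aggregate
  TRUNCATE order_data_summary;
  INSERT INTO order_data_summary
  SELECT site::text, product_line_description::text, report_sls::text,
         fy_period::text, fy_year, sum(invoice_value)
  FROM   order_data_tbl
  WHERE  fy_year IS NOT NULL
  GROUP  BY 1, 2, 3, 4, 5;

The crosstab source query can then read the pre-aggregated rows from order_data_summary, which is typically much smaller than the 600k-row base table.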
[ { "msg_contents": "I have a database in a production server (8.1.9) with to schema \ncontaining the sames table same index, same every thing, but with \ndifferent data. When I execute a query in one schema, it take much more \ntime to execute then the other schema. I've issue the query plan and \nit's different from one schema to the other. I was assuming that is was \nbecause the contents of the table where different so I've try the query \ninto a test database into another server (8.3.3) and with both schema, I \nget the same query plan and they both work fine\n\nI'm wondering where to start searching to fix this problem\n\nHere is my query:\n\n SELECT bd.component_item_id AS item_id,\n rspec('schema_name', bd.component_item_id, \nbd.component_control_id) AS control_id,\n adjustdate(m.date_due - avior.item_leadtime('schema_name', \nbd.item_id, bd.control_id, 0)*7, m.date_due) AS date_due,\n bd.item_id AS to_item_id,\n bd.control_id AS to_control_id, m.quantity * \ntotalquantity(bd.quantity,\n CASE\n \nWHEN substring(bd.component_item_id, 1, 1) = 'F' THEN bd.size1 + 1\n \nELSE bd.size1\n END,\n \nCASE WHEN substring(bd.component_item_id, 1, 1) = 'F' THEN bd.size2 + 1\n \nELSE bd.size2\n \nEND) / bd.quantity_produce * i.mfg_conv_factor AS quantity\n FROM schema_name.mrp m\n JOIN schema_name.bom_detail bd\n ON bd.item_id = m.item_id\n AND bd.control_id = rspec('schema_name', m.item_id, m.control_id)\n AND NOT bd.rework,\n schema_name.item i\n WHERE i.item_id=m.item_id\n AND NOT bd.item_supplied\n AND bd.component_item_id = 'some value' ;\n\n\nProduction server schema 1 query plan:\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=569.23..634.43 rows=1 width=121) (actual \ntime=1032.811..1032.811 rows=0 loops=1)\n -> Merge Join (cost=569.23..628.36 rows=1 width=127) (actual \ntime=1032.806..1032.806 rows=0 loops=1)\n Merge Cond: ((\"outer\".\"?column5?\" = \"inner\".item_id) AND \n(\"outer\".\"?column6?\" = \"inner\".control_id))\n -> Sort (cost=488.89..503.62 rows=5892 width=39) (actual \ntime=1032.736..1032.736 rows=1 loops=1)\n Sort Key: (m.item_id)::text, (rspec('granby'::text, \nm.item_id, m.control_id))::text\n -> Seq Scan on mrp m (cost=0.00..119.92 rows=5892 \nwidth=39) (actual time=0.343..939.462 rows=5892 loops=1)\n -> Sort (cost=80.34..80.39 rows=21 width=97) (actual \ntime=0.059..0.059 rows=0 loops=1)\n Sort Key: bd.item_id, bd.control_id\n -> Bitmap Heap Scan on bom_detail bd (cost=2.08..79.87 \nrows=21 width=97) (actual time=0.038..0.038 rows=0 loops=1)\n Recheck Cond: ((component_item_id)::text = \n'C294301-1'::text)\n Filter: ((NOT rework) AND (NOT item_supplied))\n -> Bitmap Index Scan on i_bomdetail_component \n(cost=0.00..2.08 rows=23 width=0) (actual time=0.031..0.031 rows=0 loops=1)\n Index Cond: ((component_item_id)::text = \n'C294301-1'::text)\n -> Index Scan using pkey_item on item i (cost=0.00..6.01 rows=1 \nwidth=31) (never executed)\n Index Cond: (i.item_id = (\"outer\".item_id)::text)\n Total runtime: 1034.204 ms\n(16 rows)\n\nProduction server schema 2 query plan:\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=133.42..793.12 rows=1 width=123) (actual \ntime=0.130..0.130 rows=0 loops=1)\n -> Merge Join (cost=133.42..787.05 rows=1 width=130) (actual \ntime=0.126..0.126 rows=0 loops=1)\n Merge Cond: 
((\"outer\".item_id)::text = \"inner\".item_id)\n Join Filter: (\"inner\".control_id = (rspec('laval'::text, \n\"outer\".item_id, \"outer\".control_id))::text)\n -> Index Scan using pkey_mrp on mrp m (cost=0.00..634.29 \nrows=7501 width=40) (actual time=0.013..0.013 rows=1 loops=1)\n -> Sort (cost=133.42..133.51 rows=34 width=99) (actual \ntime=0.105..0.105 rows=0 loops=1)\n Sort Key: bd.item_id\n -> Bitmap Heap Scan on bom_detail bd (cost=2.13..132.56 \nrows=34 width=99) (actual time=0.099..0.099 rows=0 loops=1)\n Recheck Cond: ((component_item_id)::text = \n'C294301-1'::text)\n Filter: ((NOT rework) AND (NOT item_supplied))\n -> Bitmap Index Scan on i_bomdetail_component \n(cost=0.00..2.13 rows=37 width=0) (actual time=0.093..0.093 rows=0 loops=1)\n Index Cond: ((component_item_id)::text = \n'C294301-1'::text)\n -> Index Scan using pkey_item on item i (cost=0.00..6.01 rows=1 \nwidth=29) (never executed)\n Index Cond: (i.item_id = (\"outer\".item_id)::text)\n Total runtime: 0.305 ms\n(15 rows)\n\n\nTest server schema 1 query plan:\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=3.43..367.63 rows=1 width=92) (actual \ntime=0.248..0.248 rows=0 loops=1)\n -> Nested Loop (cost=3.43..360.30 rows=1 width=98) (actual \ntime=0.243..0.243 rows=0 loops=1)\n Join Filter: ((rspec('granby'::text, m.item_id, \nm.control_id))::text = bd.control_id)\n -> Bitmap Heap Scan on bom_detail bd (cost=3.43..62.59 \nrows=21 width=74) (actual time=0.240..0.240 rows=0 loops=1)\n Recheck Cond: ((component_item_id)::text = 'C294301-1'::text)\n Filter: ((NOT rework) AND (NOT item_supplied))\n -> Bitmap Index Scan on i_bomdetail_component \n(cost=0.00..3.43 rows=23 width=0) (actual time=0.234..0.234 rows=0 loops=1)\n Index Cond: ((component_item_id)::text = \n'C294301-1'::text)\n -> Index Scan using i_mrp_mrp_itm on mrp m (cost=0.00..9.14 \nrows=19 width=30) (never executed)\n Index Cond: ((m.item_id)::text = bd.item_id)\n -> Index Scan using pkey_item on item i (cost=0.00..6.27 rows=1 \nwidth=24) (never executed)\n Index Cond: (i.item_id = bd.item_id)\n Total runtime: 0.717 ms\n(13 rows)\n\nTest server schema 2 query plan:\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=3.54..381.94 rows=1 width=92) (actual \ntime=0.273..0.273 rows=0 loops=1)\n -> Nested Loop (cost=3.54..374.61 rows=1 width=100) (actual \ntime=0.269..0.269 rows=0 loops=1)\n Join Filter: ((rspec('laval'::text, m.item_id, \nm.control_id))::text = bd.control_id)\n -> Bitmap Heap Scan on bom_detail bd (cost=3.54..99.80 \nrows=33 width=75) (actual time=0.265..0.265 rows=0 loops=1)\n Recheck Cond: ((component_item_id)::text = 'C294301-1'::text)\n Filter: ((NOT rework) AND (NOT item_supplied))\n -> Bitmap Index Scan on i_bomdetail_component \n(cost=0.00..3.53 rows=36 width=0) (actual time=0.259..0.259 rows=0 loops=1)\n Index Cond: ((component_item_id)::text = \n'C294301-1'::text)\n -> Index Scan using i_mrp_mrp_itm on mrp m (cost=0.00..6.74 \nrows=6 width=31) (never executed)\n Index Cond: ((m.item_id)::text = bd.item_id)\n -> Index Scan using pkey_item on item i (cost=0.00..6.28 rows=1 \nwidth=21) (never executed)\n Index Cond: (i.item_id = bd.item_id)\n Total runtime: 0.498 ms\n(13 rows)\n\n\nI'm also wondering why in the production server schema 1 query plan, I'm \ngetting \"outer\".\"?column5?\" 
instead of \"outer\".\"item_id\"\n\nIt's also to note that schema 1 contain far less date then schema 2 in \nthe order of 1 to 4\n\nthanks", "msg_date": "Fri, 14 Nov 2008 11:14:18 -0500", "msg_from": "Patrice Beliveau <[email protected]>", "msg_from_op": true, "msg_subject": "Difference in query plan" }, { "msg_contents": "Patrice Beliveau wrote:\n> I have a database in a production server (8.1.9) with to schema\n> containing the sames table same index, same every thing, but with\n> different data. When I execute a query in one schema, it take much more\n> time to execute then the other schema.\n[snip]\n> I'm wondering where to start searching to fix this problem\n\n> Production server schema 1 query plan:\n> Nested Loop (cost=569.23..634.43 rows=1 width=121) (actual\n> time=1032.811..1032.811 rows=0 loops=1)\n[snip]\n> Total runtime: 1034.204 ms\n\n> Production server schema 2 query plan:\n> Nested Loop (cost=133.42..793.12 rows=1 width=123) (actual\n> time=0.130..0.130 rows=0 loops=1)\n[snip]\n> Total runtime: 0.305 ms\n\nWell there's something strange - the estimated costs are fairly similar\n(643.43 vs 793.12) but the times are clearly very different (1034 vs 0.3ms)\n\nThe suspicious line from the first plan is:\n> -> Seq Scan on mrp m (cost=0.00..119.92 rows=5892\n> width=39) (actual time=0.343..939.462 rows=5892 loops=1)\n\nThis is taking up almost all the time in the query and yet only seems to\nbe scanning 5892 rows.\n\nRun a vacuum verbose against table \"mrp\" and see if it's got a lot of\ndead rows. If it has, run VACUUM FULL and REINDEX against it and see if\nthat solves your problem.\n\nI'm guessing you have / had a long-running transaction interfering with\nvacuum on this table, or perhaps a bulk update/delete?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 14 Nov 2008 16:47:41 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference in query plan" } ]
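A compact version of the diagnosis and repair steps suggested above, using the thread's schema_name placeholder for whichever schema is slow:

  -- look for dead-row bloat on the table the seq scan crawls through
  VACUUM VERBOSE schema_name.mrp;

  -- if it is badly bloated, compact it, rebuild its indexes and refresh stats
  VACUUM FULL schema_name.mrp;
  REINDEX TABLE schema_name.mrp;
  ANALYZE schema_name.mrp;

All of these commands exist on 8.1; note that VACUUM FULL and REINDEX take exclusive locks, so they are best run in a maintenance window.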
[ { "msg_contents": "Thanks,\n\nI'm already doing a vacuum full every night on all database, but the REINDEX fix it and now it's working fine\n\nBut this raise a question\n\n1) This table is cleared every night and recomputed, does this mean that I should REINDEX every night also\n\n2) Why this thing didn't happen in the other schema\n\nThanks again\n\nPatrice Beliveau wrote:\n\n> > I have a database in a production server (8.1.9) with to schema\n> > containing the sames table same index, same every thing, but with\n> > different data. When I execute a query in one schema, it take much more\n> > time to execute then the other schema.\n> \n[snip]\n\n> > I'm wondering where to start searching to fix this problem\n> \n\n> > Production server schema 1 query plan:\n> > Nested Loop (cost=569.23..634.43 rows=1 width=121) (actual\n> > time=1032.811..1032.811 rows=0 loops=1)\n> \n[snip]\n\n> > Total runtime: 1034.204 ms\n> \n\n> > Production server schema 2 query plan:\n> > Nested Loop (cost=133.42..793.12 rows=1 width=123) (actual\n> > time=0.130..0.130 rows=0 loops=1)\n> \n[snip]\n\n> > Total runtime: 0.305 ms\n> \n\nWell there's something strange - the estimated costs are fairly similar\n(643.43 vs 793.12) but the times are clearly very different (1034 vs 0.3ms)\n\nThe suspicious line from the first plan is:\n\n> > -> Seq Scan on mrp m (cost=0.00..119.92 rows=5892\n> > width=39) (actual time=0.343..939.462 rows=5892 loops=1)\n> \n\nThis is taking up almost all the time in the query and yet only seems to\nbe scanning 5892 rows.\n\nRun a vacuum verbose against table \"mrp\" and see if it's got a lot of\ndead rows. If it has, run VACUUM FULL and REINDEX against it and see if\nthat solves your problem.\n\nI'm guessing you have / had a long-running transaction interfering with\nvacuum on this table, or perhaps a bulk update/delete?\n\n-- Richard Huxton Archonet Ltd\n-- Sent via pgsql-performance mailing list \n([email protected]) To make changes to your subscription: \nhttp://www.postgresql.org/mailpref/pgsql-performance .", "msg_date": "Fri, 14 Nov 2008 12:07:45 -0500", "msg_from": "Patrice Beliveau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Difference in query plan" }, { "msg_contents": "Patrice Beliveau wrote:\n> Thanks,\n> \n> I'm already doing a vacuum full every night on all database, but the\n> REINDEX fix it and now it's working fine\n\nAre you sure it was the REINDEX? The plan was using a sequential scan.\n\n> But this raise a question\n> \n> 1) This table is cleared every night and recomputed, does this mean that\n> I should REINDEX every night also\n\nLooks like you should. Or drop the indexes, load the data, re-create the\nindexes, that can be quicker.\n\n> 2) Why this thing didn't happen in the other schema\n\nHave you re-loaded schema1 more often? It might even be the particular\norder that rows are loaded - a btree can become \"unbalanced\" sometimes.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 14 Nov 2008 17:14:17 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference in query plan" } ]
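Since the table is emptied and recomputed every night anyway, the reload job itself can prevent the bloat, roughly along these lines (a sketch only; the reload step stands in for whatever the existing nightly job does):

  TRUNCATE TABLE schema_name.mrp;  -- unlike DELETE, leaves no dead rows behind
  -- ... reload here, e.g. INSERT INTO schema_name.mrp SELECT ... or COPY ...
  REINDEX TABLE schema_name.mrp;   -- rebuild the btrees after the bulk load
  ANALYZE schema_name.mrp;         -- refresh planner statistics for the new data

Dropping the indexes before the load and recreating them afterwards, as suggested above, is usually faster still, at the cost of a slightly more involved script.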
[ { "msg_contents": "Hi folks,\n\nI have a simple table that keeps track of a user's access history.\nIt has a a few fields, but the important ones are:\n - ownerId: the user's ID, a int8\n - accessTS: the timestamp of the record\n\nThe table right now is small, only 1942 records.\nThe user I test with (10015) has only 89 entries.\n\nWhat I want is to get the last 5 accesses of a user:\n SELECT * FROM triphistory WHERE ownerId = 10015 ORDER BY accessTS DESC LIMIT 5\n \nIf I create a composite index *and* analyze:\n create index IDX_TRIP_HISTORY_OWNER_ACCESS_TS on tripHistory (ownerId, accessTS);\n ANALYZE triphistory;\n\nIt takes 0.091s (!):\nperpedes_db=# EXPLAIN ANALYZE SELECT * FROM triphistory WHERE ownerId = 10015 ORDER BY accessTS DESC LIMIT 5;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..7.99 rows=5 width=106) (actual time=0.024..0.042 rows=5 loops=1)\n -> Index Scan Backward using idx_trip_history_owner_access_ts on triphistory (cost=0.00..142.20 rows=89 width=106) (actual time=0.021..0.034 rows=5 loops=1)\n Index Cond: (ownerid = 10015)\n Total runtime: 0.091 ms\n(4 rows)\n\n\nBTW, this is after several runs of the query, shouldn't all this stuff be in memory?\n\nThis is not a fast machine, but this seems rather excessive, no? \n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 17 Nov 2008 10:53:17 -0500", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Bad performance on simple query" }, { "msg_contents": "On Monday 17 November 2008, Dimi Paun <[email protected]> wrote:\n>> It takes 0.091s (!):\n> perpedes_db=# EXPLAIN ANALYZE SELECT * FROM triphistory WHERE ownerId =\n> 10015 ORDER BY accessTS DESC LIMIT 5; QUERY PLAN\n> -------------------------------------------------------------------------\n>--------------------------------------------------------------------------\n>--------------- Limit (cost=0.00..7.99 rows=5 width=106) (actual\n> time=0.024..0.042 rows=5 loops=1) -> Index Scan Backward using\n> idx_trip_history_owner_access_ts on triphistory (cost=0.00..142.20\n> rows=89 width=106) (actual time=0.021..0.034 rows=5 loops=1) Index Cond:\n> (ownerid = 10015)\n> Total runtime: 0.091 ms\n\nThat's 0.091 milliseconds (0.000091 seconds).\n\n\n-- \nCorporations will ingest natural resources and defecate garbage until all \nresources are depleted, debt can no longer be repaid and our money becomes \nworthless - Jay Hanson\n", "msg_date": "Mon, 17 Nov 2008 08:28:51 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "On Mon, Nov 17, 2008 at 8:53 AM, Dimi Paun <[email protected]> wrote:\n> Hi folks,\n>\n> I have a simple table that keeps track of a user's access history.\n> It has a a few fields, but the important ones are:\n> - ownerId: the user's ID, a int8\n> - accessTS: the timestamp of the record\n>\n> The table right now is small, only 1942 records.\n> The user I test with (10015) has only 89 entries.\n>\n> What I want is to get the last 5 accesses of a user:\n> SELECT * FROM triphistory WHERE ownerId = 10015 ORDER BY accessTS DESC LIMIT 5\n>\n> If I create a composite index *and* analyze:\n> create index IDX_TRIP_HISTORY_OWNER_ACCESS_TS on tripHistory (ownerId, accessTS);\n> ANALYZE triphistory;\n>\n> It takes 0.091s (!):\n> perpedes_db=# EXPLAIN ANALYZE SELECT * FROM 
triphistory WHERE ownerId = 10015 ORDER BY accessTS DESC LIMIT 5;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..7.99 rows=5 width=106) (actual time=0.024..0.042 rows=5 loops=1)\n> -> Index Scan Backward using idx_trip_history_owner_access_ts on triphistory (cost=0.00..142.20 rows=89 width=106) (actual time=0.021..0.034 rows=5 loops=1)\n> Index Cond: (ownerid = 10015)\n> Total runtime: 0.091 ms\n> (4 rows)\n>\n>\n> BTW, this is after several runs of the query, shouldn't all this stuff be in memory?\n\nAre you saying it's excessive you need the compound query? Cause\nthat's running in 91microseconds as pointed out by Alan.\n", "msg_date": "Mon, 17 Nov 2008 09:53:51 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "\nOn Mon, 2008-11-17 at 09:53 -0700, Scott Marlowe wrote:\n> \n> Are you saying it's excessive you need the compound query? Cause\n> that's running in 91microseconds as pointed out by Alan.\n\nOf course, my bad. I read that as 91ms (<blush/>).\n\nConfusion came from the fact that pgadminIII reports the query\ntaking 20-40ms, so I read the 0.091 as seconds not ms.\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 17 Nov 2008 12:07:01 -0500", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "On Mon, Nov 17, 2008 at 10:07 AM, Dimi Paun <[email protected]> wrote:\n>\n> On Mon, 2008-11-17 at 09:53 -0700, Scott Marlowe wrote:\n>>\n>> Are you saying it's excessive you need the compound query? Cause\n>> that's running in 91microseconds as pointed out by Alan.\n>\n> Of course, my bad. I read that as 91ms (<blush/>).\n>\n> Confusion came from the fact that pgadminIII reports the query\n> taking 20-40ms, so I read the 0.091 as seconds not ms.\n\nAhhh. Keep in mind that if you just run the query, pgadminIII will\ntell you how long it took to run AND return all the data across the\nnetwork, so it will definitely take longer then. But most of that's\nnetwork io wait so it's not a real issue unless you're saturating your\nnetwork.\n", "msg_date": "Mon, 17 Nov 2008 10:16:32 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "\nOn Mon, 2008-11-17 at 10:16 -0700, Scott Marlowe wrote:\n> Ahhh. Keep in mind that if you just run the query, pgadminIII will\n> tell you how long it took to run AND return all the data across the\n> network, so it will definitely take longer then. But most of that's\n> network io wait so it's not a real issue unless you're saturating your\n> network.\n\nBut that is brutal -- there's no way it can take 20ms for a request \nacross an unloaded network.\n\nMoreover, I got something like this:\n\n pgadminIII | pgsql\nw/o index: 45ms 0.620ms\nw/ index 20ms 0.091ms\n\nHow now I try to replicate, and I get 45ms in both cases. 
This is\nvery misleading...\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 17 Nov 2008 12:31:02 -0500", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "On Mon, Nov 17, 2008 at 10:31 AM, Dimi Paun <[email protected]> wrote:\n>\n> On Mon, 2008-11-17 at 10:16 -0700, Scott Marlowe wrote:\n>> Ahhh. Keep in mind that if you just run the query, pgadminIII will\n>> tell you how long it took to run AND return all the data across the\n>> network, so it will definitely take longer then. But most of that's\n>> network io wait so it's not a real issue unless you're saturating your\n>> network.\n>\n> But that is brutal -- there's no way it can take 20ms for a request\n> across an unloaded network.\n>\n> Moreover, I got something like this:\n>\n> pgadminIII | pgsql\n> w/o index: 45ms 0.620ms\n> w/ index 20ms 0.091ms\n>\n> How now I try to replicate, and I get 45ms in both cases. This is\n> very misleading...\n\nI'm guessing a fair bit of that time is pgadminIII prettifying the\noutput for you, etc. I.e. it's not all transfer time. Hard to say\nwithout hooking some kind of profiler in pgadminIII. Is psql running\nlocal and pgadminIII remotely? Or are they both remote? If both psql\nand pgadminIII are remote (i.e. same basic circumstances) then it's\ngot to be a difference in the client causing the extra time. OR is\nthis output of explain analyze?\n", "msg_date": "Mon, 17 Nov 2008 10:40:12 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "\nOn Nov 17, 2008, at 12:40 PM, Scott Marlowe wrote:\n\n> On Mon, Nov 17, 2008 at 10:31 AM, Dimi Paun <[email protected]> wrote:\n>>\n>> On Mon, 2008-11-17 at 10:16 -0700, Scott Marlowe wrote:\n>>> Ahhh. Keep in mind that if you just run the query, pgadminIII will\n>>> tell you how long it took to run AND return all the data across the\n>>> network, so it will definitely take longer then. But most of that's\n>>> network io wait so it's not a real issue unless you're saturating \n>>> your\n>>> network.\n>>\n>> But that is brutal -- there's no way it can take 20ms for a request\n>> across an unloaded network.\n>>\n>> Moreover, I got something like this:\n>>\n>> pgadminIII | pgsql\n>> w/o index: 45ms 0.620ms\n>> w/ index 20ms 0.091ms\n>>\n>> How now I try to replicate, and I get 45ms in both cases. This is\n>> very misleading...\n>\n> I'm guessing a fair bit of that time is pgadminIII prettifying the\n> output for you, etc. I.e. it's not all transfer time. Hard to say\n> without hooking some kind of profiler in pgadminIII. Is psql running\n> local and pgadminIII remotely? Or are they both remote? If both psql\n> and pgadminIII are remote (i.e. same basic circumstances) then it's\n> got to be a difference in the client causing the extra time. OR is\n> this output of explain analyze?\n>\n\nSide note: I haven't seen pgAdminIII never show a time below 20ms (the \ntime on the bottom right corner).\n\nWhen I do a query like this : select 1; it takes according to \npgAdminIII around 20ms. 
(whatever that time is)\n\nwhat I normally do to find my real query time is put and explain \nanalyse in front of my query to know to real query time.\n\nRies\n\n\n\n\n\n\n", "msg_date": "Mon, 17 Nov 2008 12:45:01 -0500", "msg_from": "ries van Twisk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "\nOn Mon, 2008-11-17 at 10:40 -0700, Scott Marlowe wrote:\n> I'm guessing a fair bit of that time is pgadminIII prettifying the\n> output for you, etc. I.e. it's not all transfer time. Hard to say\n> without hooking some kind of profiler in pgadminIII. Is psql running\n> local and pgadminIII remotely? Or are they both remote? If both psql\n> and pgadminIII are remote (i.e. same basic circumstances) then it's\n> got to be a difference in the client causing the extra time. OR is\n> this output of explain analyze?\n\nWith \\timing on I get basically the same output (local vs remote)\nin psql (0.668ms vs. 0.760ms). More like it.\n\n\nWTH is pgadminIII reporting?!?\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 17 Nov 2008 13:14:14 -0500", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad performance on simple query" }, { "msg_contents": "On Mon, Nov 17, 2008 at 6:14 PM, Dimi Paun <[email protected]> wrote:\n>\n> On Mon, 2008-11-17 at 10:40 -0700, Scott Marlowe wrote:\n>> I'm guessing a fair bit of that time is pgadminIII prettifying the\n>> output for you, etc. I.e. it's not all transfer time. Hard to say\n>> without hooking some kind of profiler in pgadminIII. Is psql running\n>> local and pgadminIII remotely? Or are they both remote? If both psql\n>> and pgadminIII are remote (i.e. same basic circumstances) then it's\n>> got to be a difference in the client causing the extra time. OR is\n>> this output of explain analyze?\n>\n> With \\timing on I get basically the same output (local vs remote)\n> in psql (0.668ms vs. 0.760ms). More like it.\n>\n>\n> WTH is pgadminIII reporting?!?\n\nExactly what it's supposed to be, however it's using libpq's\nasynchronous query interface and has to pass the query result through\nthe wxWidgets event handling system, both of which seem to add a few\nmilliseconds to the overall query time from the quick testing I've\njust done. In a GUI app like pgAdmin, we need use this kind of\narchitecture to allow the UI to continue processing events (such as\nbutton clicks, redraws etc), and to allow multiple windows to work\nindependently without one query locking up the whole app.\n\nNote that the rendering time that Tom mentioned the other day which\nused to confuse things has not been an issue for a couple of years -\nthat was dependent on resultset size and could lead to much bigger\nvariations. that was fixed by having libpq act as a virtual data store\nfor the UI instead of transferring data from the PGresult to the data\ngrid's own data store.\n\nI think the bottom line is that you cannot compare psql and pgAdmin's\ntimings because the architectures of the two apps are very different.\nFurther, pgAdmin isn't the best choice for micro-optimisation of\nextremely fast queries.\n\n-- \nDave Page\nEnterpriseDB UK: http://www.enterprisedb.com\n", "msg_date": "Mon, 17 Nov 2008 20:47:48 +0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance on simple query" } ]
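For anyone repeating this comparison, the server-side time can be isolated from client overhead like this (the query and index are the ones from the thread):

  -- psql: toggle client-measured round-trip time for each statement
  \timing

  -- server-side execution time only, independent of psql or pgAdmin
  EXPLAIN ANALYZE
  SELECT * FROM triphistory
  WHERE ownerId = 10015
  ORDER BY accessTS DESC
  LIMIT 5;

The "Total runtime" line in the EXPLAIN ANALYZE output is what the 0.091 ms figure above refers to; anything on top of that is client, network or rendering overhead.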
[ { "msg_contents": "Hi. I have a Perl script whose main loop generates thousands of SQL updates\nof the form\nUPDATE edge SET keep = true WHERE node1 IN ( $node_list ) AND node2 =\n$node_id;\n\n...where here $node_list stands for a comma-separated list of integers, and\n$node_id stands for some integer.\n\nThe list represented by $node_list can be fairly long (on average it has\naround 900 entries, and can be as long as 30K entries), and I'm concerned\nabout the performance cost of testing for inclusion in such a long list. Is\nthis done by a sequential search? If so, is there a better way to write\nthis query? (FWIW, I have two indexes on the edge table using btree( node1\n) and btree( node2 ), respectively.)\n\nAlso, assuming that the optimal way to write the query depends on the length\nof $node_list, how can I estimate the \"critical length\" at which I should\nswitch from one form of the query to the other?\n\nTIA!\n\nKynn\n\nHi.  I have a Perl script whose main loop generates thousands of SQL updates of the formUPDATE edge SET keep = true WHERE node1 IN ( $node_list ) AND node2 = $node_id;\n...where here $node_list stands for a comma-separated list of integers, and $node_id stands for some integer.\nThe list represented by $node_list can be fairly long (on average it has around 900 entries, and can be as long as 30K entries), and I'm concerned about the performance cost of testing for inclusion in such a long list.  Is this done by a sequential search?  If so, is there a better way to write this query?  (FWIW, I have two indexes on the edge table using btree( node1 ) and btree( node2 ), respectively.)\nAlso, assuming that the optimal way to write the query depends on the length of $node_list, how can I estimate the \"critical length\" at which I should switch from one form of the query to the other?\nTIA!Kynn", "msg_date": "Tue, 18 Nov 2008 10:53:19 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance and IN clauses" }, { "msg_contents": "On Tue, 18 Nov 2008, Kynn Jones wrote:\n> Also, assuming that the optimal way to write the query depends on the length of $node_list, how can I estimate the\n> \"critical length\" at which I should switch from one form of the query to the other?\n\nIn the past, I have found the fastest way to do this was to operate on \ngroups of a bit less than a thousand values, and issue one query per \ngroup. Of course, Postgres may have improved since then, so I'll let more \nknowledgable people cover that for me.\n\nMatthew\n\n-- \n Heat is work, and work's a curse. All the heat in the universe, it's\n going to cool down, because it can't increase, then there'll be no\n more work, and there'll be perfect peace. -- Michael Flanders\n", "msg_date": "Tue, 18 Nov 2008 16:12:24 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance and IN clauses" }, { "msg_contents": "I bet there is no 'critical' length - this is just another case of index\nscan vs. seqscan. 
The efficiency depends on the size of the table / row,\namount of data in the table, variability of the column used in the IN\nclause, etc.\n\nSplitting the query with 1000 items into 10 separate queries, the smaller\nqueries may be faster but the total time consumed may be actually higher.\nSomething like\n\n10 * (time of small query) + (time to combine them) > (time of large query)\n\nIf the performance of the 'split' solution is actually better than the\noriginal query, it just means that the planner does not use index scan\nwhen it actually should. That means that either\n\n(a) the planner is not smart enough\n(b) it has not current statistics of the table (run ANALYZE on the table)\n(c) the statistics are not detailed enough (ALTER TABLE ... SET STATICTICS)\n(d) the cost variables are not set properly (do not match the hardware -\ndecreate index scan cost / increase seq scan cost)\n\nregards\nTomas\n\n> On Tue, 18 Nov 2008, Kynn Jones wrote:\n>> Also, assuming that the optimal way to write the query depends on the\n>> length of $node_list, how can I estimate the\n>> \"critical length\" at which I should switch from one form of the query to\n>> the other?\n>\n> In the past, I have found the fastest way to do this was to operate on\n> groups of a bit less than a thousand values, and issue one query per\n> group. Of course, Postgres may have improved since then, so I'll let more\n> knowledgable people cover that for me.\n>\n> Matthew\n>\n> --\n> Heat is work, and work's a curse. All the heat in the universe, it's\n> going to cool down, because it can't increase, then there'll be no\n> more work, and there'll be perfect peace. -- Michael Flanders\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Tue, 18 Nov 2008 17:38:45 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance and IN clauses" }, { "msg_contents": "\nOn Tue, 2008-11-18 at 17:38 +0100, [email protected] wrote:\n> I bet there is no 'critical' length - this is just another case of\n> index\n> scan vs. seqscan. The efficiency depends on the size of the table /\n> row,\n> amount of data in the table, variability of the column used in the IN\n> clause, etc.\n> \n> Splitting the query with 1000 items into 10 separate queries, the\n> smaller\n> queries may be faster but the total time consumed may be actually\n> higher.\n> Something like\n> \n> 10 * (time of small query) + (time to combine them) > (time of large\n> query)\n> \n> If the performance of the 'split' solution is actually better than the\n> original query, it just means that the planner does not use index scan\n> when it actually should. That means that either\n> \n> (a) the planner is not smart enough\n> (b) it has not current statistics of the table (run ANALYZE on the\n> table)\n> (c) the statistics are not detailed enough (ALTER TABLE ... SET\n> STATICTICS)\n> (d) the cost variables are not set properly (do not match the hardware\n> -\n> decreate index scan cost / increase seq scan cost)\n> \n> regards\n> Tomas\n\nI know that it's much faster (for us) to run many smaller queries than\none large query, and I think that it's primarily because of your reason\na. Most of our problems come from Pg misunderstanding the results of a\njoin and making a bad plan decision. 
Batching dramatically reduces the\nliklihood of this.\n\n-Mark\n\n", "msg_date": "Tue, 18 Nov 2008 15:29:25 -0800", "msg_from": "Mark Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance and IN clauses" }, { "msg_contents": "> I know that it's much faster (for us) to run many smaller queries than\n> one large query, and I think that it's primarily because of your reason\n> a. Most of our problems come from Pg misunderstanding the results of a\n> join and making a bad plan decision. Batching dramatically reduces the\n> liklihood of this.\n> \n> -Mark\n\nShow us the plan (output of EXPLAIN ANALYZE), along with detailed info \nabout the table (number of rows / pages) and environment (amount of RAM, \netc.). Without these information it's impossible to tell which of the \nchoices is right.\n\nIn my experience the planner is a very good piece of software, but if \nyou feed him with bad information about the environment / data in the \nbeginning, you can hardly expect good plans.\n\nOccasionally I provide consultancy to developers having problems with \nPostgreSQL, and while most of the time (say 70% of the time) the \nproblems are caused by mistakes in SQL or incorrect design of the \nsystem, problems with proper settings of the PostgreSQL are quite often. \nI really don't know if you use the default values or if you have tuned \nthe settings (and how), but it's hard to tell from your original post.\n\nFor example if you don't set the work_mem according to your settings, \nthis may result in on-disk sorting even if you have plenty of free \nmemory. Or if you have fast drives, the default cost settings may \nproduce bad plans (index scan may seem too expensive), etc. And of \ncourse you may have data with complicated statistical properties, and \nthe default level of details may not be sufficient (try increase it with \nSET STATISTICS for the column).\n\nAnyway - there may be glitch / corner case in the planner of course, but \n it's hard to tell without the EXPLAIN ANALYZE output.\n\nregards\nTomas\n", "msg_date": "Wed, 19 Nov 2008 01:49:12 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance and IN clauses" }, { "msg_contents": "Mark Roberts napsal(a):\n> On Tue, 2008-11-18 at 17:38 +0100, [email protected] wrote:\n>> I bet there is no 'critical' length - this is just another case of\n>> index\n>> scan vs. seqscan. The efficiency depends on the size of the table /\n>> row,\n>> amount of data in the table, variability of the column used in the IN\n>> clause, etc.\n>>\n>> Splitting the query with 1000 items into 10 separate queries, the\n>> smaller\n>> queries may be faster but the total time consumed may be actually\n>> higher.\n>> Something like\n>>\n>> 10 * (time of small query) + (time to combine them) > (time of large\n>> query)\n>>\n>> If the performance of the 'split' solution is actually better than the\n>> original query, it just means that the planner does not use index scan\n>> when it actually should. That means that either\n>>\n>> (a) the planner is not smart enough\n>> (b) it has not current statistics of the table (run ANALYZE on the\n>> table)\n>> (c) the statistics are not detailed enough (ALTER TABLE ... 
SET\n>> STATICTICS)\n>> (d) the cost variables are not set properly (do not match the hardware\n>> -\n>> decreate index scan cost / increase seq scan cost)\n>>\n>> regards\n>> Tomas\n> \n> I know that it's much faster (for us) to run many smaller queries than\n> one large query, and I think that it's primarily because of your reason\n> a. Most of our problems come from Pg misunderstanding the results of a\n> join and making a bad plan decision. Batching dramatically reduces the\n> liklihood of this.\n\nAs I already said - even the smartest planner won't work without correct \ninput data. Have you tried fixing the points (b), (c) and (d)?\n\nFixing them might improve the planner performance so that you don't need \nthe batchning at all.\n\nregards\nTomas\n", "msg_date": "Fri, 21 Nov 2008 02:09:36 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance and IN clauses" } ]
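As a concrete alternative to the huge IN list itself (not proposed verbatim in the thread, but consistent with the batching advice above), the generated ids can be joined rather than listed, which scales better than a 30k-element IN clause; on 8.2 or later a VALUES list works, and on older servers a temporary table filled with COPY does the same job. The literal ids and the node2 value below are placeholders for what the Perl script would interpolate:

  UPDATE edge
  SET    keep = true
  FROM   (VALUES (101), (102), (103)) AS v(node1)  -- generated $node_list goes here
  WHERE  edge.node1 = v.node1
  AND    edge.node2 = 42;                          -- $node_id placeholder

  -- knobs mentioned above if the planner still picks a poor plan
  ANALYZE edge;
  ALTER TABLE edge ALTER COLUMN node1 SET STATISTICS 500;  -- 500 is illustrative
  ANALYZE edge;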
[ { "msg_contents": "Hello.\n\nIt's second query rewrite postgresql seems not to handle - making EXCEPT\nfrom NOT IT.\nHere is an example:\nPreparation:\n\ndrop table if exists t1;\ndrop table if exists t2;\ncreate temporary table t1(id) as\nselect\n(random()*100000)::int from generate_series(1,200000) a(id);\n\ncreate temporary table t2(id) as\nselect\n(random()*100000)::int from generate_series(1,100000) a(id);\nanalyze t1;\nanalyze t2;\n\nQuery 1:\nselect * from t1 where id not in (select id from t2);\nPlan:\n\"Seq Scan on t1 (cost=1934.00..164105319.00 rows=100000 width=4)\"\n\" Filter: (NOT (subplan))\"\n\" SubPlan\"\n\" -> Materialize (cost=1934.00..3325.00 rows=100000 width=4)\"\n\" -> Seq Scan on t2 (cost=0.00..1443.00 rows=100000 width=4)\"\n\nQuery 2 (gives same result as Q1):\nselect * from t1 except all (select id from t2);\nPlan:\n\"SetOp Except All (cost=38721.90..40221.90 rows=30000 width=4)\"\n\" -> Sort (cost=38721.90..39471.90 rows=300000 width=4)\"\n\" Sort Key: \"*SELECT* 1\".id\"\n\" -> Append (cost=0.00..7328.00 rows=300000 width=4)\"\n\" -> Subquery Scan \"*SELECT* 1\" (cost=0.00..4885.00\nrows=200000 width=4)\"\n\" -> Seq Scan on t1 (cost=0.00..2885.00 rows=200000\nwidth=4)\"\n\" -> Subquery Scan \"*SELECT* 2\" (cost=0.00..2443.00\nrows=100000 width=4)\"\n\" -> Seq Scan on t2 (cost=0.00..1443.00 rows=100000\nwidth=4)\"\n\nIf I am correct, planner simply do not know that he can rewrite NOT IN as\n\"EXCEPT ALL\" operator, so all NOT INs when list of values to remove is long\ntakes very much time.\nSo the question is: I am willing to participate in postgresql development\nbecause it may be easier to fix planner then to rewrite all my queries :).\nHow can I? (I mean to work on query planner enhancements by providing new\noptions of query rewrite, not to work on other thing nor on enhancing\nplanner in other ways, like better estimations of known plans).\n\nHello.It's second query rewrite postgresql seems not to handle - making EXCEPT from NOT IT.Here is an example:Preparation:drop table if exists t1;drop table if exists t2;create temporary table t1(id) as \nselect (random()*100000)::int from generate_series(1,200000) a(id);create temporary table t2(id) as select (random()*100000)::int from generate_series(1,100000) a(id);analyze t1;analyze t2;\nQuery 1:select * from t1 where id not in (select id from t2);Plan:\"Seq Scan on t1  (cost=1934.00..164105319.00 rows=100000 width=4)\"\"  Filter: (NOT (subplan))\"\"  SubPlan\"\n\"    ->  Materialize  (cost=1934.00..3325.00 rows=100000 width=4)\"\"          ->  Seq Scan on t2  (cost=0.00..1443.00 rows=100000 width=4)\"Query 2 (gives same result as Q1):select * from t1 except all (select id from t2);\nPlan:\"SetOp Except All  (cost=38721.90..40221.90 rows=30000 width=4)\"\"  ->  Sort  (cost=38721.90..39471.90 rows=300000 width=4)\"\"        Sort Key: \"*SELECT* 1\".id\"\n\"        ->  Append  (cost=0.00..7328.00 rows=300000 width=4)\"\"              ->  Subquery Scan \"*SELECT* 1\"  (cost=0.00..4885.00 rows=200000 width=4)\"\"                    ->  Seq Scan on t1  (cost=0.00..2885.00 rows=200000 width=4)\"\n\"              ->  Subquery Scan \"*SELECT* 2\"  (cost=0.00..2443.00 rows=100000 width=4)\"\"                    ->  Seq Scan on t2  (cost=0.00..1443.00 rows=100000 width=4)\"If I am correct, planner simply do not know that he can rewrite NOT IN as \"EXCEPT ALL\" operator, so all NOT INs when list of values to remove is long takes very much time.\nSo the question is: I am willing to participate in postgresql development 
because it may be easier to fix planner then to rewrite all my queries :). How can I? (I mean to work on query planner enhancements by providing new options of query rewrite, not to work on other thing nor on enhancing planner in other ways, like better estimations of known plans).", "msg_date": "Wed, 19 Nov 2008 13:51:47 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL NOT IN performance" }, { "msg_contents": "Something weird with your example which doesn't have the same result, see\nrow count with explain analyze:\n\n\n\ncruz=# SELECT version();\n version \n \n--------------------------------------------------------------------------------------------\n PostgreSQL 8.3.5 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real\n(Debian 4.3.2-1) 4.3.2\n(1 registro)\n\ncruz=# EXPLAIN ANALYZE select * from t1 where id not in (select id from\nt2);\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------\n Seq Scan on t1 (cost=1643.00..4928.00 rows=100000 width=4) (actual\ntime=256.687..585.774 rows=73653 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on t2 (cost=0.00..1393.00 rows=100000 width=4) (actual\ntime=0.052..86.867 rows=100000 loops=1)\n Total runtime: 625.471 ms\n(5 registros)\n\ncruz=# EXPLAIN ANALYZE select * from t1 except all (select id from t2);\n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------------------------------\n SetOp Except All (cost=34469.90..35969.90 rows=30000 width=4) (actual\ntime=2598.574..3663.712 rows=126733 loops=1)\n -> Sort (cost=34469.90..35219.90 rows=300000 width=4) (actual\ntime=2598.550..3178.387 rows=300000 loops=1)\n�������� Sort Key: \"*SELECT* 1\".id\n�������� Sort Method:� external merge� Disk: 5864kB\n�������� ->� Append� (cost=0.00..7178.00 rows=300000 width=4) (actual\ntime=0.037..1026.367 rows=300000 loops=1)\n�������������� ->� Subquery Scan \"*SELECT* 1\"� (cost=0.00..4785.00\nrows=200000 width=4) (actual time=0.035..439.507 rows=200000 loops=1)\n�������������������� ->� Seq Scan on t1� (cost=0.00..2785.00 rows=200000\nwidth=4) (actual time=0.029..161.355 rows=200000 loops=1)\n�������������� ->� Subquery Scan \"*SELECT* 2\"� (cost=0.00..2393.00\nrows=100000 width=4) (actual time=0.107..255.160 rows=100000 loops=1)\n�������������������� ->� Seq Scan on t2� (cost=0.00..1393.00 rows=100000\nwidth=4) (actual time=0.097..110.639 rows=100000 loops=1)\n�Total runtime: 3790.831 ms\n(10 registros)\n</pre>\nSometimes I got a better result (on older versions) with this kind of\nquery, but in this case it doesn't:\n\n\n\ncruz=# EXPLAIN ANALYZE SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id WHERE\nt2.id IS NULL;\n����������������������������������������������������� QUERY\nPLAN������������������������������������������������������\n-----------------------------------------------------------------------------------------------------------------------\n�Merge Right Join� (cost=30092.86..35251.53 rows=155304 width=8) (actual\ntime=850.232..1671.091 rows=73653 loops=1)\n�� Merge Cond: (t2.id = t1.id)\n�� Filter: (t2.id IS NULL)\n�� ->� Sort� (cost=9697.82..9947.82 rows=100000 width=4) (actual\ntime=266.501..372.560 rows=100000 loops=1)\n�������� Sort Key: t2.id\n�������� Sort Method:� quicksort� Memory: 4392kB\n�������� ->� Seq Scan on t2� (cost=0.00..1393.00 rows=100000 width=4)\n(actual 
time=0.029..78.087 rows=100000 loops=1)\n�� ->� Sort� (cost=20394.64..20894.64 rows=200000 width=4) (actual\ntime=583.699..855.427 rows=273364 loops=1)\n�������� Sort Key: t1.id\n�������� Sort Method:� quicksort� Memory: 8784kB\n�������� ->� Seq Scan on t1� (cost=0.00..2785.00 rows=200000 width=4)\n(actual time=0.087..155.665 rows=200000 loops=1)\n�Total runtime: 1717.062 ms\n(12 registros)\n</pre>\nRegards,\n\n\n\"??????? ????????\" <[email protected]> escreveu:\n\n\n\n>Hello.\n>\n>It's second query rewrite postgresql seems not to handle - making EXCEPT\n>from NOT IT.\n>Here is an example:\n>Preparation:\n>\n>drop table if exists t1;\n>drop table if exists t2;\n>create temporary table t1(id) as \n>select \n>(random()*100000)::int from generate_series(1,200000) a(id);\n>\n>create temporary table t2(id) as \n>select \n>(random()*100000)::int from generate_series(1,100000) a(id);\n>analyze t1;\n>analyze t2;\n>\n>Query 1:\n>select * from t1 where id not in (select id from t2);\n>Plan:\n>\"Seq Scan on t1� (cost=1934.00..164105319.00 rows=100000 width=4)\"\n>\"� Filter: (NOT (subplan))\"\n>\"� SubPlan\"\n>\"��� ->� Materialize� (cost=1934.00..3325.00 rows=100000 width=4)\"\n>\"��������� ->� Seq Scan on t2� (cost=0.00..1443.00 rows=100000 width=4)\"\n>\n>Query 2 (gives same result as Q1):\n>select * from t1 except all (select id from t2);\n>Plan:\n>\"SetOp Except All� (cost=38721.90..40221.90 rows=30000 width=4)\"\n>\"� ->� Sort� (cost=38721.90..39471.90 rows=300000 width=4)\"\n>\"������� Sort Key: \"*SELECT* 1\".id\"\n>\"������� ->� Append� (cost=0.00..7328.00 rows=300000 width=4)\"\n>\"������������� ->� Subquery Scan \"*SELECT* 1\"� (cost=0.00..4885.00\n>rows=200000 width=4)\"\n>\"������������������� ->� Seq Scan on t1� (cost=0.00..2885.00 rows=200000\n>width=4)\"\n>\"������������� ->� Subquery Scan \"*SELECT* 2\"� (cost=0.00..2443.00\n>rows=100000 width=4)\"\n>\"������������������� ->� Seq Scan on t2� (cost=0.00..1443.00 rows=100000\n>width=4)\"\n>\n>If I am correct, planner simply do not know that he can rewrite NOT IN as\n>\"EXCEPT ALL\" operator, so all NOT INs when list of values to remove is long\n>takes very much time.\n>So the question is: I am willing to participate in postgresql development\n>because it may be easier to fix planner then to rewrite all my queries :).\n>How can I? 
(I mean to work on query planner enhancements by providing new\n>options of query rewrite, not to work on other thing nor on enhancing\n>planner in other ways, like better estimations of known plans).\n>\n>\n>\n>\n\n�\n\n\n\n--\n<span style=\"color: #000080\">Daniel Cristian Cruz\n</span>Administrador de Banco de Dados\nDire��o Regional�- N�cleo de Tecnologia da Informa��o\nSENAI - SC\nTelefone: 48-3239-1422 (ramal 1422)\n\n\n\nSomething weird with your example which doesn't have the same result, see row count with explain analyze:\n\ncruz=# SELECT version();\n version \n--------------------------------------------------------------------------------------------\n PostgreSQL 8.3.5 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.2-1) 4.3.2\n(1 registro)\n\ncruz=# EXPLAIN ANALYZE select * from t1 where id not in (select id from t2);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Seq Scan on t1 (cost=1643.00..4928.00 rows=100000 width=4) (actual time=256.687..585.774 rows=73653 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on t2 (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.052..86.867 rows=100000 loops=1)\n Total runtime: 625.471 ms\n(5 registros)\n\ncruz=# EXPLAIN ANALYZE select * from t1 except all (select id from t2);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n SetOp Except All (cost=34469.90..35969.90 rows=30000 width=4) (actual time=2598.574..3663.712 rows=126733 loops=1)\n -> Sort (cost=34469.90..35219.90 rows=300000 width=4) (actual time=2598.550..3178.387 rows=300000 loops=1)\n         Sort Key: \"*SELECT* 1\".id\n         Sort Method:  external merge  Disk: 5864kB\n         ->  Append  (cost=0.00..7178.00 rows=300000 width=4) (actual time=0.037..1026.367 rows=300000 loops=1)\n               ->  Subquery Scan \"*SELECT* 1\"  (cost=0.00..4785.00 rows=200000 width=4) (actual time=0.035..439.507 rows=200000 loops=1)\n                     ->  Seq Scan on t1  (cost=0.00..2785.00 rows=200000 width=4) (actual time=0.029..161.355 rows=200000 loops=1)\n               ->  Subquery Scan \"*SELECT* 2\"  (cost=0.00..2393.00 rows=100000 width=4) (actual time=0.107..255.160 rows=100000 loops=1)\n                     ->  Seq Scan on t2  (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.097..110.639 rows=100000 loops=1)\n Total runtime: 3790.831 ms\n(10 registros)\n\nSometimes I got a better result (on older versions) with this kind of query, but in this case it doesn't:\n\ncruz=# EXPLAIN ANALYZE SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id WHERE t2.id IS NULL;\n                                                      QUERY PLAN                                                      \n-----------------------------------------------------------------------------------------------------------------------\n Merge Right Join  (cost=30092.86..35251.53 rows=155304 width=8) (actual time=850.232..1671.091 rows=73653 loops=1)\n   Merge Cond: (t2.id = t1.id)\n   Filter: (t2.id IS NULL)\n   ->  Sort  (cost=9697.82..9947.82 rows=100000 width=4) (actual time=266.501..372.560 rows=100000 loops=1)\n         Sort Key: t2.id\n         Sort Method:  quicksort  Memory: 4392kB\n         ->  Seq Scan on t2  (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.029..78.087 rows=100000 loops=1)\n   ->  Sort  (cost=20394.64..20894.64 rows=200000 width=4) (actual 
time=583.699..855.427 rows=273364 loops=1)\n         Sort Key: t1.id\n         Sort Method:  quicksort  Memory: 8784kB\n         ->  Seq Scan on t1  (cost=0.00..2785.00 rows=200000 width=4) (actual time=0.087..155.665 rows=200000 loops=1)\n Total runtime: 1717.062 ms\n(12 registros)\n\nRegards,\n\"??????? ????????\" <[email protected]> escreveu:\nHello.\n\nIt's second query rewrite postgresql seems not to handle - making EXCEPT from NOT IT.\nHere is an example:\nPreparation:\n\ndrop table if exists t1;\ndrop table if exists t2;\ncreate temporary table t1(id) as \nselect \n(random()*100000)::int from generate_series(1,200000) a(id);\n\ncreate temporary table t2(id) as \nselect \n(random()*100000)::int from generate_series(1,100000) a(id);\nanalyze t1;\nanalyze t2;\n\nQuery 1:\nselect * from t1 where id not in (select id from t2);\nPlan:\n\"Seq Scan on t1  (cost=1934.00..164105319.00 rows=100000 width=4)\"\n\"  Filter: (NOT (subplan))\"\n\"  SubPlan\"\n\"    ->  Materialize  (cost=1934.00..3325.00 rows=100000 width=4)\"\n\"          ->  Seq Scan on t2  (cost=0.00..1443.00 rows=100000 width=4)\"\n\nQuery 2 (gives same result as Q1):\nselect * from t1 except all (select id from t2);\nPlan:\n\"SetOp Except All  (cost=38721.90..40221.90 rows=30000 width=4)\"\n\"  ->  Sort  (cost=38721.90..39471.90 rows=300000 width=4)\"\n\"        Sort Key: \"*SELECT* 1\".id\"\n\"        ->  Append  (cost=0.00..7328.00 rows=300000 width=4)\"\n\"              ->  Subquery Scan \"*SELECT* 1\"  (cost=0.00..4885.00 rows=200000 width=4)\"\n\"                    ->  Seq Scan on t1  (cost=0.00..2885.00 rows=200000 width=4)\"\n\"              ->  Subquery Scan \"*SELECT* 2\"  (cost=0.00..2443.00 rows=100000 width=4)\"\n\"                    ->  Seq Scan on t2  (cost=0.00..1443.00 rows=100000 width=4)\"\n\nIf I am correct, planner simply do not know that he can rewrite NOT IN as \"EXCEPT ALL\" operator, so all NOT INs when list of values to remove is long takes very much time.\nSo the question is: I am willing to participate in postgresql development because it may be easier to fix planner then to rewrite all my queries :). How can I? (I mean to work on query planner enhancements by providing new options of query rewrite, not to work on other thing nor on enhancing planner in other ways, like better estimations of known plans).\n\n\n\n\n Daniel Cristian Cruz\nAdministrador de Banco de Dados\nDire��o Regional - N�cleo de Tecnologia da Informa��o\nSENAI - SC\nTelefone: 48-3239-1422 (ramal 1422)", "msg_date": "Wed, 19 Nov 2008 10:22:06 -0200", "msg_from": "DANIEL CRISTIAN CRUZ <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NOT IN performance" }, { "msg_contents": "Віталій Тимчишин escribió:\n\n> So the question is: I am willing to participate in postgresql development\n> because it may be easier to fix planner then to rewrite all my queries :).\n> How can I? 
(I mean to work on query planner enhancements by providing new\n> options of query rewrite, not to work on other thing nor on enhancing\n> planner in other ways, like better estimations of known plans).\n\nhttp://wiki.postgresql.org/wiki/Submitting_a_Patch\nhttp://wiki.postgresql.org/wiki/Developer_FAQ\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 19 Nov 2008 10:11:43 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NOT IN performance" }, { "msg_contents": "\nOn Wed, 19 Nov 2008, [ISO-8859-5] ������� �������� wrote:\n\n> Query 1:\n> select * from t1 where id not in (select id from t2);\n>\n> Query 2 (gives same result as Q1):\n> select * from t1 except all (select id from t2);\n\nIt gives the same result as long as no nulls are in either table. If\neither table can have a null, the conversion changes the results.\n\nIn addition, a conversion like the above only happens to work because t1\nonly has an id column. If t1 had two columns you'd get an error because\nthe two sides of except all must have the same number of columns.\n\n", "msg_date": "Wed, 19 Nov 2008 06:44:58 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL NOT IN performance" }, { "msg_contents": "2008/11/19 Stephan Szabo <[email protected]>\n\n>\n> On Wed, 19 Nov 2008, [ISO-8859-5] Віталій Тимчишин wrote:\n>\n> > Query 1:\n> > select * from t1 where id not in (select id from t2);\n> >\n> > Query 2 (gives same result as Q1):\n> > select * from t1 except all (select id from t2);\n>\n> It gives the same result as long as no nulls are in either table. If\n> either table can have a null, the conversion changes the results.\n>\n> In addition, a conversion like the above only happens to work because t1\n> only has an id column. If t1 had two columns you'd get an error because\n> the two sides of except all must have the same number of columns.\n>\n\nActually It can be done even for multi-column mode if the selection is done\non unique key. It would look like:\n\nselect * from t1 inner join (\nselect id from t1 except select id from t2) talias on t1.id = talias.id\n\nAnd it would produce better results then \"not in\" for large counts in t1 and\nt2.\n\n2008/11/19 Stephan Szabo <[email protected]>\n\nOn Wed, 19 Nov 2008, [ISO-8859-5] Віталій Тимчишин wrote:\n\n> Query 1:\n> select * from t1 where id not in (select id from t2);\n>\n> Query 2 (gives same result as Q1):\n> select * from t1 except all (select id from t2);\n\nIt gives the same result as long as no nulls are in either table. If\neither table can have a null, the conversion changes the results.\n\nIn addition, a conversion like the above only happens to work because t1\nonly has an id column. If t1 had two columns you'd get an error because\nthe two sides of except all must have the same number of columns.\nActually It can be done even for multi-column mode if the selection is done on unique key. 
It would look like:select * from t1 inner join (select id from t1 except select id from t2) talias on t1.id = talias.id\nAnd it would produce better results then \"not in\" for large counts in t1 and t2.", "msg_date": "Wed, 19 Nov 2008 16:55:18 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL NOT IN performance" }, { "msg_contents": "2008/11/19 DANIEL CRISTIAN CRUZ <[email protected]>\n\n> Something weird with your example which doesn't have the same result, see\n> row count with explain analyze:\n>\nMy fault. EXCEPT ALL would not work here, so this method with EXCEPT can be\nused only when either operation is done on unique key on t1 or result is\ngoing to be made unique.\n\n> cruz=# SELECT version();\n> version\n> --------------------------------------------------------------------------------------------\n> PostgreSQL 8.3.5 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.2-1) 4.3.2\n> (1 registro)\n>\n> cruz=# EXPLAIN ANALYZE select * from t1 where id not in (select id from t2);\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------\n> Seq Scan on t1 (cost=1643.00..4928.00 rows=100000 width=4) (actual time=256.687..585.774 rows=73653 loops=1)\n> Filter: (NOT (hashed subplan))\n> SubPlan\n> -> Seq Scan on t2 (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.052..86.867 rows=100000 loops=1)\n> Total runtime: 625.471 ms\n> (5 registros)\n>\n> cruz=# EXPLAIN ANALYZE select * from t1 except all (select id from t2);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> SetOp Except All (cost=34469.90..35969.90 rows=30000 width=4) (actual time=2598.574..3663.712 rows=126733 loops=1)\n> -> Sort (cost=34469.90..35219.90 rows=300000 width=4) (actual time=2598.550..3178.387 rows=300000 loops=1)\n> Sort Key: \"*SELECT* 1\".id\n> Sort Method: external merge Disk: 5864kB\n> -> Append (cost=0.00..7178.00 rows=300000 width=4) (actual time=0.037..1026.367 rows=300000 loops=1)\n> -> Subquery Scan \"*SELECT* 1\" (cost=0.00..4785.00 rows=200000 width=4) (actual time=0.035..439.507 rows=200000 loops=1)\n> -> Seq Scan on t1 (cost=0.00..2785.00 rows=200000 width=4) (actual time=0.029..161.355 rows=200000 loops=1)\n> -> Subquery Scan \"*SELECT* 2\" (cost=0.00..2393.00 rows=100000 width=4) (actual time=0.107..255.160 rows=100000 loops=1)\n> -> Seq Scan on t2 (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.097..110.639 rows=100000 loops=1)\n> Total runtime: 3790.831 ms\n> (10 registros)\n>\n> Sometimes I got a better result (on older versions) with this kind of\n> query, but in this case it doesn't:\n>\n> cruz=# EXPLAIN ANALYZE SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id WHERE t2.id IS NULL;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------\n> Merge Right Join (cost=30092.86..35251.53 rows=155304 width=8) (actual time=850.232..1671.091 rows=73653 loops=1)\n> Merge Cond: (t2.id = t1.id)\n> Filter: (t2.id IS NULL)\n> -> Sort (cost=9697.82..9947.82 rows=100000 width=4) (actual time=266.501..372.560 rows=100000 loops=1)\n> Sort Key: t2.id\n> Sort Method: quicksort Memory: 4392kB\n> -> Seq Scan on t2 (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.029..78.087 rows=100000 loops=1)\n> -> Sort (cost=20394.64..20894.64 
rows=200000 width=4) (actual time=583.699..855.427 rows=273364 loops=1)\n> Sort Key: t1.id\n> Sort Method: quicksort Memory: 8784kB\n> -> Seq Scan on t1 (cost=0.00..2785.00 rows=200000 width=4) (actual time=0.087..155.665 rows=200000 loops=1)\n> Total runtime: 1717.062 ms\n> (12 registros)\n>\n>\nYes, your method is even better on 8.3.3 I have. I will try to update to\n8.3.5 to see if there was optimizer improvements. You could try increasing\nvalues, say, by 10 in table filling to see if NOT IT will switch to \"slow\"\nversion (for me it starts being slow from some magic row count in t2). I\nsuppose it is the moment it switches from \"hashed subplan\" to \"subplan\". For\nme for 10000 values it is \"hashed subplan\" (and it is momentary fast), for\n100000 - it is \"subplan\" and it is sloow.\nBTW: Which (memory?) configuration variable can affect such a switch?\n\n2008/11/19 DANIEL CRISTIAN CRUZ <[email protected]>\nSomething weird with your example which doesn't have the same result, see row count with explain analyze:My fault. EXCEPT ALL would not work here, so this method with EXCEPT can be used only when either operation is done on unique key on t1 or result is going to be made unique. \n\ncruz=# SELECT version();\n version \n--------------------------------------------------------------------------------------------\n PostgreSQL 8.3.5 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.2-1) 4.3.2\n(1 registro)\n\ncruz=# EXPLAIN ANALYZE select * from t1 where id not in (select id from t2);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Seq Scan on t1 (cost=1643.00..4928.00 rows=100000 width=4) (actual time=256.687..585.774 rows=73653 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on t2 (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.052..86.867 rows=100000 loops=1)\n Total runtime: 625.471 ms\n(5 registros)\n\ncruz=# EXPLAIN ANALYZE select * from t1 except all (select id from t2);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n SetOp Except All (cost=34469.90..35969.90 rows=30000 width=4) (actual time=2598.574..3663.712 rows=126733 loops=1)\n -> Sort (cost=34469.90..35219.90 rows=300000 width=4) (actual time=2598.550..3178.387 rows=300000 loops=1)\n         Sort Key: \"*SELECT* 1\".id\n         Sort Method:  external merge  Disk: 5864kB\n         ->  Append  (cost=0.00..7178.00 rows=300000 width=4) (actual time=0.037..1026.367 rows=300000 loops=1)\n               ->  Subquery Scan \"*SELECT* 1\"  (cost=0.00..4785.00 rows=200000 width=4) (actual time=0.035..439.507 rows=200000 loops=1)\n                     ->  Seq Scan on t1  (cost=0.00..2785.00 rows=200000 width=4) (actual time=0.029..161.355 rows=200000 loops=1)\n               ->  Subquery Scan \"*SELECT* 2\"  (cost=0.00..2393.00 rows=100000 width=4) (actual time=0.107..255.160 rows=100000 loops=1)\n                     ->  Seq Scan on t2  (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.097..110.639 rows=100000 loops=1)\n Total runtime: 3790.831 ms\n(10 registros)\n\nSometimes I got a better result (on older versions) with this kind of query, but in this case it doesn't:\ncruz=# EXPLAIN ANALYZE SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id WHERE t2.id IS NULL;\n                                                      QUERY PLAN                                                      
\n-----------------------------------------------------------------------------------------------------------------------\n Merge Right Join  (cost=30092.86..35251.53 rows=155304 width=8) (actual time=850.232..1671.091 rows=73653 loops=1)\n   Merge Cond: (t2.id = t1.id)\n   Filter: (t2.id IS NULL)\n   ->  Sort  (cost=9697.82..9947.82 rows=100000 width=4) (actual time=266.501..372.560 rows=100000 loops=1)\n         Sort Key: t2.id\n         Sort Method:  quicksort  Memory: 4392kB\n         ->  Seq Scan on t2  (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.029..78.087 rows=100000 loops=1)\n   ->  Sort  (cost=20394.64..20894.64 rows=200000 width=4) (actual time=583.699..855.427 rows=273364 loops=1)\n         Sort Key: t1.id\n         Sort Method:  quicksort  Memory: 8784kB\n         ->  Seq Scan on t1  (cost=0.00..2785.00 rows=200000 width=4) (actual time=0.087..155.665 rows=200000 loops=1)\n Total runtime: 1717.062 ms\n(12 registros)\nYes, your method is even better on 8.3.3 I have. I will try to update to 8.3.5 to see if there was optimizer improvements. You could try increasing values, say, by 10 in table filling to see if NOT IT will switch to \"slow\" version (for me it starts being slow from some magic row count in t2). I suppose it is the moment it switches from \"hashed subplan\" to \"subplan\". For me for 10000 values it is \"hashed subplan\" (and it is momentary fast), for 100000 - it is \"subplan\" and it is sloow.\nBTW: Which (memory?) configuration variable can affect such a switch?", "msg_date": "Wed, 19 Nov 2008 17:12:43 +0200", "msg_from": "\"=?ISO-8859-5?B?svbi0Nv22SDC2Nzn2OjY3Q==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL NOT IN performance" } ]
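A short sketch of the rewrites discussed in the thread above, reusing the t1/t2 temporary tables from the original post. The point left open in the last message — which memory setting flips NOT IN between the fast "hashed subplan" and the slow per-row subplan — is work_mem: the planner only hashes the subquery result when its estimated size fits in work_mem. The value below is illustrative, not a recommendation.

-- NOT IN stays fast only while the t2 result can be hashed within work_mem.
SET work_mem = '64MB';                     -- session-level, illustrative value
SELECT * FROM t1 WHERE id NOT IN (SELECT id FROM t2);

-- Anti-join workaround that sidesteps the subplan entirely; as noted in the
-- thread, it does not treat NULLs the same way NOT IN does.
SELECT t1.*
  FROM t1
  LEFT JOIN t2 ON t2.id = t1.id
 WHERE t2.id IS NULL;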
[ { "msg_contents": "Hi,\n\nI have defined sequence on a table something like this\n\n\nCREATE SEQUENCE items_unqid_seq\n INCREMENT 1\n MINVALUE 0\n MAXVALUE 9223372036854775807\n START 7659\n CACHE 1;\n\nthis is on a table called items. where i have currently the max(unq_id) as\n7659.\n\nand in the stored procedure when i am inserting values into the items table\nfor the unq_id column i am using the sequence as follows:\n\nnextval('items_unqid_seq'::text)\n\n\nit seems to be working some times. and the sequences are not getting updated\nsometime. which is casuing primary key exceptions.\n\nplease advise as soon as possible.\n\nis there any trivial problem with sequences in postgresql??\n\nthanks in advance.\n\n-- \nKranti\n\nHi,I have defined sequence on a table something like thisCREATE SEQUENCE items_unqid_seq  INCREMENT 1  MINVALUE 0  MAXVALUE 9223372036854775807  START 7659  CACHE 1;\nthis is on a table called items. where i have currently the max(unq_id) as 7659.and in the stored procedure when i am inserting values into the items table for the unq_id column i am using the sequence as follows:\nnextval('items_unqid_seq'::text)it seems to be working some times. and the sequences are not getting updated sometime. which is casuing primary key exceptions.please advise as soon as possible.\nis there any trivial problem with sequences in postgresql??thanks in advance.-- Kranti", "msg_date": "Wed, 19 Nov 2008 21:24:13 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti=99_K_K_Parisa?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very Urgent : Sequences Problem" }, { "msg_contents": "Hi,\n\nKranti™ K K Parisa wrote:\n> Hi,\n> \n> I have defined sequence on a table something like this\n> \n> \n> CREATE SEQUENCE items_unqid_seq\n> INCREMENT 1\n> MINVALUE 0\n> MAXVALUE 9223372036854775807\n> START 7659\n> CACHE 1;\n> \n> this is on a table called items. where i have currently the max(unq_id) \n> as 7659.\n> \n> and in the stored procedure when i am inserting values into the items \n> table for the unq_id column i am using the sequence as follows:\n> \n> nextval('items_unqid_seq'::text)\n> \n> \n> it seems to be working some times. and the sequences are not getting \n> updated sometime. which is casuing primary key exceptions.\n\nThats actually not possible. Sequences are never rolled\nback. The only way would be using stale values from\nyour function call or some other inserts trying to\noverwrite or insert into the table twice with the same\nid.\n\n> please advise as soon as possible.\n> \n> is there any trivial problem with sequences in postgresql??\n\nnope.\n\n> thanks in advance.\n\nThis is neither admin nor performance related so you better\nsend such questions to psql-general.\n\nThank you\nTino", "msg_date": "Wed, 19 Nov 2008 17:31:22 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very Urgent : Sequences Problem" }, { "msg_contents": "Kranti (tm),\n\nIf you problem is very urgent, I suggest that you get a paid support \ncontract with a PostgreSQL support company. You can find a list of \nsupport companies here:\n\nhttp://www.postgresql.org/support/professional_support\n\nThese mailing lists are made up of other PostgreSQL users and \ndevelopers, none of whom are paid to help anyone with support issues.\n\nCross-posting two PostgreSQL mailing lists for a problem which is very \nurgent to you, but not to us, is a guarenteed way not to get a useful \nanswer. 
It suggests that you think you are more important than anyone \nelse in the community.\n\n--Josh Berkus\n", "msg_date": "Wed, 19 Nov 2008 08:31:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very Urgent : Sequences Problem" }, { "msg_contents": "On Wed, Nov 19, 2008 at 10:54 AM, Kranti™ K K Parisa\n<[email protected]> wrote:\n> Hi,\n>\n> I have defined sequence on a table something like this\n>\n>\n> CREATE SEQUENCE items_unqid_seq\n> INCREMENT 1\n> MINVALUE 0\n> MAXVALUE 9223372036854775807\n> START 7659\n> CACHE 1;\n>\n> this is on a table called items. where i have currently the max(unq_id) as\n> 7659.\n>\n> and in the stored procedure when i am inserting values into the items table\n> for the unq_id column i am using the sequence as follows:\n>\n> nextval('items_unqid_seq'::text)\n>\n>\n> it seems to be working some times. and the sequences are not getting updated\n> sometime. which is casuing primary key exceptions.\n>\n> please advise as soon as possible.\n>\n> is there any trivial problem with sequences in postgresql??\n\nno (at least none that I know of).\n\nmaybe if you posted the source of your procedure? I bet your error is\ncoming form some other source.\n\nmerlin\n", "msg_date": "Wed, 19 Nov 2008 11:40:52 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very Urgent : Sequences Problem" }, { "msg_contents": "> On Wed, Nov 19, 2008 at 10:54 AM, Kranti&#65533; K K Parisa\n> <[email protected]> wrote:\n>> Hi,\n>>\n>> I have defined sequence on a table something like this\n>>\n>>\n>> CREATE SEQUENCE items_unqid_seq\n>> INCREMENT 1\n>> MINVALUE 0\n>> MAXVALUE 9223372036854775807\n>> START 7659\n>> CACHE 1;\n>>\n>> this is on a table called items. where i have currently the max(unq_id)\n>> as\n>> 7659.\n>>\n>> and in the stored procedure when i am inserting values into the items\n>> table\n>> for the unq_id column i am using the sequence as follows:\n>>\n>> nextval('items_unqid_seq'::text)\n>>\n>>\n>> it seems to be working some times. and the sequences are not getting\n>> updated\n>> sometime. which is casuing primary key exceptions.\n>>\n>> please advise as soon as possible.\n>>\n>> is there any trivial problem with sequences in postgresql??\n>\n> no (at least none that I know of).\n>\n> maybe if you posted the source of your procedure? I bet your error is\n> coming form some other source.\n\nAre you sure you're using the nextval() properly whenever you insert data\ninto the table? This usually happens when a developer does not use it\nproperly, i.e. he just uses a (select max(id) + 1 from ...) something like\nthat. One of the more creative ways of breaking sequences was calling\nnextval() only for the first insert, and then adding 1 to the ID.\n\nBTW. do you have RULEs defined on the table? Some time ago I run into a\nproblem with RULEs defined on the table, as all the rules are evaluated -\nI've used nextval() in all the rules so it was incremented for each rule\nand it was not clear which value was actually used. 
So it was not sure\nwhich value to use in a following insert (as a FK value).\n\nregards\nTomas\n\n", "msg_date": "Wed, 19 Nov 2008 17:53:09 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very Urgent : Sequences Problem" }, { "msg_contents": "On Wed, 19 Nov 2008, Josh Berkus wrote:\n\n> Cross-posting two PostgreSQL mailing lists for a problem which is very urgent \n> to you, but not to us, is a guarenteed way not to get a useful answer.\n\nPosting to the performance list like this, with a question that in no way \nwhatsoever has anything to do with database performance, is another good \nway to get your question ignored.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 20 Nov 2008 00:04:31 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Urgent : Sequences Problem" } ]
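A minimal sketch of the fix implied by the replies above: attach the sequence as the column default so every insert path goes through nextval(), rather than any code computing max(unq_id) + 1 by hand. The items / unq_id / items_unqid_seq names come from the original post; other_col is a placeholder.

ALTER TABLE items
  ALTER COLUMN unq_id SET DEFAULT nextval('items_unqid_seq');

-- One-time catch-up in case rows were ever inserted with hand-picked ids:
SELECT setval('items_unqid_seq', (SELECT max(unq_id) FROM items));

-- From here on inserts can simply omit unq_id; nextval() is never rolled
-- back or reused, so the sequence itself cannot hand out duplicate values.
INSERT INTO items (other_col) VALUES ('example');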
[ { "msg_contents": "Query below seems to use indexes everywhere in most optimal way.\ndokumnr column is of type int\n\nSpeed of this query varies rapidly:\n\nIn live db fastest response I have got is 8 seconds.\nRe-running same query after 10 seconds may take 60 seconds.\nRe-running it again after 10 seconds may take 114 seconds.\n\nAny idea how to speed it up ?\n\nIs it possible to optimize it, will upgrading to 8.3.5 help or should I\nrequire to add more RAM, disk or CPU speed ?\n\nReal query contains column list instead of sum(1) used in test below.\n\nAndrus.\n\n\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n LEFT JOIN artliik using(grupp,liik)\n WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\nlongest response time:\n\n\"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\ntime=114479.933..114479.936 rows=1 loops=1)\"\n\" -> Hash Left Join (cost=52111.20..234218.21 rows=24126 width=0) (actual\ntime=100435.523..114403.293 rows=20588 loops=1)\"\n\" Hash Cond: ((\"outer\".grupp = \"inner\".grupp) AND (\"outer\".liik =\n\"inner\".liik))\"\n\" -> Nested Loop (cost=52103.94..233735.35 rows=24126 width=19)\n(actual time=100405.258..114207.387 rows=20588 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01\nrows=1 width=43) (actual time=18.312..18.325 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=52103.94..233488.08 rows=24126 width=24)\n(actual time=100386.921..114037.986 rows=20588 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4127.51..175020.84\nrows=317003 width=28) (actual time=11119.932..76225.918 rows=277294\nloops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4127.51 rows=317003 width=0) (actual time=11105.807..11105.807\nrows=280599 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=47376.82..47376.82 rows=93444 width=4)\n(actual time=35082.427..35082.427 rows=105202 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok\n(cost=0.00..47376.82 rows=93444 width=4) (actual time=42.110..34586.331\nrows=105202 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\" -> Hash (cost=6.84..6.84 rows=84 width=19) (actual\ntime=30.220..30.220 rows=84 loops=1)\"\n\" -> Seq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=20.104..29.845 rows=84 loops=1)\"\n\"Total runtime: 114480.373 ms\"\n\nSame query in other runs:\n\n\n\"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\ntime=62164.496..62164.500 rows=1 loops=1)\"\n\" -> Hash Left Join (cost=52111.20..234218.21 rows=24126 width=0) (actual\ntime=46988.005..62088.379 rows=20588 loops=1)\"\n\" Hash Cond: ((\"outer\".grupp = \"inner\".grupp) AND (\"outer\".liik =\n\"inner\".liik))\"\n\" -> Nested Loop (cost=52103.94..233735.35 rows=24126 width=19)\n(actual time=46957.750..61893.613 rows=20588 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01\nrows=1 width=43) (actual time=146.530..146.543 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=52103.94..233488.08 rows=24126 width=24)\n(actual time=46811.194..61595.560 rows=20588 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4127.51..175020.84\nrows=317003 width=28) (actual time=1870.209..55864.237 rows=277294 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on 
rid_toode_idx\n(cost=0.00..4127.51 rows=317003 width=0) (actual time=1863.713..1863.713\nrows=280599 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=47376.82..47376.82 rows=93444 width=4)\n(actual time=1650.823..1650.823 rows=105202 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok\n(cost=0.00..47376.82 rows=93444 width=4) (actual time=0.091..1190.962\nrows=105202 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\" -> Hash (cost=6.84..6.84 rows=84 width=19) (actual\ntime=30.210..30.210 rows=84 loops=1)\"\n\" -> Seq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=20.069..29.836 rows=84 loops=1)\"\n\"Total runtime: 62164.789 ms\"\n\n\n\n\"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\ntime=40185.499..40185.503 rows=1 loops=1)\"\n\" -> Hash Left Join (cost=52111.20..234218.21 rows=24126 width=0) (actual\ntime=32646.761..40109.470 rows=20585 loops=1)\"\n\" Hash Cond: ((\"outer\".grupp = \"inner\".grupp) AND (\"outer\".liik =\n\"inner\".liik))\"\n\" -> Nested Loop (cost=52103.94..233735.35 rows=24126 width=19)\n(actual time=32645.933..39944.242 rows=20585 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01\nrows=1 width=43) (actual time=0.072..0.085 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=52103.94..233488.08 rows=24126 width=24)\n(actual time=32645.839..39793.180 rows=20585 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4127.51..175020.84\nrows=317003 width=28) (actual time=1823.542..36318.419 rows=277291 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4127.51 rows=317003 width=0) (actual time=1817.053..1817.053\nrows=280596 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=47376.82..47376.82 rows=93444 width=4)\n(actual time=1242.785..1242.785 rows=105195 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok\n(cost=0.00..47376.82 rows=93444 width=4) (actual time=0.088..788.399\nrows=105195 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\" -> Hash (cost=6.84..6.84 rows=84 width=19) (actual\ntime=0.786..0.786 rows=84 loops=1)\"\n\" -> Seq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=0.019..0.419 rows=84 loops=1)\"\n\"Total runtime: 40186.102 ms\"\n\n\n\"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\ntime=29650.398..29650.402 rows=1 loops=1)\"\n\" -> Hash Left Join (cost=52111.20..234218.21 rows=24126 width=0) (actual\ntime=23513.713..29569.448 rows=20588 loops=1)\"\n\" Hash Cond: ((\"outer\".grupp = \"inner\".grupp) AND (\"outer\".liik =\n\"inner\".liik))\"\n\" -> Nested Loop (cost=52103.94..233735.35 rows=24126 width=19)\n(actual time=23512.808..29388.712 rows=20588 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01\nrows=1 width=43) (actual time=61.813..61.825 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=52103.94..233488.08 rows=24126 width=24)\n(actual time=23450.970..29163.774 rows=20588 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4127.51..175020.84\nrows=317003 width=28) (actual time=2098.630..24362.104 rows=277294 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4127.51 rows=317003 width=0) (actual time=2086.772..2086.772\nrows=280599 loops=1)\"\n\" Index Cond: (toode = 
'X05'::bpchar)\"\n\" -> Hash (cost=47376.82..47376.82 rows=93444 width=4)\n(actual time=2860.088..2860.088 rows=105202 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok\n(cost=0.00..47376.82 rows=93444 width=4) (actual time=0.088..2365.063\nrows=105202 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\" -> Hash (cost=6.84..6.84 rows=84 width=19) (actual\ntime=0.861..0.861 rows=84 loops=1)\"\n\" -> Seq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=0.039..0.458 rows=84 loops=1)\"\n\"Total runtime: 29650.696 ms\"\n\n\n\"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\ntime=11131.392..11131.396 rows=1 loops=1)\"\n\" -> Hash Left Join (cost=52111.20..234218.21 rows=24126 width=0) (actual\ntime=9179.703..11043.906 rows=20588 loops=1)\"\n\" Hash Cond: ((\"outer\".grupp = \"inner\".grupp) AND (\"outer\".liik =\n\"inner\".liik))\"\n\" -> Nested Loop (cost=52103.94..233735.35 rows=24126 width=19)\n(actual time=9178.827..10858.383 rows=20588 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01\nrows=1 width=43) (actual time=10.252..10.264 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=52103.94..233488.08 rows=24126 width=24)\n(actual time=9168.552..10675.424 rows=20588 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4127.51..175020.84\nrows=317003 width=28) (actual time=2129.814..7152.618 rows=277294 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4127.51 rows=317003 width=0) (actual time=2123.223..2123.223\nrows=280599 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=47376.82..47376.82 rows=93444 width=4)\n(actual time=1414.254..1414.254 rows=105202 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok\n(cost=0.00..47376.82 rows=93444 width=4) (actual time=0.092..895.533\nrows=105202 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\" -> Hash (cost=6.84..6.84 rows=84 width=19) (actual\ntime=0.833..0.833 rows=84 loops=1)\"\n\" -> Seq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=0.043..0.465 rows=84 loops=1)\"\n\"Total runtime: 11131.694 ms\"\n\n\nEnvironment:\n\n3-8 concurrent users\n\n\"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC\ni686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,\npie-8.7.9)\"\n\n# free\n total used free shared buffers cached\nMem: 2075828 2008228 67600 0 0 1904552\n-/+ buffers/cache: 103676 1972152\nSwap: 3911816 76 3911740\n\n", "msg_date": "Wed, 19 Nov 2008 22:29:00 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n> Query below seems to use indexes everywhere in most optimal way.\n> dokumnr column is of type int\n>\n> Speed of this query varies rapidly:\n>\n> In live db fastest response I have got is 8 seconds.\n> Re-running same query after 10 seconds may take 60 seconds.\n> Re-running it again after 10 seconds may take 114 seconds.\n>\n> Any idea how to speed it up ?\n>\n> Is it possible to optimize it, will upgrading to 8.3.5 help or should I\n> require to add more RAM, disk or CPU speed ?\n>\n> Real query contains column list instead of sum(1) used in test below.\n>\n> Andrus.\n\n\tJust a question, what are you doing with the 20.000 result rows ?\n", "msg_date": "Thu, 20 Nov 2008 00:28:25 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, 
"msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> Just a question, what are you doing with the 20.000 result rows ?\n\nThose rows represent monthly sales data of one item.\nThey are used as following:\n\n1. Detailed sales report for month. This report can browsed in screen for \nmontly sales and ordering analysis.\n\n2. Total reports. In those reports, sum( sales), sum(quantity) is used to \nget total sales in day, week, month, time for item and resulting rows are \nsummed.\n\nAndrus. \n\n", "msg_date": "Thu, 20 Nov 2008 11:12:17 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Andrus wrote:\n> Query below seems to use indexes everywhere in most optimal way.\n> dokumnr column is of type int\n> \n> Speed of this query varies rapidly:\n> \n> In live db fastest response I have got is 8 seconds.\n> Re-running same query after 10 seconds may take 60 seconds.\n> Re-running it again after 10 seconds may take 114 seconds.\n> \n> Any idea how to speed it up ?\n> \n> Is it possible to optimize it, will upgrading to 8.3.5 help or should I\n> require to add more RAM, disk or CPU speed ?\n\nAt a quick glance, the plans look the same to me. The overall costs are\ncertainly identical. That means whatever is affecting the query times it\nisn't the query plan.\n\n> \"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\n> time=62164.496..62164.500 rows=1 loops=1)\"\n> \"Total runtime: 62164.789 ms\"\n\n> \"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\n> time=40185.499..40185.503 rows=1 loops=1)\"\n> \"Total runtime: 40186.102 ms\"\n\n> \"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\n> time=29650.398..29650.402 rows=1 loops=1)\"\n> \"Total runtime: 29650.696 ms\"\n\n> \"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual\n> time=11131.392..11131.396 rows=1 loops=1)\"\n> \"Total runtime: 11131.694 ms\"\n\nSo - what other activity is happening on this machine? Either other\nqueries are taking up noticeable resources, or some other process is (it\nmight be disk activity from checkpointing, logging some other application).\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Nov 2008 10:45:53 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Richard,\n\n> At a quick glance, the plans look the same to me. The overall costs are\n> certainly identical. That means whatever is affecting the query times it\n> isn't the query plan.\n>\n> So - what other activity is happening on this machine? 
Either other\n> queries are taking up noticeable resources, or some other process is (it\n> might be disk activity from checkpointing, logging some other \n> application).\n\nThank you.\nThis is dedicated server running only PostgreSql which serves approx 6 point \nof sales at this time.\n\nMaybe those other clients make queries which invalidate lot of data loaded \ninto server cache.\nIn next time server must read it again from disk which causes those \nperfomance differences.\n\ntop output is currently:\n\ntop - 13:13:10 up 22 days, 18:25, 1 user, load average: 0.19, 0.12, 0.19\nTasks: 53 total, 2 running, 51 sleeping, 0 stopped, 0 zombie\nCpu(s): 13.7% us, 2.0% sy, 0.0% ni, 78.3% id, 6.0% wa, 0.0% hi, 0.0% si\nMem: 2075828k total, 2022808k used, 53020k free, 0k buffers\nSwap: 3911816k total, 88k used, 3911728k free, 1908536k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 5382 postgres 15 0 144m 43m 40m S 15.0 2.2 0:00.45 postmaster\n 5358 postgres 15 0 152m 87m 75m S 0.3 4.3 0:00.97 postmaster\n 1 root 16 0 1480 508 444 S 0.0 0.0 0:01.35 init\n 2 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/0\n 3 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/0\n 4 root 10 -5 0 0 0 S 0.0 0.0 0:00.42 khelper\n 5 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread\n 7 root 10 -5 0 0 0 S 0.0 0.0 2:03.91 kblockd/0\n 8 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid\n 115 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 aio/0\n 114 root 15 0 0 0 0 S 0.0 0.0 8:49.67 kswapd0\n 116 root 10 -5 0 0 0 S 0.0 0.0 0:10.32 xfslogd/0\n 117 root 10 -5 0 0 0 S 0.0 0.0 0:39.96 xfsdatad/0\n 706 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kseriod\n 723 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kpsmoused\n 738 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 ata/0\n 740 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0\n 741 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1\n 742 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_2\n 743 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_3\n 762 root 10 -5 0 0 0 S 0.0 0.0 0:17.54 xfsbufd\n 763 root 10 -5 0 0 0 S 0.0 0.0 0:00.68 xfssyncd\n 963 root 16 -4 1712 528 336 S 0.0 0.0 0:00.24 udevd\n 6677 root 15 0 1728 572 400 S 0.0 0.0 0:04.99 syslog-ng\n 7128 postgres 16 0 140m 10m 9900 S 0.0 0.5 0:05.60 postmaster\n\nin few seconds later:\n\ntop - 13:14:01 up 22 days, 18:26, 1 user, load average: 1.72, 0.53, 0.32\nTasks: 52 total, 2 running, 50 sleeping, 0 stopped, 0 zombie\nCpu(s): 5.3% us, 3.0% sy, 0.0% ni, 0.0% id, 91.0% wa, 0.0% hi, 0.7% si\nMem: 2075828k total, 2022692k used, 53136k free, 0k buffers\nSwap: 3911816k total, 88k used, 3911728k free, 1905028k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1179 postgres 18 0 155m 136m 122m D 6.7 6.7 1:32.52 postmaster\n 4748 postgres 15 0 145m 126m 122m D 1.3 6.2 0:14.38 postmaster\n 5358 postgres 16 0 160m 98m 81m D 0.7 4.9 0:01.21 postmaster\n 1 root 16 0 1480 508 444 S 0.0 0.0 0:01.35 init\n 2 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/0\n 3 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/0\n 4 root 10 -5 0 0 0 S 0.0 0.0 0:00.42 khelper\n 5 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread\n 7 root 10 -5 0 0 0 S 0.0 0.0 2:03.97 kblockd/0\n 8 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid\n 115 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 aio/0\n 114 root 15 0 0 0 0 S 0.0 0.0 8:49.79 kswapd0\n 116 root 10 -5 0 0 0 S 0.0 0.0 0:10.32 xfslogd/0\n 117 root 10 -5 0 0 0 S 0.0 0.0 0:39.96 xfsdatad/0\n 706 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kseriod\n 723 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kpsmoused\n 738 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 ata/0\n 740 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0\n 741 root 11 
-5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1\n 742 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_2\n 743 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_3\n 762 root 10 -5 0 0 0 S 0.0 0.0 0:17.54 xfsbufd\n 763 root 10 -5 0 0 0 S 0.0 0.0 0:00.68 xfssyncd\n 963 root 16 -4 1712 528 336 S 0.0 0.0 0:00.24 udevd\n 6677 root 15 0 1728 572 400 S 0.0 0.0 0:04.99 syslog-ng\n\n\nAndrus. \n\n", "msg_date": "Thu, 20 Nov 2008 13:14:34 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Andrus wrote:\n> Richard,\n> \n>> At a quick glance, the plans look the same to me. The overall costs are\n>> certainly identical. That means whatever is affecting the query times it\n>> isn't the query plan.\n>>\n>> So - what other activity is happening on this machine? Either other\n>> queries are taking up noticeable resources, or some other process is (it\n>> might be disk activity from checkpointing, logging some other\n>> application).\n> \n> Thank you.\n> This is dedicated server running only PostgreSql which serves approx 6\n> point of sales at this time.\n> \n> Maybe those other clients make queries which invalidate lot of data\n> loaded into server cache.\n> In next time server must read it again from disk which causes those\n> perfomance differences.\n\nIn addition to \"top\" below, you'll probably find \"vmstat 5\" useful.\n\n> top output is currently:\n> \n> top - 13:13:10 up 22 days, 18:25, 1 user, load average: 0.19, 0.12, 0.19\n> Tasks: 53 total, 2 running, 51 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 13.7% us, 2.0% sy, 0.0% ni, 78.3% id, 6.0% wa, 0.0% hi, \n> 0.0% si\n> Mem: 2075828k total, 2022808k used, 53020k free, 0k buffers\n> Swap: 3911816k total, 88k used, 3911728k free, 1908536k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 5382 postgres 15 0 144m 43m 40m S 15.0 2.2 0:00.45 postmaster\n> 5358 postgres 15 0 152m 87m 75m S 0.3 4.3 0:00.97 postmaster\n> 1 root 16 0 1480 508 444 S 0.0 0.0 0:01.35 init\n\nLooks pretty quiet.\n\n> in few seconds later:\n> \n> top - 13:14:01 up 22 days, 18:26, 1 user, load average: 1.72, 0.53, 0.32\n> Tasks: 52 total, 2 running, 50 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 5.3% us, 3.0% sy, 0.0% ni, 0.0% id, 91.0% wa, 0.0% hi, \n> 0.7% si\n> Mem: 2075828k total, 2022692k used, 53136k free, 0k buffers\n> Swap: 3911816k total, 88k used, 3911728k free, 1905028k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 1179 postgres 18 0 155m 136m 122m D 6.7 6.7 1:32.52 postmaster\n> 4748 postgres 15 0 145m 126m 122m D 1.3 6.2 0:14.38 postmaster\n> 5358 postgres 16 0 160m 98m 81m D 0.7 4.9 0:01.21 postmaster\n> 1 root 16 0 1480 508 444 S 0.0 0.0 0:01.35 init\n\nHere you're stuck waiting for disks (91.0% wa). Check out vmstat and\niostat to see what's happening.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Nov 2008 11:22:31 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n\tOK so vmstat says you are IO-bound, this seems logical if the same plan \nhas widely varying timings...\n\n\tLet's look at the usual suspects :\n\n\t- how many dead rows in your tables ? are your tables data, or bloat ? 
\n(check vacuum verbose, etc)\n\t- what's the size of the dataset relative to the RAM ?\n\n\tNow let's look more closely at the query :\n\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n LEFT JOIN artliik using(grupp,liik)\n WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\n\n\tOK, so artliik is a very small table (84 rows) :\n\nSeq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=20.104..29.845 rows=84 loops=1)\n\n\tI presume doing the query without artliik changes nothing to the runtime, \nyes ?\n\tLet's look at the main part of the query :\n\n FROM dok JOIN rid USING (dokumnr) JOIN toode USING (toode)\n WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\n\tPostgres's plan is logical. It starts by joining rid and dok since your \nWHERE is on those :\n\n-> Hash Join (cost=52103.94..233488.08 rows=24126 width=24) (actual \ntime=100386.921..114037.986 rows=20588 loops=1)\"\n\tHash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\t-> Bitmap Heap Scan on rid (cost=4127.51..175020.84 rows=317003 \nwidth=28) (actual time=11119.932..76225.918 rows=277294 loops=1)\"\n\t\t Recheck Cond: (toode = 'X05'::bpchar)\"\n\t\t -> Bitmap Index Scan on rid_toode_idx (cost=0.00..4127.51 rows=317003 \nwidth=0) (actual time=11105.807..11105.807 rows=280599 loops=1)\"\n\t\t\t\tIndex Cond: (toode = 'X05'::bpchar)\"\n\t-> Hash (cost=47376.82..47376.82 rows=93444 width=4) (actual \ntime=35082.427..35082.427 rows=105202 loops=1)\"\n\t\t -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..47376.82 \nrows=93444 width=4) (actual time=42.110..34586.331 rows=105202 loops=1)\"\n\t\t\t\tIndex Cond: (kuupaev >= '2008-09-01'::date)\"\n\n\tYour problem here is that, no matter what, postgres will have to examine\n\t- all rows where dok.kuupaev>='2008-09-01',\n\t- and all rows where rid.toode = 'X05'.\n\tIf you use dok.kuupaev>='2007-09-01' (note : 2007) it will probably have \nto scan many, many more rows.\n\n\tIf you perform this query often you could CLUSTER rid on (toode) and dok \non (kuupaev), but this can screw other queries.\n\n\tWhat is the meaning of the columns ?\n\n\tTo make this type of query faster I would tend to think about :\n\n\t- materialized views\n\t- denormalization (ie adding a column in one of your tables and a \nmulticolumn index)\n\t- materialized summary tables (ie. 
summary of sales for last month, for \ninstance)\n\n\n\"Aggregate (cost=234278.53..234278.54 rows=1 width=0) (actual \ntime=114479.933..114479.936 rows=1 loops=1)\"\n\" -> Hash Left Join (cost=52111.20..234218.21 rows=24126 width=0) \n(actual time=100435.523..114403.293 rows=20588 loops=1)\"\n\" Hash Cond: ((\"outer\".grupp = \"inner\".grupp) AND (\"outer\".liik = \n\"inner\".liik))\"\n\" -> Nested Loop (cost=52103.94..233735.35 rows=24126 width=19) \n(actual time=100405.258..114207.387 rows=20588 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 \nrows=1 width=43) (actual time=18.312..18.325 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=52103.94..233488.08 rows=24126 \nwidth=24) (actual time=100386.921..114037.986 rows=20588 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4127.51..175020.84 \nrows=317003 width=28) (actual time=11119.932..76225.918 rows=277294 \nloops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx \n(cost=0.00..4127.51 rows=317003 width=0) (actual time=11105.807..11105.807 \nrows=280599 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=47376.82..47376.82 rows=93444 \nwidth=4) (actual time=35082.427..35082.427 rows=105202 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok \n(cost=0.00..47376.82 rows=93444 width=4) (actual time=42.110..34586.331 \nrows=105202 loops=1)\"\n\" Index Cond: (kuupaev >= \n'2008-09-01'::date)\"\n\" -> Hash (cost=6.84..6.84 rows=84 width=19) (actual\ntime=30.220..30.220 rows=84 loops=1)\"\n\" -> Seq Scan on artliik (cost=0.00..6.84 rows=84 width=19)\n(actual time=20.104..29.845 rows=84 loops=1)\"\n\"Total runtime: 114480.373 ms\"\n", "msg_date": "Thu, 20 Nov 2008 14:46:11 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Richard,\n\n> In addition to \"top\" below, you'll probably find \"vmstat 5\" useful.\n\nThank you.\nDuring this query run (65 sec), vmstat 5 shows big values in bi,cs and wa \ncolumns:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id \nwa\n 1 1 88 51444 0 1854404 0 0 17 6 15 13 5 1 83 \n10\n 0 1 88 52140 0 1854304 0 0 3626 95 562 784 15 38 0 \n47\n 0 1 92 51608 0 1855668 0 0 14116 103 1382 2294 4 8 0 \n88\n 0 1 92 51620 0 1857256 0 1 15258 31 1210 1975 4 8 0 \n88\n 0 2 92 50448 0 1859188 0 0 13118 19 1227 1982 3 7 0 \n90\n 0 1 92 51272 0 1859088 0 0 7691 53 828 1284 14 4 0 \n82\n 0 1 92 52472 0 1858792 0 0 10691 9 758 968 3 7 0 \n89\n 0 2 92 51240 0 1858596 0 0 8204 7407 717 1063 2 5 0 \n93\n 0 1 92 51648 0 1860388 0 0 20622 121 1118 2229 12 9 0 \n79\n 2 1 92 50564 0 1861396 0 0 20994 3277 969 1681 15 8 0 \n76\n 1 0 92 52180 0 1860192 0 0 18542 36 802 1276 36 12 0 \n51\n 0 0 92 91872 0 1820948 0 1 15285 47 588 774 9 12 32 \n47\n 0 0 92 91904 0 1820948 0 0 0 4 251 18 0 0 \n100 0\n 0 0 92 92044 0 1820948 0 0 0 0 250 17 0 0 \n100 0\n 0 0 92 91668 0 1821156 0 0 27 93 272 66 5 0 92 \n3\n 0 0 92 91668 0 1821156 0 0 0 64 260 38 0 0 \n100 0\n 0 0 92 91636 0 1821156 0 0 0 226 277 71 0 0 \n100 0\n 0 0 92 91676 0 1821156 0 0 0 26 255 23 0 0 \n100 0\n\n> Here you're stuck waiting for disks (91.0% wa). 
Check out vmstat and\n> iostat to see what's happening.\n\ntyping iostat returns\n\nbash: iostat: command not found\n\nIt seems that this is not installed in this gentoo.\n\nAndrus. \n\n", "msg_date": "Fri, 21 Nov 2008 15:49:24 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "PFC,\n\nthank you.\n\n> OK so vmstat says you are IO-bound, this seems logical if the same plan\n> has widely varying timings...\n>\n> Let's look at the usual suspects :\n>\n> - how many dead rows in your tables ? are your tables data, or bloat ?\n> (check vacuum verbose, etc)\n\nset search_path to firma2,public;\nvacuum verbose dok; vacuum verbose rid\n\nINFO: vacuuming \"firma2.dok\"\nINFO: index \"dok_pkey\" now contains 1235086 row versions in 9454 pages\nDETAIL: 100 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.16s/0.38u sec elapsed 0.77 sec.\nINFO: index \"dok_dokumnr_idx\" now contains 1235086 row versions in 9454 \npages\nDETAIL: 100 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.14s/0.40u sec elapsed 0.78 sec.\nINFO: index \"dok_klient_idx\" now contains 1235086 row versions in 18147 \npages\nDETAIL: 887 index row versions were removed.\n3265 index pages have been deleted, 3033 are currently reusable.\nCPU 0.36s/0.46u sec elapsed 31.87 sec.\nINFO: index \"dok_krdokumnr_idx\" now contains 1235086 row versions in 11387 \npages\nDETAIL: 119436 index row versions were removed.\n1716 index pages have been deleted, 1582 are currently reusable.\nCPU 0.47s/0.55u sec elapsed 63.38 sec.\nINFO: index \"dok_kuupaev_idx\" now contains 1235101 row versions in 10766 \npages\nDETAIL: 119436 index row versions were removed.\n659 index pages have been deleted, 625 are currently reusable.\nCPU 0.62s/0.53u sec elapsed 40.20 sec.\nINFO: index \"dok_tasudok_idx\" now contains 1235104 row versions in 31348 \npages\nDETAIL: 119436 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.18s/1.08u sec elapsed 118.97 sec.\nINFO: index \"dok_tasudok_unique_idx\" now contains 99 row versions in 97 \npages\nDETAIL: 98 index row versions were removed.\n80 index pages have been deleted, 80 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.48 sec.\nINFO: index \"dok_tasumata_idx\" now contains 1235116 row versions in 11663 \npages\nDETAIL: 119436 index row versions were removed.\n5340 index pages have been deleted, 5131 are currently reusable.\nCPU 0.43s/0.56u sec elapsed 53.96 sec.\nINFO: index \"dok_tellimus_idx\" now contains 1235122 row versions in 11442 \npages\nDETAIL: 119436 index row versions were removed.\n1704 index pages have been deleted, 1569 are currently reusable.\nCPU 0.45s/0.59u sec elapsed 76.91 sec.\nINFO: index \"dok_yksus_pattern_idx\" now contains 1235143 row versions in \n5549 pages\nDETAIL: 119436 index row versions were removed.\n529 index pages have been deleted, 129 are currently reusable.\nCPU 0.19s/0.46u sec elapsed 2.72 sec.\nINFO: index \"dok_doktyyp\" now contains 1235143 row versions in 3899 pages\nDETAIL: 119436 index row versions were removed.\n188 index pages have been deleted, 13 are currently reusable.\nCPU 0.14s/0.44u sec elapsed 1.40 sec.\nINFO: index \"dok_sihtyksus_pattern_idx\" now contains 1235143 row versions \nin 5353 pages\nDETAIL: 119436 index row versions were removed.\n286 index pages have been deleted, 5 are currently reusable.\nCPU 
0.13s/0.45u sec elapsed 3.01 sec.\nINFO: \"dok\": removed 119436 row versions in 13707 pages\nDETAIL: CPU 0.80s/0.37u sec elapsed 14.15 sec.\nINFO: \"dok\": found 119436 removable, 1235085 nonremovable row versions in \n171641 pages\nDETAIL: 2 dead row versions cannot be removed yet.\nThere were 1834279 unused item pointers.\n0 pages are entirely empty.\nCPU 6.56s/6.88u sec elapsed 450.54 sec.\nINFO: vacuuming \"pg_toast.pg_toast_40595\"\nINFO: index \"pg_toast_40595_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_40595\": found 0 removable, 0 nonremovable row versions in 0 \npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"firma2.rid\"\nINFO: index \"rid_pkey\" now contains 3275197 row versions in 13959 pages\nDETAIL: 38331 index row versions were removed.\n262 index pages have been deleted, 262 are currently reusable.\nCPU 0.42s/1.05u sec elapsed 58.56 sec.\nINFO: index \"rid_dokumnr_idx\" now contains 3275200 row versions in 14125 \npages\nDETAIL: 38331 index row versions were removed.\n572 index pages have been deleted, 571 are currently reusable.\nCPU 0.49s/1.14u sec elapsed 71.57 sec.\nINFO: index \"rid_inpdokumnr_idx\" now contains 3275200 row versions in 15103 \npages\nDETAIL: 38331 index row versions were removed.\n579 index pages have been deleted, 579 are currently reusable.\nCPU 0.66s/1.03u sec elapsed 68.38 sec.\nINFO: index \"rid_toode_idx\" now contains 3275224 row versions in 31094 \npages\nDETAIL: 38331 index row versions were removed.\n2290 index pages have been deleted, 2290 are currently reusable.\nCPU 1.39s/1.58u sec elapsed 333.82 sec.\nINFO: index \"rid_rtellimus_idx\" now contains 3275230 row versions in 7390 \npages\nDETAIL: 18591 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.18s/0.66u sec elapsed 1.78 sec.\nINFO: index \"rid_toode_pattern_idx\" now contains 3275230 row versions in \n16310 pages\nDETAIL: 17800 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.44s/1.04u sec elapsed 6.55 sec.\nINFO: \"rid\": removed 38331 row versions in 3090 pages\nDETAIL: CPU 0.20s/0.10u sec elapsed 5.49 sec.\nINFO: \"rid\": found 38331 removable, 3275189 nonremovable row versions in \n165282 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 1878923 unused item pointers.\n0 pages are entirely empty.\nCPU 5.06s/7.27u sec elapsed 607.59 sec.\n\nQuery returned successfully with no result in 1058319 ms.\n\n> - what's the size of the dataset relative to the RAM ?\n\nDb size is 7417 MB\nrelevant table sizes in desc by size order:\n\n 1 40595 dok 2345 MB\n 2 1214 pg_shdepend 2259 MB\n 3 40926 rid 2057 MB\n 6 1232 pg_shdepend_depender_index 795 MB\n 7 1233 pg_shdepend_reference_index 438 MB\n 8 44286 dok_tasudok_idx 245 MB\n 9 44299 rid_toode_idx 243 MB\n 10 44283 dok_klient_idx 142 MB\n 11 19103791 rid_toode_pattern_idx 127 MB\n 14 44298 rid_inpdokumnr_idx 118 MB\n 15 44297 rid_dokumnr_idx 110 MB\n 16 43573 rid_pkey 109 MB\n 18 44288 dok_tasumata_idx 91 MB\n 19 44289 dok_tellimus_idx 89 MB\n 20 44284 dok_krdokumnr_idx 89 MB\n 21 44285 dok_kuupaev_idx 84 MB\n 23 43479 dok_pkey 74 MB\n 24 44282 dok_dokumnr_idx 74 MB\n 25 19076304 rid_rtellimus_idx 58 MB\n 26 18663923 dok_yksus_pattern_idx 43 MB\n 27 18801591 
dok_sihtyksus_pattern_idx 42 MB\n 29 18774881 dok_doktyyp 30 MB\n 46 40967 toode 13 MB\n\nserver is HP Proliant DL320 G3\nhttp://h18000.www1.hp.com/products/quickspecs/12169_ca/12169_ca.HTML\nCPU is 2.93Ghz Celeron 256kb cache.\n\nServer has 2 GB RAM.\nIt has SATA RAID 0,1 integrated controller (1.5Gbps) and SAMSUNG HD160JJ\nmirrored disks.\n\n> Now let's look more closely at the query :\n>\n> explain analyze\n> SELECT sum(1)\n> FROM dok JOIN rid USING (dokumnr)\n> JOIN toode USING (toode)\n> LEFT JOIN artliik using(grupp,liik)\n> WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n>\n>\n> I presume doing the query without artliik changes nothing to the runtime,\n> yes ?\n\nYes. After removing artkliik from join I got response times 65 and 50\nseconds, so this does not make difference.\n\n> Your problem here is that, no matter what, postgres will have to examine\n> - all rows where dok.kuupaev>='2008-09-01',\n> - and all rows where rid.toode = 'X05'.\n> If you use dok.kuupaev>='2007-09-01' (note : 2007) it will probably have\n> to scan many, many more rows.\n\nProbably yes, since then it reads one year more sales data.\n\n> If you perform this query often you could CLUSTER rid on (toode) and dok\n> on (kuupaev), but this can screw other queries.\n\nSome reports are by sales date (dok.kuupaev) and customers.\nCLUSTER rid on (toode) slows them down. Also autovacuum cannot do \nclustering.\n\n> What is the meaning of the columns ?\n\nThis is typical sales data:\n\n-- Receipt headers:\nDOK ( dokumnr INT SERIAL PRIMARY KEY,\n kuupaev DATE --- sales date\n)\n\n-- Receipt details\nRID ( dokumnr INT,\n toode CHAR(20), -- item code\n CONSTRAINT rid_dokumnr_fkey FOREIGN KEY (dokumnr) REFERENCES dok\n(dokumnr),\n CONSTRAINT rid_toode_fkey FOREIGN KEY (toode)\n REFERENCES firma2.toode (toode)\n)\n\n-- Products\nTOODE (\n toode CHAR(20) PRIMARY KEY\n)\n\n> To make this type of query faster I would tend to think about :\n\n> - denormalization (ie adding a column in one of your tables and a\n> multicolumn index)\n\nFor this query it is possible to duplicate kuupaev column to rid table.\nHowever most of the this seems to go to scanning rid table, so I suspect\nthat this will help.\n\n> - materialized views\n> - materialized summary tables (ie. summary of sales for last month, for\n> instance)\n\nThere are about 1000 items and reports are different.\n\nAndrus. \n\n", "msg_date": "Fri, 21 Nov 2008 17:12:27 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Just the most important points:\n\n1) \"dok\" table contains 1235086 row versions in 171641 pages (with 8kB\npages this means 1.4GB MB of data), but there are 1834279 unused item\npointers (i.e. about 60% of the space is wasted)\n\n2) \"rid\" table contains 3275189 roiws in 165282 (with 8kB pages this means\nabout 1.3GB of data), but there are 1878923 unused item pointers (i.e.\nabout 30% of the space is wasted)\n\n3) don't forget to execute analyze after vacuuming (or vacuum analyze)\n\n4) I'm not sure why the sizes reported by you (for example 2.3GB vs 1.5GB\nfor \"doc\" table) - the difference seems too large for me.\n\nAnyway the amount of wasted rows seems significant to me - I'd try to\nsolve this first. Either by VACUUM FULL or by CLUSTER. The CLUSTER will\nlock the table exclusively, but the results may be better (when sorting by\na well chosen index). 
Don't forget to run ANALYZE afterwards.\n\nSeveral other things to consider:\n\n1) Regarding the toode column - why are you using CHAR(20) when the values\nare actually shorter? This may significantly increase the amount of space\nrequired.\n\n2) I've noticed the CPU used is Celeron, which may negatively affect the\nspeed of hash computation. I'd try to replace it by something faster - say\nINTEGER as an artificial primary key of the \"toode\" table and using it as\na FK in other tables. This might improve the \"Bitmap Heap Scan on rid\"\npart, but yes - it's just a minor improvement compared to the \"Hash Join\"\npart of the query.\n\nMaterialized views seem like a good idea to me, but maybe I'm not seeing\nsomething. What do you mean by \"reports are different\"? If there is a lot\nof rows for a given product / day, then creating an aggregated table with\n(product code / day) as a primary key is quite simple. It may require a\nlot of disk space, but it'll remove the hash join overhead. But if the\nqueries are very different, then it may be difficult to build such\nmaterialized view(s).\n\nregards\nTomas\n\n> PFC,\n>\n> thank you.\n>\n>> OK so vmstat says you are IO-bound, this seems logical if the same plan\n>> has widely varying timings...\n>>\n>> Let's look at the usual suspects :\n>>\n>> - how many dead rows in your tables ? are your tables data, or bloat ?\n>> (check vacuum verbose, etc)\n>\n> set search_path to firma2,public;\n> vacuum verbose dok; vacuum verbose rid\n>\n> INFO: vacuuming \"firma2.dok\"\n> INFO: index \"dok_pkey\" now contains 1235086 row versions in 9454 pages\n> DETAIL: 100 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.16s/0.38u sec elapsed 0.77 sec.\n> INFO: index \"dok_dokumnr_idx\" now contains 1235086 row versions in 9454\n> pages\n> DETAIL: 100 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.14s/0.40u sec elapsed 0.78 sec.\n> INFO: index \"dok_klient_idx\" now contains 1235086 row versions in 18147\n> pages\n> DETAIL: 887 index row versions were removed.\n> 3265 index pages have been deleted, 3033 are currently reusable.\n> CPU 0.36s/0.46u sec elapsed 31.87 sec.\n> INFO: index \"dok_krdokumnr_idx\" now contains 1235086 row versions in\n> 11387\n> pages\n> DETAIL: 119436 index row versions were removed.\n> 1716 index pages have been deleted, 1582 are currently reusable.\n> CPU 0.47s/0.55u sec elapsed 63.38 sec.\n> INFO: index \"dok_kuupaev_idx\" now contains 1235101 row versions in 10766\n> pages\n> DETAIL: 119436 index row versions were removed.\n> 659 index pages have been deleted, 625 are currently reusable.\n> CPU 0.62s/0.53u sec elapsed 40.20 sec.\n> INFO: index \"dok_tasudok_idx\" now contains 1235104 row versions in 31348\n> pages\n> DETAIL: 119436 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 1.18s/1.08u sec elapsed 118.97 sec.\n> INFO: index \"dok_tasudok_unique_idx\" now contains 99 row versions in 97\n> pages\n> DETAIL: 98 index row versions were removed.\n> 80 index pages have been deleted, 80 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.48 sec.\n> INFO: index \"dok_tasumata_idx\" now contains 1235116 row versions in 11663\n> pages\n> DETAIL: 119436 index row versions were removed.\n> 5340 index pages have been deleted, 5131 are currently reusable.\n> CPU 0.43s/0.56u sec elapsed 53.96 sec.\n> INFO: index \"dok_tellimus_idx\" now contains 1235122 row versions in 11442\n> 
pages\n> DETAIL: 119436 index row versions were removed.\n> 1704 index pages have been deleted, 1569 are currently reusable.\n> CPU 0.45s/0.59u sec elapsed 76.91 sec.\n> INFO: index \"dok_yksus_pattern_idx\" now contains 1235143 row versions in\n> 5549 pages\n> DETAIL: 119436 index row versions were removed.\n> 529 index pages have been deleted, 129 are currently reusable.\n> CPU 0.19s/0.46u sec elapsed 2.72 sec.\n> INFO: index \"dok_doktyyp\" now contains 1235143 row versions in 3899 pages\n> DETAIL: 119436 index row versions were removed.\n> 188 index pages have been deleted, 13 are currently reusable.\n> CPU 0.14s/0.44u sec elapsed 1.40 sec.\n> INFO: index \"dok_sihtyksus_pattern_idx\" now contains 1235143 row versions\n> in 5353 pages\n> DETAIL: 119436 index row versions were removed.\n> 286 index pages have been deleted, 5 are currently reusable.\n> CPU 0.13s/0.45u sec elapsed 3.01 sec.\n> INFO: \"dok\": removed 119436 row versions in 13707 pages\n> DETAIL: CPU 0.80s/0.37u sec elapsed 14.15 sec.\n> INFO: \"dok\": found 119436 removable, 1235085 nonremovable row versions in\n> 171641 pages\n> DETAIL: 2 dead row versions cannot be removed yet.\n> There were 1834279 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 6.56s/6.88u sec elapsed 450.54 sec.\n> INFO: vacuuming \"pg_toast.pg_toast_40595\"\n> INFO: index \"pg_toast_40595_index\" now contains 0 row versions in 1 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"pg_toast_40595\": found 0 removable, 0 nonremovable row versions in\n> 0\n> pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: vacuuming \"firma2.rid\"\n> INFO: index \"rid_pkey\" now contains 3275197 row versions in 13959 pages\n> DETAIL: 38331 index row versions were removed.\n> 262 index pages have been deleted, 262 are currently reusable.\n> CPU 0.42s/1.05u sec elapsed 58.56 sec.\n> INFO: index \"rid_dokumnr_idx\" now contains 3275200 row versions in 14125\n> pages\n> DETAIL: 38331 index row versions were removed.\n> 572 index pages have been deleted, 571 are currently reusable.\n> CPU 0.49s/1.14u sec elapsed 71.57 sec.\n> INFO: index \"rid_inpdokumnr_idx\" now contains 3275200 row versions in\n> 15103\n> pages\n> DETAIL: 38331 index row versions were removed.\n> 579 index pages have been deleted, 579 are currently reusable.\n> CPU 0.66s/1.03u sec elapsed 68.38 sec.\n> INFO: index \"rid_toode_idx\" now contains 3275224 row versions in 31094\n> pages\n> DETAIL: 38331 index row versions were removed.\n> 2290 index pages have been deleted, 2290 are currently reusable.\n> CPU 1.39s/1.58u sec elapsed 333.82 sec.\n> INFO: index \"rid_rtellimus_idx\" now contains 3275230 row versions in 7390\n> pages\n> DETAIL: 18591 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.18s/0.66u sec elapsed 1.78 sec.\n> INFO: index \"rid_toode_pattern_idx\" now contains 3275230 row versions in\n> 16310 pages\n> DETAIL: 17800 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.44s/1.04u sec elapsed 6.55 sec.\n> INFO: \"rid\": removed 38331 row versions in 3090 pages\n> DETAIL: CPU 0.20s/0.10u sec elapsed 5.49 sec.\n> INFO: \"rid\": found 38331 removable, 3275189 nonremovable row versions in\n> 165282 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 1878923 
unused item pointers.\n> 0 pages are entirely empty.\n> CPU 5.06s/7.27u sec elapsed 607.59 sec.\n>\n> Query returned successfully with no result in 1058319 ms.\n>\n>> - what's the size of the dataset relative to the RAM ?\n>\n> Db size is 7417 MB\n> relevant table sizes in desc by size order:\n>\n> 1 40595 dok 2345 MB\n> 2 1214 pg_shdepend 2259 MB\n> 3 40926 rid 2057 MB\n> 6 1232 pg_shdepend_depender_index 795 MB\n> 7 1233 pg_shdepend_reference_index 438 MB\n> 8 44286 dok_tasudok_idx 245 MB\n> 9 44299 rid_toode_idx 243 MB\n> 10 44283 dok_klient_idx 142 MB\n> 11 19103791 rid_toode_pattern_idx 127 MB\n> 14 44298 rid_inpdokumnr_idx 118 MB\n> 15 44297 rid_dokumnr_idx 110 MB\n> 16 43573 rid_pkey 109 MB\n> 18 44288 dok_tasumata_idx 91 MB\n> 19 44289 dok_tellimus_idx 89 MB\n> 20 44284 dok_krdokumnr_idx 89 MB\n> 21 44285 dok_kuupaev_idx 84 MB\n> 23 43479 dok_pkey 74 MB\n> 24 44282 dok_dokumnr_idx 74 MB\n> 25 19076304 rid_rtellimus_idx 58 MB\n> 26 18663923 dok_yksus_pattern_idx 43 MB\n> 27 18801591 dok_sihtyksus_pattern_idx 42 MB\n> 29 18774881 dok_doktyyp 30 MB\n> 46 40967 toode 13 MB\n>\n> server is HP Proliant DL320 G3\n> http://h18000.www1.hp.com/products/quickspecs/12169_ca/12169_ca.HTML\n> CPU is 2.93Ghz Celeron 256kb cache.\n>\n> Server has 2 GB RAM.\n> It has SATA RAID 0,1 integrated controller (1.5Gbps) and SAMSUNG HD160JJ\n> mirrored disks.\n>\n>> Now let's look more closely at the query :\n>>\n>> explain analyze\n>> SELECT sum(1)\n>> FROM dok JOIN rid USING (dokumnr)\n>> JOIN toode USING (toode)\n>> LEFT JOIN artliik using(grupp,liik)\n>> WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n>>\n>>\n>> I presume doing the query without artliik changes nothing to the\n>> runtime,\n>> yes ?\n>\n> Yes. After removing artkliik from join I got response times 65 and 50\n> seconds, so this does not make difference.\n>\n>> Your problem here is that, no matter what, postgres will have to examine\n>> - all rows where dok.kuupaev>='2008-09-01',\n>> - and all rows where rid.toode = 'X05'.\n>> If you use dok.kuupaev>='2007-09-01' (note : 2007) it will probably have\n>> to scan many, many more rows.\n>\n> Probably yes, since then it reads one year more sales data.\n>\n>> If you perform this query often you could CLUSTER rid on (toode) and dok\n>> on (kuupaev), but this can screw other queries.\n>\n> Some reports are by sales date (dok.kuupaev) and customers.\n> CLUSTER rid on (toode) slows them down. Also autovacuum cannot do\n> clustering.\n>\n>> What is the meaning of the columns ?\n>\n> This is typical sales data:\n>\n> -- Receipt headers:\n> DOK ( dokumnr INT SERIAL PRIMARY KEY,\n> kuupaev DATE --- sales date\n> )\n>\n> -- Receipt details\n> RID ( dokumnr INT,\n> toode CHAR(20), -- item code\n> CONSTRAINT rid_dokumnr_fkey FOREIGN KEY (dokumnr) REFERENCES dok\n> (dokumnr),\n> CONSTRAINT rid_toode_fkey FOREIGN KEY (toode)\n> REFERENCES firma2.toode (toode)\n> )\n>\n> -- Products\n> TOODE (\n> toode CHAR(20) PRIMARY KEY\n> )\n>\n>> To make this type of query faster I would tend to think about :\n>\n>> - denormalization (ie adding a column in one of your tables and a\n>> multicolumn index)\n>\n> For this query it is possible to duplicate kuupaev column to rid table.\n> However most of the this seems to go to scanning rid table, so I suspect\n> that this will help.\n>\n>> - materialized views\n>> - materialized summary tables (ie. 
summary of sales for last month, for\n>> instance)\n>\n> There are about 1000 items and reports are different.\n>\n> Andrus.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Fri, 21 Nov 2008 17:15:10 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Andrus wrote:\n>> - what's the size of the dataset relative to the RAM ?\n> \n> Db size is 7417 MB\n> relevant table sizes in desc by size order:\n> \n> 1 40595 dok 2345 MB\n\n\n> 2 1214 pg_shdepend 2259 MB\n> 6 1232 pg_shdepend_depender_index 795 MB\n> 7 1233 pg_shdepend_reference_index 438 MB\n\nThese three are highly suspicious. They track dependencies between\nsystem object (so you can't drop function F because trigger T depends on\nit).\n\nhttp://www.postgresql.org/docs/8.3/static/catalog-pg-shdepend.html\n\nYou've got 3.5GB of data there, which is a *lot* of dependencies.\n\nTry \"SELECT count(*) FROM pg_shdepend\".\n\nIf it's not a million rows, then the table is bloated. Try (as postgres\nor some other db superuser) \"vacuum full pg_shdepend\" and a \"reindex\npg_shdepend\".\n\nIf it is a million rows, you'll need to find out why. Do you have a lot\nof temporary tables that aren't being dropped or something similar?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 21 Nov 2008 16:32:50 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Richard,\n\nThank you.\n\n> Try \"SELECT count(*) FROM pg_shdepend\".\n\nThis query returns 3625 and takes 35 seconds to run.\n\n> If it's not a million rows, then the table is bloated. Try (as postgres\n> or some other db superuser) \"vacuum full pg_shdepend\" and a \"reindex\n> pg_shdepend\".\n\nvacuum full verbose pg_shdepend\nINFO: vacuuming \"pg_catalog.pg_shdepend\"\nINFO: \"pg_shdepend\": found 16103561 removable, 3629 nonremovable row \nversions in 131425 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 49 to 49 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 1009387632 bytes.\n131363 pages are or will become empty, including 0 at the end of the table.\n131425 pages containing 1009387632 free bytes are potential move \ndestinations.\nCPU 2.12s/1.69u sec elapsed 52.66 sec.\nINFO: index \"pg_shdepend_depender_index\" now contains 3629 row versions in \n101794 pages\nDETAIL: 16103561 index row versions were removed.\n101311 index pages have been deleted, 20000 are currently reusable.\nCPU 20.12s/14.52u sec elapsed 220.66 sec.\n\nAfter 400 seconds of run I got phone calls that server does not respond to \nother clients. So I was forced to cancel \"vacuum full verbose pg_shdepend\n\" command.\n\nHow to run it so that other users can use database at same time ?\n\n> If it is a million rows, you'll need to find out why. Do you have a lot\n> of temporary tables that aren't being dropped or something similar?\n\nApplication creates temporary tables in many places. 
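A quick way to check whether old temporary tables are piling up (a sketch using only the system catalogs, not something posted in the thread itself) is to list relations living in the pg_temp schemas:

SELECT n.nspname, c.relname, c.relkind
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp%'
ORDER BY n.nspname;

Every temporary table also leaves rows in catalogs such as pg_shdepend while it exists, so heavy temp-table churn is consistent with the catalog bloat seen above.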
Every sales operation \nprobably creates some temporary tables.\nShould I change something in configuration or change application (Only \nsingle POS application which is used to access this db) or is only solution \nto manully run\n\nvacuum full pg_shdepend\nreindex pg_shdepend\n\nperiodically ?\nHow to vacuum full pg_shdepend automatically so that other users can work at \nsame time ?\n\nHopefully this table size does not affect to query speed.\n\nAndrus. \n\n", "msg_date": "Fri, 21 Nov 2008 19:51:01 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n> How to vacuum full pg_shdepend automatically so that other users can \n> work at same time ?\n\n\tYour table is horribly bloated.\n\tYou must use VACUUM FULL + REINDEX (as superuser) on it, however \nunfortunately, it is blocking.\n\tTherefore, you should wait for sunday night to do this, when noone will \nnotice.\n\tMeanwhile, you can always VACUUM it (as superuser) and REINDEX it.\n\tAnd while you're at it, VACUUM FULL + reindex the entire database.\n\n\tTo avoid such annoyances in the future, you should ensure that autovacuum \nruns properly ; you should investigate this. If you use a cron'ed VACUUM \nthat does not run as superuser, then it will not be able to VACUUM the \nsystem catalogs, and the problem will come back.\n\n", "msg_date": "Fri, 21 Nov 2008 18:57:50 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n> Server has 2 GB RAM.\n> It has SATA RAID 0,1 integrated controller (1.5Gbps) and SAMSUNG HD160JJ\n> mirrored disks.\n\n\tYou could perhaps run a little check on the performance of the RAID, is \nit better than linux software RAID ?\n\tDoes it leverage NCQ appropriately when running queries in parallel ?\n\n> -- Receipt headers:\n> DOK ( dokumnr INT SERIAL PRIMARY KEY,\n> kuupaev DATE --- sales date\n> )\n> -- Receipt details\n> RID ( dokumnr INT,\n> toode CHAR(20), -- item code\n> CONSTRAINT rid_dokumnr_fkey FOREIGN KEY (dokumnr) REFERENCES dok\n> (dokumnr),\n> CONSTRAINT rid_toode_fkey FOREIGN KEY (toode)\n> REFERENCES firma2.toode (toode)\n> )\n> -- Products\n> TOODE (\n> toode CHAR(20) PRIMARY KEY\n> )\n\n\tOK so pretty straightforward :\n\n\tdok <-(dokumnr)-> rid <-(toode)-> toode\n\n\ttoode.toode should really be an INT though.\n\n>> explain analyze\n>> SELECT sum(1)\n>> FROM dok JOIN rid USING (dokumnr)\n>> JOIN toode USING (toode)\n>> LEFT JOIN artliik using(grupp,liik)\n>> WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\n\tBy the way, note that the presence of the toode table in the query above \nis not required at all, unless you use columns of toode in your aggregates.\n\n\tLet's play with that, after all, it's friday night.\n\nBEGIN;\nCREATE TABLE orders (order_id INTEGER NOT NULL, order_date DATE NOT NULL);\nCREATE TABLE products (product_id INTEGER NOT NULL, product_name TEXT NOT \nNULL);\nCREATE TABLE orders_products (order_id INTEGER NOT NULL, product_id \nINTEGER NOT NULL, padding1 TEXT, padding2 TEXT);\n\nINSERT INTO products SELECT n, 'product number ' || n::TEXT FROM \ngenerate_series(1,40000) AS n;\nINSERT INTO orders SELECT n,'2000-01-01'::date + (n/1000 * '1 \nDAY'::interval) FROM generate_series(1,1000000) AS n;\n\nSET work_mem TO '1GB';\nINSERT INTO orders_products SELECT \na,b,'aibaifbaurgbyioubyfazierugybfoaybofauez', \n'hfohbdsqbhjhqsvdfiuazvfgiurvgazrhbazboifhaoifh'\n FROM (SELECT 
DISTINCT (1+(n/10))::INTEGER AS a, \n(1+(random()*39999))::INTEGER AS b FROM generate_series( 1,9999999 ) AS n) \nAS x;\n\nDELETE FROM orders_products WHERE product_id NOT IN (SELECT product_id \n FROM products);\nDELETE FROM orders_products WHERE order_id NOT IN (SELECT order_id FROM \norders);\nALTER TABLE orders ADD PRIMARY KEY (order_id);\nALTER TABLE products ADD PRIMARY KEY (product_id);\nALTER TABLE orders_products ADD PRIMARY KEY (order_id,product_id);\nALTER TABLE orders_products ADD FOREIGN KEY (product_id) REFERENCES \nproducts( product_id ) ON DELETE CASCADE;\nALTER TABLE orders_products ADD FOREIGN KEY (order_id) REFERENCES orders( \norder_id ) ON DELETE CASCADE;\nCREATE INDEX orders_date ON orders( order_date );\nCOMMIT;\nSET work_mem TO DEFAULT;\nANALYZE;\n\nWith the following query :\n\nEXPLAIN ANALYZE SELECT sum(1)\n FROM orders\nJOIN orders_products USING (order_id)\nJOIN products USING (product_id)\nWHERE orders.order_date BETWEEN '2000-01-01' AND '2000-02-01'\nAND products.product_id = 12345;\n\nI get the following results :\n\norders_products has a PK index on (order_id, product_id). I dropped it.\n\nNo index on orders_products :\n\t=> Big seq scan (16 seconds)\n\nIndex on orders_products( product_id ) :\n Aggregate (cost=2227.22..2227.23 rows=1 width=0) (actual \ntime=108.204..108.205 rows=1 loops=1)\n -> Nested Loop (cost=1312.30..2227.20 rows=7 width=0) (actual \ntime=105.929..108.191 rows=6 loops=1)\n -> Index Scan using products_pkey on products (cost=0.00..8.27 \nrows=1 width=4) (actual time=0.010..0.014 rows=1 loops=1)\n Index Cond: (product_id = 12345)\n -> Hash Join (cost=1312.30..2218.85 rows=7 width=4) (actual \ntime=105.914..108.167 rows=6 loops=1)\n Hash Cond: (orders_products.order_id = orders.order_id)\n -> Bitmap Heap Scan on orders_products (cost=6.93..910.80 \nrows=232 width=8) (actual time=0.194..2.175 rows=246 loops=1)\n Recheck Cond: (product_id = 12345)\n -> Bitmap Index Scan on orders_products_product_id \n(cost=0.00..6.87 rows=232 width=0) (actual time=0.129..0.129 rows=246 \nloops=1)\n Index Cond: (product_id = 12345)\n -> Hash (cost=949.98..949.98 rows=28432 width=4) (actual \ntime=105.696..105.696 rows=31999 loops=1)\n -> Index Scan using orders_date on orders \n(cost=0.00..949.98 rows=28432 width=4) (actual time=0.059..64.443 \nrows=31999 loops=1)\n Index Cond: ((order_date >= '2000-01-01'::date) \nAND (order_date <= '2000-02-01'::date))\n Total runtime: 108.357 ms\n(don't trust this timing, it's a bit cached, this is the same plan as you \nget)\n\nIndex on orders_products( product_id ) and orders_products( order_id ):\n\t=> Same plan\n\n\tNote that in this case, a smarter planner would use the new index to \nperform a BitmapAnd before hitting the heap to get the rows.\n\nIndex on ( order_id, product_id ), orders_products( product_id ):\nIndex on ( order_id, product_id ):\n\t=> Different plan, slower (especially in second case).\n\nIf a \"order_date\" column is added to the \"orders_products\" table to make \nit into some kind of materialized view :\n\nCREATE TABLE orders_products2 AS SELECT orders.order_id, \norders.order_date, product_id FROM orders JOIN orders_products USING \n(order_id);\n\nAnd an index is created on (product_id, order_date) we get this :\n\n Aggregate (cost=100.44..100.45 rows=1 width=0) (actual time=0.176..0.177 \nrows=1 loops=1)\n -> Nested Loop (cost=0.00..100.42 rows=7 width=0) (actual \ntime=0.083..0.168 rows=6 loops=1)\n -> Index Scan using products_pkey on products (cost=0.00..8.27 \nrows=1 width=4) (actual 
time=0.012..0.013 rows=1 loops=1)\n Index Cond: (product_id = 12345)\n -> Nested Loop (cost=0.00..92.08 rows=7 width=4) (actual \ntime=0.068..0.147 rows=6 loops=1)\n -> Index Scan using orders_products2_pid_date on \norders_products2 (cost=0.00..33.50 rows=7 width=8) (actual \ntime=0.053..0.076 rows=6 loops=1)\n Index Cond: ((product_id = 12345) AND (order_date >= \n'2000-01-01'::date) AND (order_date <= '2000-02-01'::date))\n -> Index Scan using orders_pkey on orders \n(cost=0.00..8.36 rows=1 width=4) (actual time=0.008..0.009 rows=1 loops=6)\n Index Cond: (orders.order_id = \norders_products2.order_id)\n Total runtime: 0.246 ms\n\nAn index on (order_date,product_id) produces the same effect ; the index \nscan is slower, but the heap scan uses the same amount of IO.\n\nTwo indexes, (order_date) and (product_id), strangely, do not produce a \nBitmapAnd ; instead a plan with more IO is chosen.\n\n\n>> - denormalization (ie adding a column in one of your tables and a\n>> multicolumn index)\n>\n> For this query it is possible to duplicate kuupaev column to rid table.\n> However most of the this seems to go to scanning rid table, so I suspect\n> that this will help.\n\n\tYes, most of the time goes to scanning rid table, and this is the time \nthat should be reduced.\n\tAdding a date column in \"rid\" would allow you to create a multicolumn \nindex on rid (dokumnr,date) which would massively speed up the particular \nquery above.\n\tIf you don't create a multicolumn index, this denormalization is useless.\n\n\tBasically instead of scanning all rows in \"rid\" where\n\n>\n>> - materialized views\n>> - materialized summary tables (ie. summary of sales for last month, for\n>> instance)\n>\n> There are about 1000 items and reports are different.\n\n\tIt all depends on what you put in your summary table...\n\n\n", "msg_date": "Fri, 21 Nov 2008 19:31:42 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Thomas,\n\nThank you.\n\n> Just the most important points:\n>\n> 1) \"dok\" table contains 1235086 row versions in 171641 pages (with 8kB\n> pages this means 1.4GB MB of data), but there are 1834279 unused item\n> pointers (i.e. about 60% of the space is wasted)\n>\n> 2) \"rid\" table contains 3275189 roiws in 165282 (with 8kB pages this means\n> about 1.3GB of data), but there are 1878923 unused item pointers (i.e.\n> about 30% of the space is wasted)\n>\n> 3) don't forget to execute analyze after vacuuming (or vacuum analyze)\n\nautovacuum is running.\nSo if I understand properly, I must ran\nVACUUM FULL ANALYZE dok;\nVACUUM FULL ANALYZE rid;\n\nThose commands cause server probably to stop responding to other client like\nvacuum full pg_shdepend\ndid.\n\nShould vacuum_cost_delay = 2000 allow other users to work when running those\ncommands ?\n\n> 4) I'm not sure why the sizes reported by you (for example 2.3GB vs 1.5GB\n> for \"doc\" table) - the difference seems too large for me.\n\nI used pg_total_relation_size(). 
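For reference, a listing like the one below can be produced with a catalog query along these lines (a sketch, not necessarily the exact query that was used):

SELECT c.oid, n.nspname || '.' || c.relname AS relation,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i', 't')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 30;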
So 2.3 GB includes indexes also:\n\n 8 44286 dok_tasudok_idx 245 MB\n 10 44283 dok_klient_idx 142 MB\n 18 44288 dok_tasumata_idx 91 MB\n 19 44289 dok_tellimus_idx 89 MB\n 20 44284dok_krdokumnr_idx 89 MB\n 21 44285 dok_kuupaev_idx 84 MB\n 22 43531 makse_pkey 77 MB\n 23 43479 dok_pkey 74 MB\n 24 44282 dok_dokumnr_idx 74 MB\n 26 18663923 dok_yksus_pattern_idx 43 MB\n 27 18801591 dok_sihtyksus_pattern_idx 42 MB\n\n> Anyway the amount of wasted rows seems significant to me - I'd try to\n> solve this first. Either by VACUUM FULL or by CLUSTER. The CLUSTER will\n> lock the table exclusively, but the results may be better (when sorting by\n> a well chosen index). Don't forget to run ANALYZE afterwards.\n\nHow to invoke those commands so that other clients can continue work?\nI'm using 8.1.4.\nLog files show that autovacuum is running.\n\nI'm planning the following solution:\n\n1. Set\n\nvacuum_cost_delay=2000\n\n2. Run the following commands periodically in this order:\n\nVACUUM FULL;\nvacuum full pg_shdepend;\nCLUSTER rid on (toode);\nCLUSTER dok on (kuupaev);\nREINDEX DATABASE mydb;\nREINDEX SYSTEM mydb;\nANALYZE;\n\nAre all those command required or can something leaved out ?\n\n> Several other things to consider:\n>\n> 1) Regarding the toode column - why are you using CHAR(20) when the values\n> are actually shorter? This may significantly increase the amount of space\n> required.\n\nThere may be some products whose codes may be up to 20 characters.\nPostgreSQL does not hold trailing spaces in db, so this does *not* affect to\nspace.\n\n> 2) I've noticed the CPU used is Celeron, which may negatively affect the\n> speed of hash computation. I'd try to replace it by something faster - say\n> INTEGER as an artificial primary key of the \"toode\" table and using it as\n> a FK in other tables. This might improve the \"Bitmap Heap Scan on rid\"\n> part, but yes - it's just a minor improvement compared to the \"Hash Join\"\n> part of the query.\n\nNatural key Toode CHAR(20) is used widely in different queries. Replacing it\nwith\nINT surrogate key requires major application rewrite.\n\nShould I add surrogate index INT columns to toode and rid table and measure\ntest query speed in this case?\n\n> Materialized views seem like a good idea to me, but maybe I'm not seeing\n> something. What do you mean by \"reports are different\"? If there is a lot\n> of rows for a given product / day, then creating an aggregated table with\n> (product code / day) as a primary key is quite simple. It may require a\n> lot of disk space, but it'll remove the hash join overhead. But if the\n> queries are very different, then it may be difficult to build such\n> materialized view(s).\n\nlog file seems that mostly only those queries are slow:\n\nSELECT ...\n FROM dok JOIN rid USING (dokumnr)\n JOIN ProductId USING (ProductId)\n WHERE rid.ProductId LIKE :p1 || '%' AND dok.SaleDate>=:p2\n\n:p1 and :p2 are parameters different for different queries.\n\ndok contains several years of data. 
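One concrete form of the denormalization idea mentioned earlier in the thread would be to copy the sale date into rid and index it next to the product code (a sketch only — the column, index and trigger below are assumptions, the backfill UPDATE rewrites the whole table, and a trigger is needed to keep the copy in sync):

ALTER TABLE rid ADD COLUMN kuupaev date;
UPDATE rid SET kuupaev = dok.kuupaev FROM dok WHERE dok.dokumnr = rid.dokumnr;
CREATE INDEX rid_toode_kuupaev_idx
    ON rid (toode bpchar_pattern_ops, kuupaev);
ANALYZE rid;
-- plus an insert/update trigger so that rid.kuupaev always mirrors dok.kuupaev

With such an index the prefix match on the product code and the date filter can both be answered from one index on rid, without first hash-joining the whole date range of dok.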
:p2 is usually only few previous months\nor last year ago.\nSELECT column list contains fixed list of known columns from all tables.\n\nHow to create index or materialized view to optimize this types of queries ?\n\nAndrus.\n\n", "msg_date": "Fri, 21 Nov 2008 21:00:09 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n\n> log file seems that mostly only those queries are slow:\n>\n> SELECT ...\n> FROM dok JOIN rid USING (dokumnr)\n> JOIN ProductId USING (ProductId)\n> WHERE rid.ProductId LIKE :p1 || '%' AND dok.SaleDate>=:p2\n>\n> :p1 and :p2 are parameters different for different queries.\n>\n> dok contains several years of data. :p2 is usually only few previous \n> months\n> or last year ago.\n> SELECT column list contains fixed list of known columns from all tables.\n>\n> How to create index or materialized view to optimize this types of \n> queries ?\n>\n\n\tI would remove some granularity, for instance create a summary table \n(materialized view) by month :\n\n- date (contains the first day of the month)\n- product_id\n- total quantity, total price sold in given month\n\n\tYou get the idea.\n\tIf your products belong to categories, and you make queries on all the \nproducts in a category, it could be worth making a summary table for \ncategories also.\n", "msg_date": "Fri, 21 Nov 2008 20:08:27 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": ">> How to vacuum full pg_shdepend automatically so that other users can \n>> work at same time ?\n>\n> Your table is horribly bloated.\n> You must use VACUUM FULL + REINDEX (as superuser) on it, however \n> unfortunately, it is blocking.\n> Therefore, you should wait for sunday night to do this, when noone will \n> notice.\n\nShops are closed late night for a short time, including sunday night.\nThis time may be shorter than time required to complete VACUUM command.\n\nI discovered vacuum_cost_delay=2000 option. Will this remove blocking issue \nand allow vacuum full to work ?\n\n> Meanwhile, you can always VACUUM it (as superuser) and REINDEX it.\n\nI expect that autovacuum does this automatically.\n\n> And while you're at it, VACUUM FULL + reindex the entire database.\n> To avoid such annoyances in the future, you should ensure that autovacuum \n> runs properly ; you should investigate this. If you use a cron'ed VACUUM \n> that does not run as superuser, then it will not be able to VACUUM the \n> system catalogs, and the problem will come back.\n\nautovacuum is turned on in postgresql.conf file\nlog file shows a lot of messages every day that database is vacuumed.\nI assume that it is running as user postgres.\n\nI do'nt understand how autovacuum can avoid this: it does not perform vacuum \nfull so pg_shdepend ja my tables become\nbloated again and again.\n\nAndrus. \n\n", "msg_date": "Fri, 21 Nov 2008 21:10:21 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "On Friday 21 November 2008, \"Andrus\" <[email protected]> wrote:\n> Those commands cause server probably to stop responding to other client\n> like vacuum full pg_shdepend\n> did.\n>\n> Should vacuum_cost_delay = 2000 allow other users to work when running\n> those commands ?\n\nAny vacuum full or cluster will lock out other clients. 
A high \nvacuum_cost_delay will just make the vacuum run slower.\n\n-- \nCorporations will ingest natural resources and defecate garbage until all \nresources are depleted, debt can no longer be repaid and our money becomes \nworthless - Jay Hanson\n", "msg_date": "Fri, 21 Nov 2008 11:13:46 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Andrus wrote:\n\n> I discovered vacuum_cost_delay=2000 option. Will this remove blocking \n> issue and allow vacuum full to work ?\n\nNo.\n\nAre you really using vacuum_cost_delay=2000? If so, therein lies your\nproblem. That's a silly value to use for that variable. Useful values\nare in the 20-40 range probably, or maybe 10-100 being extremely\ngenerous.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 21 Nov 2008 16:16:46 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Alvaro,\n\n> Are you really using vacuum_cost_delay=2000? If so, therein lies your\n> problem. That's a silly value to use for that variable. Useful values\n> are in the 20-40 range probably, or maybe 10-100 being extremely\n> generous.\n\nThank you.\nMy 8.1.4 postgresql.conf does not contain such option. So vacuum_cost_delay \nis off probably.\nSince doc does not recommend any value, I planned to use 2000\n\nWill value of 30 allow other clients to work when VACUUM FULL is running ?\n\nUncommented relevant values in postgresql.conf file are:\n\nshared_buffers = 15000\nwork_mem = 512\nmaintenance_work_mem = 131072\nfsync = on\neffective_cache_size= 70000\nlog_min_duration_statement= 30000\n\nAndrus. \n\n", "msg_date": "Fri, 21 Nov 2008 21:45:11 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Andrus wrote:\n\n> Will value of 30 allow other clients to work when VACUUM FULL is running ?\n\n1. vacuum_cost_delay does not affect vacuum full\n2. vacuum full is always blocking, regardless of settings\n\nSo I gather you're not doing any vacuuming, eh?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 21 Nov 2008 16:48:05 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "PFC <[email protected]> writes:\n> Index on orders_products( product_id ) and orders_products( order_id ):\n> \t=> Same plan\n\n> \tNote that in this case, a smarter planner would use the new index to \n> perform a BitmapAnd before hitting the heap to get the rows.\n\nConsidering that the query has no constraint on\norders_products.order_id, I'm not sure what you think the extra index is\nsupposed to be used *for*.\n\n(Well, we could put orders as the outside of a nestloop and then we'd\nhave such a constraint, but with 30000 orders rows to process that plan\nwould lose big.)\n\n(And yes, the planner did consider such a plan along the way.\nSee choose_bitmap_and.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Nov 2008 15:07:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds " }, { "msg_contents": "Alvaro,\n\n> 1. 
vacuum_cost_delay does not affect vacuum full\n> 2. vacuum full is always blocking, regardless of settings\n\nSo only way is to disable other database acces if vacuum full is required.\n\n> So I gather you're not doing any vacuuming, eh?\n\nLog files for every day are full of garbage messages below.\nSo I hope that vacuum is running well, isn't it ?\n\nAndrus.\n\n2008-11-19 00:00:48 EET 11728 1 LOG: autovacuum: processing database \n\"postgres\"\n2008-11-19 00:01:48 EET 11729 1 LOG: autovacuum: processing database \n\"mydb1\"\n2008-11-19 00:02:48 EET 11730 1 LOG: autovacuum: processing database \n\"emydb1\"\n2008-11-19 00:03:48 EET 11731 1 LOG: autovacuum: processing database \n\"template1\"\n2008-11-19 00:04:48 EET 11732 1 LOG: autovacuum: processing database \n\"testmydb1\"\n2008-11-19 00:05:48 EET 11733 1 LOG: autovacuum: processing database \n\"mydb3\"\n2008-11-19 00:06:48 EET 11734 1 LOG: autovacuum: processing database \n\"postgres\"\n2008-11-19 00:07:48 EET 11735 1 LOG: autovacuum: processing database \n\"mydb1\"\n2008-11-19 00:08:48 EET 11736 1 LOG: autovacuum: processing database \n\"emydb1\"\n2008-11-19 00:09:48 EET 11737 1 LOG: autovacuum: processing database \n\"template1\"\n2008-11-19 00:10:48 EET 11750 1 LOG: autovacuum: processing database \n\"testmydb1\"\n2008-11-19 00:11:48 EET 11751 1 LOG: autovacuum: processing database \n\"mydb3\"\n2008-11-19 00:12:48 EET 11752 1 LOG: autovacuum: processing database \n\"postgres\"\n2008-11-19 00:13:48 EET 11753 1 LOG: autovacuum: processing database \n\"mydb1\"\n2008-11-19 00:14:48 EET 11754 1 LOG: autovacuum: processing database \n\"emydb1\"\n2008-11-19 00:15:48 EET 11755 1 LOG: autovacuum: processing database \n\"template1\"\n2008-11-19 00:16:48 EET 11756 1 LOG: autovacuum: processing database \n\"testmydb1\"\n2008-11-19 00:17:48 EET 11757 1 LOG: autovacuum: processing database \n\"mydb3\"\n... \n\n", "msg_date": "Fri, 21 Nov 2008 22:08:45 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Andrus wrote:\n\n>> So I gather you're not doing any vacuuming, eh?\n>\n> Log files for every day are full of garbage messages below.\n> So I hope that vacuum is running well, isn't it ?\n\nThis does not really mean that autovacuum has done anything in the\ndatabases. If the times are consistently separated by 1 min, then it's\npossible that it always exits without doing anything.\n\nIn such old a release we didn't have any decent logging mechanism in\nautovacuum :-( You can change log_min_messages to debug2 to see if it\nactually does anything or not.\n\nI suggest you connect to the problem database (and then to all others,\njust to be sure) and run \"vacuum\" (no full).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 21 Nov 2008 17:20:36 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> Thank you.\n> My 8.1.4 postgresql.conf does not contain such option. 
So \n> vacuum_cost_delay is off probably.\n> Since doc does not recommend any value, I planned to use 2000\n> \n> Will value of 30 allow other clients to work when VACUUM FULL is running ?\n\nNo, as someone already noted the VACUUM FULL is blocking anyway (and \ndoes not use this value at all).\n\n> Uncommented relevant values in postgresql.conf file are:\n> \n> shared_buffers = 15000\n> work_mem = 512\n\nI'd consider increasing this value a little - 0.5 MB seems too low to me \n(but not necessarily).\n\n> maintenance_work_mem = 131072\n> fsync = on\n> effective_cache_size= 70000\n\nWell, your server has 2GB of RAM and usually it's recommended to set \nthis value to about 60-70% of your RAM, so using 540MB (25%) seems quite \nlow. Anyway this is just a hint to PostgreSQL, it does not increase \nmemory consumption or so - it's just an estimate of how much data are \ncached by kernel.\n\nAnyway, I don't expect these values have significant effect in case of \nthe issue solved in this thread.\n\nregards\nTomas\n", "msg_date": "Fri, 21 Nov 2008 21:48:21 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> 2. Run the following commands periodically in this order:\n> \n> VACUUM FULL;\n> vacuum full pg_shdepend;\n> CLUSTER rid on (toode);\n> CLUSTER dok on (kuupaev);\n> REINDEX DATABASE mydb;\n> REINDEX SYSTEM mydb;\n> ANALYZE;\n> \n> Are all those command required or can something leaved out ?\n\nRunning CLUSTER after VACUUM FULL is just a waste of time. In my \nexperience CLUSTER is actually faster in case of such heavily bloated \ntables - I think this is caused by the fact that it creates indexes from \nthe beginning instead of updating them (as VACUUM FULL does).\n\nSo CLUSTER actually performs REINDEX, so I'd just run\n\nCLUSTER rid ON rid_pkey;\nCLUSTER dok ON dok_pkey;\nANALYZE rid;\nANALYZE dok;\n\nClustering by other indexes might give better performance, using primary \nkeys is just a safe guess here. This should improve the performance of \nyour query and it seems these two tables are the most bloated ones.\n\nI wouldn't do the same maintenance on the other tables now - it's just a \nwaste of time.\n\n> \n>> Several other things to consider:\n>>\n>> 1) Regarding the toode column - why are you using CHAR(20) when the \n>> values\n>> are actually shorter? This may significantly increase the amount of space\n>> required.\n> \n> There may be some products whose codes may be up to 20 characters.\n> PostgreSQL does not hold trailing spaces in db, so this does *not* \n> affect to\n> space.\n\nOK, I haven't realized this. You're right.\n\n>> 2) I've noticed the CPU used is Celeron, which may negatively affect the\n>> speed of hash computation. I'd try to replace it by something faster - \n>> say\n>> INTEGER as an artificial primary key of the \"toode\" table and using it as\n>> a FK in other tables. This might improve the \"Bitmap Heap Scan on rid\"\n>> part, but yes - it's just a minor improvement compared to the \"Hash Join\"\n>> part of the query.\n> \n> Natural key Toode CHAR(20) is used widely in different queries. \n> Replacing it with INT surrogate key requires major application rewrite.\n> \n> Should I add surrogate index INT columns to toode and rid table and measure\n> test query speed in this case?\n\nTest it. Create tables with fake data, and compare the performance with \nand without the surrogate keys. 
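A minimal sketch of such a test, adding the surrogate key alongside the existing natural key (all names below are assumptions, not code from the thread):

CREATE SEQUENCE toode_id_seq;
ALTER TABLE toode ADD COLUMN toode_id integer;
UPDATE toode SET toode_id = nextval('toode_id_seq');
CREATE UNIQUE INDEX toode_toode_id_uidx ON toode (toode_id);

ALTER TABLE rid ADD COLUMN toode_id integer;
UPDATE rid SET toode_id = t.toode_id FROM toode t WHERE t.toode = rid.toode;
CREATE INDEX rid_toode_id_idx ON rid (toode_id);
ANALYZE toode;
ANALYZE rid;

-- compare the two shapes of the test query:
EXPLAIN ANALYZE SELECT sum(1)
  FROM dok JOIN rid USING (dokumnr) JOIN toode USING (toode)
 WHERE rid.toode = 'X05' AND dok.kuupaev >= '2008-09-01';

EXPLAIN ANALYZE SELECT sum(1)
  FROM dok JOIN rid USING (dokumnr) JOIN toode USING (toode_id)
 WHERE toode.toode = 'X05' AND dok.kuupaev >= '2008-09-01';

If the integer join wins clearly, the application can be migrated gradually while keeping the CHAR(20) code as a unique attribute of toode.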
Using a simple data type instead of text \n gave me huge performance boost. For example one of my colleagues used \nVARCHAR(15) to store IP addresses, and then used them to join tables \n(and suffered by the poor perfomance). Replacing them by INET improved \nthe speed by several orders of magnitude.\n\n>> Materialized views seem like a good idea to me, but maybe I'm not seeing\n>> something. What do you mean by \"reports are different\"? If there is a lot\n>> of rows for a given product / day, then creating an aggregated table with\n>> (product code / day) as a primary key is quite simple. It may require a\n>> lot of disk space, but it'll remove the hash join overhead. But if the\n>> queries are very different, then it may be difficult to build such\n>> materialized view(s).\n> \n> log file seems that mostly only those queries are slow:\n> \n> SELECT ...\n> FROM dok JOIN rid USING (dokumnr)\n> JOIN ProductId USING (ProductId)\n> WHERE rid.ProductId LIKE :p1 || '%' AND dok.SaleDate>=:p2\n> \n> :p1 and :p2 are parameters different for different queries.\n> \n> dok contains several years of data. :p2 is usually only few previous months\n> or last year ago.\n> SELECT column list contains fixed list of known columns from all tables.\n> \n> How to create index or materialized view to optimize this types of \n> queries ?\n\nWell, difficult to answer without detailed information about the queries \nyou want to run, aggregated values, etc. Materialized views is a world \non it's own, and the best solution depends on (for example):\n\n1) what aggregated values are you interested in (additive values are the\n most primitive ones, while VARIANCE etc. make it difficult)\n\n2) do you need current data, or is it OK that today's data are not\n available till midnight (for example)?\n\nAnother thing you have to consider is whether you want to create \nmaterialized view with final or intermediary data and then compute the \nfinal data somehow (for example monthly totals from daily totals).\n\nThe most primitive (but often sufficient) solution is recreating the \nmaterialized view periodically (for example every midnight). In your \ncase it'd mean running something like\n\nCREATE TABLE materialized_view AS SELECT ... your query here ...\nGROUP BY productId, saleDate\n\nThis gives you daily totals for each product - the clients then can run \nanother query to compute the final data.\n\nBut of course, if you need to maintain 'current' data, you may create a \nset of triggers to update the materialized view. Either after each \nmodification or (more sophisticated) when it's needed.\n\nSee for example a great presentation from this year's PGCon:\n\nhttp://www.pgcon.org/2008/schedule/events/69.en.html\n\n\nregards\nTomas\n", "msg_date": "Fri, 21 Nov 2008 22:22:55 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> If it's not a million rows, then the table is bloated. 
Try (as postgres\n> or some other db superuser) \"vacuum full pg_shdepend\" and a \"reindex\n> pg_shdepend\".\n\nreindex table pg_shdepend causes error\n\nERROR: shared table \"pg_shdepend\" can only be reindexed in stand-alone mode\n\nvacuum full verbose pg_shdepend seems to work but indexes are still bloated.\nHow to remove index bloat ?\n\nsizes after vacuum full are below.\npg_shdepend size 1234 MB includes its index sizes, so indexes are 100% \nbloated.\n\n 4 1214 pg_catalog.pg_shdepend 1234 MB\n 6 1232 pg_catalog.pg_shdepend_depender_index 795 MB\n 7 1233 pg_catalog.pg_shdepend_reference_index 439 MB\n\nAndrus.\n\n\nvacuum full verbose pg_shdepend;\n\nINFO: vacuuming \"pg_catalog.pg_shdepend\"\nINFO: \"pg_shdepend\": found 254 removable, 3625 nonremovable row versions in \n131517 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 49 to 49 bytes long.\nThere were 16115259 unused item pointers.\nTotal free space (including removable row versions) is 1010091872 bytes.\n131456 pages are or will become empty, including 8 at the end of the table.\n131509 pages containing 1010029072 free bytes are potential move \ndestinations.\nCPU 2.08s/0.92u sec elapsed 63.51 sec.\nINFO: index \"pg_shdepend_depender_index\" now contains 3625 row versions in \n101794 pages\nDETAIL: 254 index row versions were removed.\n101611 index pages have been deleted, 20000 are currently reusable.\nCPU 0.87s/0.28u sec elapsed 25.44 sec.\nINFO: index \"pg_shdepend_reference_index\" now contains 3625 row versions in \n56139 pages\nDETAIL: 254 index row versions were removed.\n56076 index pages have been deleted, 20000 are currently reusable.\nCPU 0.51s/0.15u sec elapsed 23.10 sec.\nINFO: \"pg_shdepend\": moved 1518 row versions, truncated 131517 to 25 pages\nDETAIL: CPU 5.26s/2.39u sec elapsed 89.93 sec.\nINFO: index \"pg_shdepend_depender_index\" now contains 3625 row versions in \n101794 pages\nDETAIL: 1518 index row versions were removed.\n101609 index pages have been deleted, 20000 are currently reusable.\nCPU 0.94s/0.28u sec elapsed 24.61 sec.\nINFO: index \"pg_shdepend_reference_index\" now contains 3625 row versions in \n56139 pages\nDETAIL: 1518 index row versions were removed.\n56088 index pages have been deleted, 20000 are currently reusable.\nCPU 0.54s/0.14u sec elapsed 21.11 sec.\n\nQuery returned successfully with no result in 253356 ms\n\n\n\n", "msg_date": "Fri, 21 Nov 2008 23:51:24 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "If Autovacuum was working, and your tables still got very bloated, it may be because your free space map is not configured large enough.\r\nWhat is your value for max_fsm_pages?\r\n\r\nThe effect of having max_fsm_pages or max_fsm_relations too small is bloating of tables and indexes.\r\n\r\nIncreasing it too large will not use a lot of memory. Since you had over 130000 free pages in just one table below, make sure it is set to at least 150000. This may be overkill with regular vacuum or autovacuum, however its much better to have this too large than too small.\r\n\r\nYour server has 2GB of RAM? You should make sure your shared_buffers is between 100MB and 400MB if this is a dedicated server, or more in some cases.\r\n\r\nIf you can, plan to migrate to 8.3 (or 8.4 early next year). 
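Pulling the configuration advice in this thread together, the relevant postgresql.conf lines for this 2GB 8.1 server might look roughly like this (a sketch — the exact numbers are assumptions to verify, 8.1 expects plain integers here rather than '400MB'-style strings, and changing shared_buffers needs a restart and possibly a larger kernel SHMMAX):

max_fsm_pages = 150000          # VACUUM VERBOSE reported ~90000 slots needed
shared_buffers = 30000          # 8kB buffers, about 234MB
effective_cache_size = 150000   # 8kB pages, about 1.2GB of expected OS cache
work_mem = 4096                 # kB per sort/hash operation, up from 512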
Vacuum, bloating, and configuration related to these have improved a great deal since 8.1.\r\n\r\nAlthough your Indexes below remain bloated, the fact that they have been vacuumed, combined with a large enough value set in max_fsm_pages, means that they should not get any larger. I am not sure how to foce these to be smaller.\r\nThe larger size, after a vacuum and large enough max_fsm_pages value will also not cause a performance problem. It will waste some disk space and be a bit slower to access, but only very slightly.\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Andrus\r\nSent: Friday, November 21, 2008 1:51 PM\r\nTo: Richard Huxton\r\nCc: PFC; [email protected]\r\nSubject: Re: [PERFORM] Hash join on int takes 8..114 seconds\r\n\r\n> If it's not a million rows, then the table is bloated. Try (as postgres\r\n> or some other db superuser) \"vacuum full pg_shdepend\" and a \"reindex\r\n> pg_shdepend\".\r\n\r\nreindex table pg_shdepend causes error\r\n\r\nERROR: shared table \"pg_shdepend\" can only be reindexed in stand-alone mode\r\n\r\nvacuum full verbose pg_shdepend seems to work but indexes are still bloated.\r\nHow to remove index bloat ?\r\n\r\nsizes after vacuum full are below.\r\npg_shdepend size 1234 MB includes its index sizes, so indexes are 100%\r\nbloated.\r\n\r\n 4 1214 pg_catalog.pg_shdepend 1234 MB\r\n 6 1232 pg_catalog.pg_shdepend_depender_index 795 MB\r\n 7 1233 pg_catalog.pg_shdepend_reference_index 439 MB\r\n\r\nAndrus.\r\n\r\n\r\nvacuum full verbose pg_shdepend;\r\n\r\nINFO: vacuuming \"pg_catalog.pg_shdepend\"\r\nINFO: \"pg_shdepend\": found 254 removable, 3625 nonremovable row versions in\r\n131517 pages\r\nDETAIL: 0 dead row versions cannot be removed yet.\r\nNonremovable row versions range from 49 to 49 bytes long.\r\nThere were 16115259 unused item pointers.\r\nTotal free space (including removable row versions) is 1010091872 bytes.\r\n131456 pages are or will become empty, including 8 at the end of the table.\r\n131509 pages containing 1010029072 free bytes are potential move\r\ndestinations.\r\nCPU 2.08s/0.92u sec elapsed 63.51 sec.\r\nINFO: index \"pg_shdepend_depender_index\" now contains 3625 row versions in\r\n101794 pages\r\nDETAIL: 254 index row versions were removed.\r\n101611 index pages have been deleted, 20000 are currently reusable.\r\nCPU 0.87s/0.28u sec elapsed 25.44 sec.\r\nINFO: index \"pg_shdepend_reference_index\" now contains 3625 row versions in\r\n56139 pages\r\nDETAIL: 254 index row versions were removed.\r\n56076 index pages have been deleted, 20000 are currently reusable.\r\nCPU 0.51s/0.15u sec elapsed 23.10 sec.\r\nINFO: \"pg_shdepend\": moved 1518 row versions, truncated 131517 to 25 pages\r\nDETAIL: CPU 5.26s/2.39u sec elapsed 89.93 sec.\r\nINFO: index \"pg_shdepend_depender_index\" now contains 3625 row versions in\r\n101794 pages\r\nDETAIL: 1518 index row versions were removed.\r\n101609 index pages have been deleted, 20000 are currently reusable.\r\nCPU 0.94s/0.28u sec elapsed 24.61 sec.\r\nINFO: index \"pg_shdepend_reference_index\" now contains 3625 row versions in\r\n56139 pages\r\nDETAIL: 1518 index row versions were removed.\r\n56088 index pages have been deleted, 20000 are currently reusable.\r\nCPU 0.54s/0.14u sec elapsed 21.11 sec.\r\n\r\nQuery returned successfully with no result in 253356 ms\r\n\r\n\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your 
subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Fri, 21 Nov 2008 15:01:41 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "On Fri, 21 Nov 2008 21:07:02 +0100, Tom Lane <[email protected]> wrote:\n\n> PFC <[email protected]> writes:\n>> Index on orders_products( product_id ) and orders_products( order_id ):\n>> \t=> Same plan\n>\n>> \tNote that in this case, a smarter planner would use the new index to\n>> perform a BitmapAnd before hitting the heap to get the rows.\n>\n> Considering that the query has no constraint on\n> orders_products.order_id, I'm not sure what you think the extra index is\n> supposed to be used *for*.\n>\n> (Well, we could put orders as the outside of a nestloop and then we'd\n> have such a constraint, but with 30000 orders rows to process that plan\n> would lose big.)\n>\n> (And yes, the planner did consider such a plan along the way.\n> See choose_bitmap_and.)\n>\n> \t\t\tregards, tom lane\n\n\n\tI think I didn't express myself correctly...\n\n\tHere the indexes are small (therefore well cached) but the \norders_products table is large (and not cached).\n\tTo reproduce this, I put this table on a crummy slow external USB drive.\n\tBetween each of the following queries, pg was stopped, the USB drive \nunmounted, remounted, and pg restarted, to purge orders_products table out \nof all caches.\n\tI also modified the statistical distribution (see init script at bottom \nof message).\n\nEXPLAIN ANALYZE SELECT count(*)\n FROM orders\nJOIN orders_products USING (order_id)\nWHERE orders.order_date BETWEEN '2000-01-01' AND '2000-02-01'\nAND orders_products.product_id = 2345;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=5431.93..5431.94 rows=1 width=0) (actual \ntime=5176.382..5176.382 rows=1 loops=1)\n -> Hash Join (cost=1575.13..5431.84 rows=35 width=0) (actual \ntime=62.634..5176.332 rows=36 loops=1)\n Hash Cond: (orders_products.order_id = orders.order_id)\n -> Bitmap Heap Scan on orders_products (cost=21.27..3864.85 \nrows=1023 width=4) (actual time=7.041..5118.512 rows=1004 loops=1)\n Recheck Cond: (product_id = 2345)\n -> Bitmap Index Scan on orders_products_product_order \n(cost=0.00..21.02 rows=1023 width=0) (actual time=0.531..0.531 rows=1004 \nloops=1)\n Index Cond: (product_id = 2345)\n -> Hash (cost=1130.58..1130.58 rows=33862 width=4) (actual \ntime=55.526..55.526 rows=31999 loops=1)\n -> Index Scan using orders_date on orders \n(cost=0.00..1130.58 rows=33862 width=4) (actual time=0.139..33.466 \nrows=31999 loops=1)\n Index Cond: ((order_date >= '2000-01-01'::date) AND \n(order_date <= '2000-02-01'::date))\n Total runtime: 5176.659 ms\n\n\tThis is the original query ; what I don't like about it is that it \nbitmapscans orders_products way too much, because it reads all orders for \nthe specified product, not just orders in the date period we want.\n\n\tHowever, since Postgres scanned all order_id's corresponding to the date \nrange already, to build the hash, the list of order_ids of interest is \nknown at no extra cost. 
In this case, additionnally, correlation is 100% \nbetween order_id and date, so I can do :\n\ntest=> SELECT max(order_id), min(order_id) FROM orders WHERE order_date \nBETWEEN '2000-01-01' AND '2000-02-01';\n max | min\n-------+-----\n 31999 | 1\n\n\tAnd I can add an extra condition to the query, like this :\n\nEXPLAIN ANALYZE SELECT count(*)\n FROM orders\nJOIN orders_products USING (order_id)\nWHERE orders.order_date BETWEEN '2000-01-01' AND '2000-02-01'\nAND orders_products.order_id BETWEEN 1 AND 31999\nAND orders_products.product_id = 2345;\n QUERY \nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=426.80..426.81 rows=1 width=0) (actual \ntime=179.233..179.233 rows=1 loops=1)\n -> Nested Loop (cost=0.00..426.79 rows=1 width=0) (actual \ntime=6.667..179.190 rows=36 loops=1)\n -> Index Scan using orders_products_product_order on \norders_products (cost=0.00..142.11 rows=34 width=4) (actual \ntime=6.559..177.597 rows=36 loops=1)\n Index Cond: ((product_id = 2345) AND (order_id >= 1) AND \n(order_id <= 31999))\n -> Index Scan using orders_pkey on orders (cost=0.00..8.36 \nrows=1 width=4) (actual time=0.039..0.041 rows=1 loops=36)\n Index Cond: (orders.order_id = orders_products.order_id)\n Filter: ((orders.order_date >= '2000-01-01'::date) AND \n(orders.order_date <= '2000-02-01'::date))\n Total runtime: 179.392 ms\n\n\tThis is with no cache on orders_products table. About 30X faster.\n\tInterestingly, when everything is cached, it's even faster (about 100X)...\n\n\tThe plan I was thinking about was not a nested loop with 30K loops... \nthis would be bad as you said. It would have been something like this :\n\n- There is an index on (product_id, order_id)\n\n- Build the hash from orders table (can't avoid it)\n\n-> Hash\n -> Index Scan using orders_date on orders\n Index Cond: ((order_date >= '2000-01-01'::date) AND (order_date <= \n'2000-02-01'::date))\n\n- A slightly twisted bitmap scan form :\n\n-> Bitmap Heap Scan on orders_products\n Recheck Cond: (product_id = 2345) AND order_id IN (hash created above))\n -> Bitmap Index Scan on orders_products_product_order\n Index Cond: (product_id = 2345 AND order_id IN (hash created \nabove))\n\n\tThe Bitmap Index Scan sees the order_ids in the index it is scanning... 
\nthey could be checked before checking the visibility in the heap for the \nbig table.\n\n\n\n\nTest script:\n\n\nBEGIN;\nCREATE TABLE orders (order_id INTEGER NOT NULL, order_date DATE NOT NULL);\nCREATE TABLE products (product_id INTEGER NOT NULL, product_name TEXT NOT \nNULL);\nCREATE TABLE orders_products (order_id INTEGER NOT NULL, product_id \nINTEGER NOT NULL, padding1 TEXT, padding2 TEXT) TABLESPACE usb;\n\nINSERT INTO products SELECT n, 'product number ' || n::TEXT FROM \ngenerate_series(1,10001) AS n;\nINSERT INTO orders SELECT n,'2000-01-01'::date + (n/1000 * '1 \nDAY'::interval) FROM generate_series(1,1000000) AS n;\n\nSET work_mem TO '1GB';\nINSERT INTO orders_products SELECT \na,b,'aibaifbaurgbyioubyfazierugybfoaybofauez', \n'hfohbdsqbhjhqsvdfiuazvfgiurvgazrhbazboifhaoifh'\n FROM (SELECT DISTINCT (1+(n/10))::INTEGER AS a, \n(1+(random()*10000))::INTEGER AS b FROM generate_series( 1,9999999 ) AS n) \nAS x;\n\nDELETE FROM orders_products WHERE product_id NOT IN (SELECT product_id \n FROM products);\nDELETE FROM orders_products WHERE order_id NOT IN (SELECT order_id FROM \norders);\nALTER TABLE orders ADD PRIMARY KEY (order_id);\nALTER TABLE products ADD PRIMARY KEY (product_id);\nALTER TABLE orders_products ADD PRIMARY KEY (order_id,product_id);\nALTER TABLE orders_products ADD FOREIGN KEY (product_id) REFERENCES \nproducts( product_id ) ON DELETE CASCADE;\nALTER TABLE orders_products ADD FOREIGN KEY (order_id) REFERENCES orders( \norder_id ) ON DELETE CASCADE;\nCREATE INDEX orders_date ON orders( order_date );\nCREATE INDEX orders_products_product_order ON orders_products( product_id, \norder_id );\nCOMMIT;\nSET work_mem TO DEFAULT;\nANALYZE;\n", "msg_date": "Sat, 22 Nov 2008 15:13:53 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> You could perhaps run a little check on the performance of the RAID, is\n> it better than linux software RAID ?\n> Does it leverage NCQ appropriately when running queries in parallel ?\n\nI was told that this RAID is software RAID.\nI have no experience what to check.\nThis HP server was installed 3 years ago and in this time it was not\nhigh perfomance server.\n\n>>> explain analyze\n>>> SELECT sum(1)\n>>> FROM dok JOIN rid USING (dokumnr)\n>>> JOIN toode USING (toode)\n>>> LEFT JOIN artliik using(grupp,liik)\n>>> WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n>\n> By the way, note that the presence of the toode table in the query above\n> is not required at all, unless you use columns of toode in your\n> aggregates.\n\nIn real query, SELECT column list contains data form sales table dok (sale\ndate and time)\nand sales detail table rid (quantity, price)\nWHERE clause may contain additional filters from product table (product\ncategory, supplier).\n\n> Let's play with that, after all, it's friday night.\n\nThank you very much for great sample.\nI tried to create testcase from this to match production db:\n\n1.2 million orders\n3.5 million order details\n13400 products with char(20) as primary keys containing ean-13 codes mostly\n3 last year data\nevery order has usually 1..3 detail lines\nsame product can appear multiple times in order\nproducts are queried by start of code\n\nThis sample does not distribute products randomly between orders.\nHow to change this so that every order contains 3 (or 1..6 ) random \nproducts?\nI tried to use random row sample from\n http://www.pgsql.cz/index.php/PostgreSQL_SQL_Tricks-i\n\nbut in this case 
constant product is returned always. It seems than query \ncontaining randon() is executed only once.\n\n\nAndrus.\n\nbegin;\nCREATE TEMP TABLE orders (order_id INTEGER NOT NULL, order_date DATE NOT \nNULL);\nCREATE TEMP TABLE products (product_id CHAR(20) NOT NULL, product_name \nchar(70) NOT NULL, quantity numeric(12,2) default 1);\nCREATE TEMP TABLE orders_products (order_id INTEGER NOT NULL, product_id \nCHAR(20),padding1 char(70),\n id serial, price numeric(12,2) default 1 );\n\nINSERT INTO products SELECT (n*power( 10,13))::INT8::CHAR(20),\n 'product number ' || n::TEXT FROM generate_series(0,13410) AS n;\n\nINSERT INTO orders\nSELECT n,'2005-01-01'::date + (4000.0 * n/3500000.0 * '1 DAY'::interval) \nFROM generate_series(0,3500000/3) AS n;\n\nSET work_mem TO 2097151; -- 1048576;\n\nINSERT INTO orders_products SELECT\n generate_series/3 as order_id,\n ( ((generate_series/3500000.0)*13410.0)::int*power( \n10,13))::INT8::CHAR(20)\nFROM generate_series(1,3500000)\nwhere generate_series/3>0;\n\nALTER TABLE orders ADD PRIMARY KEY (order_id);\nALTER TABLE products ADD PRIMARY KEY (product_id);\nALTER TABLE orders_products ADD PRIMARY KEY (id);\n\nALTER TABLE orders_products ADD FOREIGN KEY (product_id) REFERENCES \nproducts(product_id);\nALTER TABLE orders_products ADD FOREIGN KEY (order_id) REFERENCES \norders(order_id) ON DELETE CASCADE;\n\nCREATE INDEX orders_date ON orders( order_date );\nCREATE INDEX order_product_pattern_idx ON orders_products( product_id \nbpchar_pattern_ops );\n\nCOMMIT;\nSET work_mem TO DEFAULT;\nANALYZE;\n\n SELECT sum(quantity*price)\n FROM orders\nJOIN orders_products USING (order_id)\nJOIN products USING (product_id)\nWHERE orders.order_date>='2008-01-17'\nand orders_products.product_id like '130%' \n\n", "msg_date": "Sat, 22 Nov 2008 19:58:04 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n> Thank you very much for great sample.\n> I tried to create testcase from this to match production db:\n>\n> 1.2 million orders\n> 3.5 million order details\n> 13400 products with char(20) as primary keys containing ean-13 codes \n> mostly\n> 3 last year data\n> every order has usually 1..3 detail lines\n> same product can appear multiple times in order\n> products are queried by start of code\n>\n> This sample does not distribute products randomly between orders.\n> How to change this so that every order contains 3 (or 1..6 ) random \n> products?\n> I tried to use random row sample from\n> http://www.pgsql.cz/index.php/PostgreSQL_SQL_Tricks-i\n>\n> but in this case constant product is returned always. It seems than \n> query containing randon() is executed only once.\n\n\tYou could try writing a plpgsql function which would generate the data \nset.\n\tOr you could use your existing data set.\n\n\tBy the way, a simple way to de-bloat your big table without blocking \nwould be this :\n\n- stop all inserts and updates\n- begin\n- create table new like old table\n- insert into new select * from old (order by perhaps)\n- create indexes\n- rename new into old\n- commit\n\n\tIf this is just a reporting database where you insert a batch of new data \nevery day, for instance, that's very easy to do. 
If it's OLTP, then, no.\n", "msg_date": "Sun, 23 Nov 2008 00:24:52 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> You could try writing a plpgsql function which would generate the data\n> set.\n> Or you could use your existing data set.\n\nCreating 3.5 mln rows using stored proc is probably slow.\nProbably it would be better and faster to use some random() and\ngenerate_series() trick.\nIn this case others can try it and dataset generation is faster.\n\n> By the way, a simple way to de-bloat your big table without blocking\n> would be this :\n>\n> - stop all inserts and updates\n> - begin\n> - create table new like old table\n> - insert into new select * from old (order by perhaps)\n> - create indexes\n> - rename new into old\n> - commit\n>\n> If this is just a reporting database where you insert a batch of new data\n> every day, for instance, that's very easy to do. If it's OLTP, then, no.\n\nThose are orders and order_products tables.\nI ran vacuum full analyze verbose last night.\nNow database has 4832 MB size, including 1 GB\npg_shdepend bloated indexes.\nI added max_fsm_pages=150000 and re-booted.\n\nQuery below and other queries are still too slow\n\nset search_path to firma2,public;\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\n\"Aggregate (cost=181795.13..181795.14 rows=1 width=0) (actual\ntime=23678.265..23678.268 rows=1 loops=1)\"\n\" -> Nested Loop (cost=73999.44..181733.74 rows=24555 width=0) (actual\ntime=18459.230..23598.956 rows=21476 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.134..0.145 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=73999.44..181482.18 rows=24555 width=24)\n(actual time=18459.076..23441.098 rows=21476 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4082.88..101779.03\nrows=270252 width=28) (actual time=9337.782..12720.365 rows=278182 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4082.88 rows=270252 width=0) (actual time=9330.634..9330.634\nrows=278183 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=69195.13..69195.13 rows=112573 width=4)\n(actual time=8894.465..8894.465 rows=109890 loops=1)\"\n\" -> Bitmap Heap Scan on dok (cost=1492.00..69195.13\nrows=112573 width=4) (actual time=1618.763..8404.847 rows=109890 loops=1)\"\n\" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n\" -> Bitmap Index Scan on dok_kuupaev_idx\n(cost=0.00..1492.00 rows=112573 width=0) (actual time=1612.177..1612.177\nrows=110484 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\"Total runtime: 23678.790 ms\"\n\n\nHere is a list of untried recommendations from this thread:\n\n1. CLUSTER rid ON rid_toode_pkey ; CLUSTER dok ON dok_kuupaev_idx\n- In 8.1.4 provided form of CLUSTER causes syntax error, no idea what\nsyntax to use.\nRisky to try in prod server. Requires creating randomly distributed\nproduct_id testcase to measure\ndifference.\n\n2. Change CHAR(20) product index to int index by adding update trigger.\nRisky to try in prod server. Requires creating randomly distributed\nproduct_id testcase to measure\ndifference.\n\n3. 
Denormalization of sale date to order_producs table by adding update\ntrigger.\nRisky to try in prod server. Requires creating randomly distributed\nproduct_id testcase to measure\ndifference.\n\n4. Check on the performance of the RAID: Does it leverage NCQ appropriately\nwhen running queries in parallel ?\n No idea how.\n\n5. Materialized views. I need date granularity so it is possible to sum only\none days sales.\nhttp://www.pgcon.org/2008/schedule/events/69.en.html\nSeems to be major appl re-write, no idea how.\n\nAppoaches which probably does not change perfomance:\n\n6. Upgrade to 8.4 or to 8.3.5\n\n7. run server on standalone mode and recover 1 GB pg_shdepend bloated index.\n\n8. tune some conf file parameters:\n> work_mem = 512\nI'd consider increasing this value a little - 0.5 MB seems too low to me\n(but not necessarily).\n\n> effective_cache_size= 70000\nWell, your server has 2GB of RAM and usually it's recommended to set\nthis value to about 60-70% of your RAM, so using 540MB (25%) seems quite\nlow.\n\nData size is nearly the same as RAM size. It is unpleasant surprise that\nqueries take so long time.\n\nWhat should I do next?\n\n\nAndrus.\n\n 1 40926 firma2.rid 1737 MB\n 2 40595 firma2.dok 1632 MB\n 3 1214 pg_catalog.pg_shdepend 1235 MB\n 4 1232 pg_catalog.pg_shdepend_depender_index 795 MB\n 7 1233 pg_catalog.pg_shdepend_reference_index 439 MB\n 8 44299 firma2.rid_toode_idx 298 MB\n 9 44286 firma2.dok_tasudok_idx 245 MB\n 10 19103791 firma2.rid_toode_pattern_idx 202 MB\n 11 44283 firma2.dok_klient_idx 160 MB\n 12 44298 firma2.rid_inpdokumnr_idx 148 MB\n 13 44297 firma2.rid_dokumnr_idx 132 MB\n 14 43573 firma2.rid_pkey 130 MB\n 17 40556 pg_toast.pg_toast_40552 112 MB\n 18 44288 firma2.dok_tasumata_idx 103 MB\n 19 44289 firma2.dok_tellimus_idx 101 MB\n 20 44284 firma2.dok_krdokumnr_idx 101 MB\n 21 44285 firma2.dok_kuupaev_idx 94 MB\n 22 19076304 firma2.rid_rtellimus_idx 90 MB\n 24 44282 firma2.dok_dokumnr_idx 74 MB\n 25 43479 firma2.dok_pkey 74 MB\n 26 18663923 firma2.dok_yksus_pattern_idx 65 MB\n 27 18801591 firma2.dok_sihtyksus_pattern_idx 64 MB\n 32 18774881 firma2.dok_doktyyp 47 MB\n\n\noutput from vacuum full:\n\n\nINFO: free space map contains 14353 pages in 314 relations\nDETAIL: A total of 20000 page slots are in use (including overhead).\n89664 page slots are required to track all free space.\nCurrent limits are: 20000 page slots, 1000 relations, using 182 KB.\nNOTICE: number of page slots needed (89664) exceeds max_fsm_pages (20000)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\" to a\nvalue over 89664.\n\nQuery returned successfully with no result in 10513335 ms.\n\n", "msg_date": "Sun, 23 Nov 2008 16:39:37 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n> Risky to try in prod server. 
Requires creating randomly distributed\n> product_id testcase to measure\n> difference.\n> \n> What should I do next?\n\nI guess you have backups - take them, restore the database on a \ndifferent machine (preferably with the same / similar hw config) and \ntune the queries on it.\n\nAfter restoring all the tables / indexes will be 'clean' (not bloated), \nso you'll see if performing VACUUM FULL / CLUSTER is the right solution \nor if you have to change the application internals.\n\nSure, the times will be slightly different but the performance problems \nshould remain the same.\n\nregards\nTomas\n", "msg_date": "Sun, 23 Nov 2008 19:47:15 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "\n> Appoaches which probably does not change perfomance:\n\n> 6. Upgrade to 8.4 or to 8.3.5\n\nBoth of these will improve performance a little, even with the same query plan and same data. I would expect about a 10% improvement for 8.3.x on most memory bound select queries. 8.4 won't be out for a few months.\n\n> 7. run server on standalone mode and recover 1 GB pg_shdepend bloated index.\n\n> 8. tune some conf file parameters:\n> > work_mem = 512\n> I'd consider increasing this value a little - 0.5 MB seems too low to me\n> (but not necessarily).\n\nThis is very easy to try. You can change work_mem for just a single session, and this can in some cases help performance quite a bit, and in others not at all.\nI would not recommend having it lower than at least 4MB on a server like that unless you have a lot of concurrently active queries / connections.\nTo try it, simply use the SET command. To try out 32MB, just do:\nSET work_mem = '32MB';\nand the value will be changed locally for that session only. See if it affects your test query or not.\nhttp://www.postgresql.org/docs/8.3/interactive/sql-set.html\n\n> > effective_cache_size= 70000\n> Well, your server has 2GB of RAM and usually it's recommended to set\n> this value to about 60-70% of your RAM, so using 540MB (25%) seems quite\n> low.\n\n> Data size is nearly the same as RAM size. It is unpleasant surprise that\n> queries take so long time.\n\n> What should I do next?\n\nFirst, demonstrate that it is all or mostly in memory -- use iostat or other tools to ensure that there is not much disk activity during the query. If your system doesn't have iostat installed, it should be installed. It is a very useful tool.\nIf it is all cached in memory, you may want to ensure that your shared_buffers is a reasonalbe size so that there is less shuffling of data from the kernel to postgres and back. Generally, shared_buffers works best between 5% and 25% of system memory.\nIf it is completely CPU bound then the work done for the query has to be reduced by altering the plan to a more optimal one or making the work it has to do at each step easier. Most of the ideas in this thread revolve around those things.\n\nBased on the time it took to do the vacuum, I suspect your disk subsystem is a bit slow. If it can be determined that there is much disk I/O in your use cases, there are generally several things that can be done to tune Linux I/O. The main ones in my experience are the 'readahead' value for each disk which helps sequential reads significantly, and trying out the linux 'deadline' scheduler and comparing it to the more commonly used 'cfq' scheduler. 
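For reference, both of those knobs can be inspected and changed at runtime; the device name below is an assumption, not something taken from this thread:

# readahead, in 512-byte sectors: show the current value, then try a larger one
blockdev --getra /dev/sda
blockdev --setra 4096 /dev/sda

# show the active I/O scheduler for the device, then switch it to deadline
cat /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sda/queue/scheduler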
If the system is configured with the anticipatory scheduler, absolutely switch to cfq or deadline as the anticipatory scheduler will perform horribly poorly for a database.\n\n> Andrus.\n", "msg_date": "Sun, 23 Nov 2008 13:30:40 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> I guess you have backups - take them, restore the database on a different\n> machine (preferably with the same / similar hw config) and tune the\n> queries on it.\n>\n> After restoring all the tables / indexes will be 'clean' (not bloated), so\n> you'll see if performing VACUUM FULL / CLUSTER is the right solution or if\n> you have to change the application internals.\n>\n> Sure, the times will be slightly different but the performance problems\n> should remain the same.\n\nVACUUM FULL has\nMy test computer has PostgreSql 8.3, 4 GB RAM, SSD disks, Intel X2Extreme\nCPU\nSo it is much faster than this prod server.\nNo idea how to emulate this environment.\nI can create new db in prod server as old copy but this can be used in late\nnight only.\n\nWhere to find script which clones some database in server? Something like\n\nCREATE DATABASE newdb AS SELECT * FROM olddb;\n\nIt would be more convenient to run db cloning script from pgadmin command\nwindow.\nOnly way I found is to use SSH with pg_dup/pg_restore. This requires SSH\naccess to server and SSH port opening to public internet.\n\nOr probably try to run CLUSTER command in prod server. Hopefully clustering\nby product id cannot make things slow\ntoo much.\n\nAndrus. \n\n", "msg_date": "Sun, 23 Nov 2008 23:35:41 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Scott,\n\nthank you.\n\n>> > work_mem = 512\n>This is very easy to try. You can change work_mem for just a single\n>session, and this can in some cases help performance quite a bit, and in\n>others not at all.\n>I would not recommend having it lower than at least 4MB on a server like\n>that unless you have a lot of concurrently active queries / connections.\n>To try it, simply use the SET command. 
To try out 32MB, just do:\n>SET work_mem = '32MB';\n\n8.1.4 requires int value.\nSET work_mem = 33554432;\ncauses:\nERROR: 33554432 is outside the valid range for parameter \"work_mem\" (64 ..\n2097151)\n\nSo max allowed value seems to be 2 MB\nI tested it when this server was idle by running both queries several times\nafter VACUUM FULL was running\n\nSET work_mem = 2097151;\nset search_path to firma2,public;\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\"Aggregate (cost=177291.36..177291.37 rows=1 width=0) (actual\ntime=5153.856..5153.859 rows=1 loops=1)\"\n\" -> Nested Loop (cost=73607.45..177229.96 rows=24561 width=0) (actual\ntime=1395.935..5071.247 rows=21541 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.078..0.087 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=73607.45..176978.33 rows=24561 width=24)\n(actual time=1395.836..4911.425 rows=21541 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4083.10..101802.05\nrows=270316 width=28) (actual time=238.578..2367.189 rows=278247 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4083.10 rows=270316 width=0) (actual time=150.868..150.868\nrows=278248 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=69242.72..69242.72 rows=112651 width=4)\n(actual time=1146.081..1146.081 rows=110170 loops=1)\"\n\" -> Bitmap Heap Scan on dok (cost=1492.28..69242.72\nrows=112651 width=4) (actual time=46.210..696.803 rows=110170 loops=1)\"\n\" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n\" -> Bitmap Index Scan on dok_kuupaev_idx\n(cost=0.00..1492.28 rows=112651 width=0) (actual time=33.938..33.938\nrows=110232 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\"Total runtime: 5154.911 ms\"\n\n\nSET work_mem to default;\nset search_path to firma2,public;\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n\"Aggregate (cost=181869.36..181869.37 rows=1 width=0) (actual\ntime=7807.867..7807.871 rows=1 loops=1)\"\n\" -> Nested Loop (cost=74048.45..181807.96 rows=24561 width=0) (actual\ntime=2607.429..7728.138 rows=21541 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.079..0.091 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=74048.45..181556.33 rows=24561 width=24)\n(actual time=2607.332..7569.612 rows=21541 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4083.10..101802.05\nrows=270316 width=28) (actual time=1147.071..4528.845 rows=278247 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4083.10 rows=270316 width=0) (actual time=1140.337..1140.337\nrows=278248 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=69242.72..69242.72 rows=112651 width=4)\n(actual time=1240.988..1240.988 rows=110170 loops=1)\"\n\" -> Bitmap Heap Scan on dok (cost=1492.28..69242.72\nrows=112651 width=4) (actual time=68.053..769.448 rows=110170 loops=1)\"\n\" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n\" -> Bitmap Index Scan on dok_kuupaev_idx\n(cost=0.00..1492.28 rows=112651 width=0) (actual 
time=61.358..61.358\nrows=110232 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\"Total runtime: 7808.174 ms\"\n\nIn both cases vmstat 2 shows only cpu activity when queries are running:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n...\n 0 0 232 123692 0 1891044 0 0 0 0 252 17 0 0\n100 0\n 0 0 232 123692 0 1891044 0 0 0 0 252 17 0 0\n100 0\n 0 0 232 123684 0 1891044 0 0 0 162 254 22 0 0\n100 0\n 0 0 232 123684 0 1891044 0 0 0 0 252 18 0 0\n100 0\n 1 0 232 123056 0 1891444 0 0 0 13 254 21 62 5 34\n0 <---- start of slower query\n 1 0 232 102968 0 1911060 0 0 0 16 252 18 26 75 0\n0\n 1 0 232 77424 0 1936996 0 0 0 0 252 18 37 63 0\n0\n 1 0 232 71464 0 1941928 0 0 0 73 260 34 38 62 0\n0\n 0 0 232 123420 0 1891044 0 0 0 32 257 31 8 15 77\n0 <-------- end of slower query\n 0 0 232 123420 0 1891044 0 0 0 25 255 24 0 0\n100 0\n 0 0 232 123420 0 1891044 0 0 0 28 255 27 0 0\n100 0\n\n\nIs it safe to set\n\nwork_mem = 2097151\n\nin postgresql.conf file ?\n\n>First, demonstrate that it is all or mostly in memory -- use iostat or\n>other tools to ensure that there is not much disk activity during the\n>query. If your system doesn't have iostat installed, it should be\n>installed. It is a very useful tool.\n\n# iostat\nbash: iostat: command not found\n# locate iostat\n/usr/src/linux-2.6.16-gentoo-r9/Documentation/iostats.txt\n\nI have few experience in Linux. No idea how to install or compile iostat in\nthis system.\n\n>If it is all cached in memory, you may want to ensure that your\n>shared_buffers is a reasonalbe size so that there is less shuffling of data\n>from the kernel to postgres and back. Generally, shared_buffers works best\n>between 5% and 25% of system memory.\n\ncurrently shared_buffers = 15000\n\n>If it is completely CPU bound then the work done for the query has to be\n>reduced by altering the plan to a more optimal one or making the work it\n>has to do at each step easier. Most of the ideas in this thread revolve\n>around those things.\n\nWhen running on loaded server even after VACUUM FULL, response time for\noriginal work_mem is longer probably because it must fetch blocks from \ndisk.\n\n>Based on the time it took to do the vacuum, I suspect your disk subsystem\n>is a bit slow. If it can be determined that there is much disk I/O in your\n>use cases, there are generally several things that can be done to tune\n>Linux I/O. The main ones in my experience are the 'readahead' value for\n>each disk which helps sequential reads significantly, and trying out the\n>linux 'deadline' scheduler and comparing it to the more commonly used 'cfq'\n>scheduler. If the system is configured with the anticipatory scheduler,\n>absolutely switch to cfq or deadline as the anticipatory scheduler will\n>perform horribly poorly for a database.\n\nThis is 3 year old cheap server.\nNo idea what to config.\n\nThere is also other similar server which as 1.2 GB more usable memory. No \nidea is it worth to switch into it.\nAfter some years sales data will still exceed this more memory.\n\nAndrus. 
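One likely route for the missing iostat: on Gentoo it is normally provided by the sysstat package (that package name is an assumption, not confirmed anywhere in this thread), so something along these lines should make the disk-activity check possible:

# install sysstat, then sample extended per-device statistics every 5 seconds while the query runs
emerge sysstat
iostat -x 5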
\n\n", "msg_date": "Mon, 24 Nov 2008 00:43:09 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> My test computer has PostgreSql 8.3, 4 GB RAM, SSD disks, Intel X2Extreme\n> CPU\n> So it is much faster than this prod server.\n> No idea how to emulate this environment.\n> I can create new db in prod server as old copy but this can be used in late\n> night only.\n\nWell, a faster but comparable system may not be a problem - the query \nmight run 10 times faster, but it still will be slow (say 40 seconds \ninstead of 8 minutes).\n\nWhat is a problem is a different I/O system - SSD instead of traditional \ndrives in this case. I have no direct experience with with SSD yet, but \nAFAIK the seek time is much better compared to regular drives (say 0.1ms \ninstead of 10ms, that is 100-times faster).\n\nSo you can't just put on old SATA drive into the test machine?\n\n> Where to find script which clones some database in server? Something like\n> \n> CREATE DATABASE newdb AS SELECT * FROM olddb;\n >\n> It would be more convenient to run db cloning script from pgadmin command\n> window.\n> Only way I found is to use SSH with pg_dup/pg_restore. This requires SSH\n> access to server and SSH port opening to public internet.\n\nYes, using pg_dump | pg_restore is probably the way to clone database. \nBut it will slow down the system, as it has to do a lot of I/O (and as \nit seems to be a bottleneck already, I don't think this is a good idea).\n\n> Or probably try to run CLUSTER command in prod server. Hopefully clustering\n> by product id cannot make things slow\n> too much.\n\nAs already noted, CLUSTER command causes exclusive lock on the database. \nSo this is an operation you'd like to do on production server ...\n\nTomas\n", "msg_date": "Mon, 24 Nov 2008 00:11:40 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "> Scott,\n> \n> thank you.\n> \n>>> > work_mem = 512\n>> This is very easy to try. You can change work_mem for just a single\n>> session, and this can in some cases help performance quite a bit, and in\n>> others not at all.\n>> I would not recommend having it lower than at least 4MB on a server like\n>> that unless you have a lot of concurrently active queries / connections.\n>> To try it, simply use the SET command. To try out 32MB, just do:\n>> SET work_mem = '32MB';\n> \n> 8.1.4 requires int value.\n> SET work_mem = 33554432;\n> causes:\n> ERROR: 33554432 is outside the valid range for parameter \"work_mem\" (64 ..\n> 2097151)\n> \n> So max allowed value seems to be 2 MB\n> I tested it when this server was idle by running both queries several times\n> after VACUUM FULL was running\n> \n> SET work_mem = 2097151;\n\n\nNo, not really. 
The work_mem value is specified in kilobytes, so you've \nset it to 2GB :-)\n\n> set search_path to firma2,public;\n> explain analyze\n> SELECT sum(1)\n> FROM dok JOIN rid USING (dokumnr)\n> JOIN toode USING (toode)\n> WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n> \"Aggregate (cost=177291.36..177291.37 rows=1 width=0) (actual\n> time=5153.856..5153.859 rows=1 loops=1)\"\n> \" -> Nested Loop (cost=73607.45..177229.96 rows=24561 width=0) (actual\n> time=1395.935..5071.247 rows=21541 loops=1)\"\n> \" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\n> width=24) (actual time=0.078..0.087 rows=1 loops=1)\"\n> \" Index Cond: ('X05'::bpchar = toode)\"\n> \" -> Hash Join (cost=73607.45..176978.33 rows=24561 width=24)\n> (actual time=1395.836..4911.425 rows=21541 loops=1)\"\n> \" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n> \" -> Bitmap Heap Scan on rid (cost=4083.10..101802.05\n> rows=270316 width=28) (actual time=238.578..2367.189 rows=278247 loops=1)\"\n> \" Recheck Cond: (toode = 'X05'::bpchar)\"\n> \" -> Bitmap Index Scan on rid_toode_idx\n> (cost=0.00..4083.10 rows=270316 width=0) (actual time=150.868..150.868\n> rows=278248 loops=1)\"\n> \" Index Cond: (toode = 'X05'::bpchar)\"\n> \" -> Hash (cost=69242.72..69242.72 rows=112651 width=4)\n> (actual time=1146.081..1146.081 rows=110170 loops=1)\"\n> \" -> Bitmap Heap Scan on dok (cost=1492.28..69242.72\n> rows=112651 width=4) (actual time=46.210..696.803 rows=110170 loops=1)\"\n> \" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n> \" -> Bitmap Index Scan on dok_kuupaev_idx\n> (cost=0.00..1492.28 rows=112651 width=0) (actual time=33.938..33.938\n> rows=110232 loops=1)\"\n> \" Index Cond: (kuupaev >=\n> '2008-09-01'::date)\"\n> \"Total runtime: 5154.911 ms\"\n> \n> \n> SET work_mem to default;\n> set search_path to firma2,public;\n> explain analyze\n> SELECT sum(1)\n> FROM dok JOIN rid USING (dokumnr)\n> JOIN toode USING (toode)\n> WHERE rid.toode='X05' AND dok.kuupaev>='2008-09-01'\n> \"Aggregate (cost=181869.36..181869.37 rows=1 width=0) (actual\n> time=7807.867..7807.871 rows=1 loops=1)\"\n> \" -> Nested Loop (cost=74048.45..181807.96 rows=24561 width=0) (actual\n> time=2607.429..7728.138 rows=21541 loops=1)\"\n> \" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\n> width=24) (actual time=0.079..0.091 rows=1 loops=1)\"\n> \" Index Cond: ('X05'::bpchar = toode)\"\n> \" -> Hash Join (cost=74048.45..181556.33 rows=24561 width=24)\n> (actual time=2607.332..7569.612 rows=21541 loops=1)\"\n> \" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n> \" -> Bitmap Heap Scan on rid (cost=4083.10..101802.05\n> rows=270316 width=28) (actual time=1147.071..4528.845 rows=278247 loops=1)\"\n> \" Recheck Cond: (toode = 'X05'::bpchar)\"\n> \" -> Bitmap Index Scan on rid_toode_idx\n> (cost=0.00..4083.10 rows=270316 width=0) (actual time=1140.337..1140.337\n> rows=278248 loops=1)\"\n> \" Index Cond: (toode = 'X05'::bpchar)\"\n> \" -> Hash (cost=69242.72..69242.72 rows=112651 width=4)\n> (actual time=1240.988..1240.988 rows=110170 loops=1)\"\n> \" -> Bitmap Heap Scan on dok (cost=1492.28..69242.72\n> rows=112651 width=4) (actual time=68.053..769.448 rows=110170 loops=1)\"\n> \" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n> \" -> Bitmap Index Scan on dok_kuupaev_idx\n> (cost=0.00..1492.28 rows=112651 width=0) (actual time=61.358..61.358\n> rows=110232 loops=1)\"\n> \" Index Cond: (kuupaev >=\n> '2008-09-01'::date)\"\n> \"Total runtime: 7808.174 ms\"\n> \n> In both cases vmstat 2 shows only cpu activity 
when queries are running:\n> \n> procs -----------memory---------- ---swap-- -----io---- --system-- \n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> ...\n> 0 0 232 123692 0 1891044 0 0 0 0 252 17 0 0\n> 100 0\n> 0 0 232 123692 0 1891044 0 0 0 0 252 17 0 0\n> 100 0\n> 0 0 232 123684 0 1891044 0 0 0 162 254 22 0 0\n> 100 0\n> 0 0 232 123684 0 1891044 0 0 0 0 252 18 0 0\n> 100 0\n> 1 0 232 123056 0 1891444 0 0 0 13 254 21 62 5 34\n> 0 <---- start of slower query\n> 1 0 232 102968 0 1911060 0 0 0 16 252 18 26 75 0\n> 0\n> 1 0 232 77424 0 1936996 0 0 0 0 252 18 37 63 0\n> 0\n> 1 0 232 71464 0 1941928 0 0 0 73 260 34 38 62 0\n> 0\n> 0 0 232 123420 0 1891044 0 0 0 32 257 31 8 15 77\n> 0 <-------- end of slower query\n> 0 0 232 123420 0 1891044 0 0 0 25 255 24 0 0\n> 100 0\n> 0 0 232 123420 0 1891044 0 0 0 28 255 27 0 0\n> 100 0\n> \n\nWell, this parameter specifies how much memory may be used for in-memory \nsorting and hash tables. If more memory is needed, the sort / hash table \nwill be performed on-disk.\n\nI guess the difference in speed is not huge, so in this case the hash \ntable (used for join) probably fits into the 1024kB (which is a default \nvalue).\n\n> Is it safe to set\n> \n> work_mem = 2097151\n> \n> in postgresql.conf file ?\n\nEach sort in a query uses a separate area in a memory (up to work_mem). \nSo if you have 10 sessions running a query with 2 sorts, that may occupy\n\n 10 * 2 * work_mem\n\nof memory. Let's suppose you set a reasonable value (say 8096) instead \nof 2GB. That gives about 160MB.\n\nAnyway this depends - if you have a lot of slow queries caused by \non-disk sorts / hash tables, use a higher value. Otherwise leave it as \nit is.\n\n>> First, demonstrate that it is all or mostly in memory -- use iostat or\n>> other tools to ensure that there is not much disk activity during the\n>> query. If your system doesn't have iostat installed, it should be\n>> installed. It is a very useful tool.\n> \n> # iostat\n> bash: iostat: command not found\n> # locate iostat\n> /usr/src/linux-2.6.16-gentoo-r9/Documentation/iostats.txt\n> \n> I have few experience in Linux. No idea how to install or compile iostat in\n> this system.\n> \n>> If it is all cached in memory, you may want to ensure that your\n>> shared_buffers is a reasonalbe size so that there is less shuffling of \n>> data\n>> from the kernel to postgres and back. Generally, shared_buffers works \n>> best\n>> between 5% and 25% of system memory.\n> \n> currently shared_buffers = 15000\n\nThat's 120MB, i.e. about 6% of the memory. Might be a little bit higher, \nbut seems reasonable.\n\n>> If it is completely CPU bound then the work done for the query has to be\n>> reduced by altering the plan to a more optimal one or making the work it\n>> has to do at each step easier. Most of the ideas in this thread revolve\n>> around those things.\n> \n> When running on loaded server even after VACUUM FULL, response time for\n> original work_mem is longer probably because it must fetch blocks from \n> disk.\n> \n>> Based on the time it took to do the vacuum, I suspect your disk subsystem\n>> is a bit slow. If it can be determined that there is much disk I/O in \n>> your\n>> use cases, there are generally several things that can be done to tune\n>> Linux I/O. The main ones in my experience are the 'readahead' value for\n>> each disk which helps sequential reads significantly, and trying out the\n>> linux 'deadline' scheduler and comparing it to the more commonly used \n>> 'cfq'\n>> scheduler. 
If the system is configured with the anticipatory scheduler,\n>> absolutely switch to cfq or deadline as the anticipatory scheduler will\n>> perform horribly poorly for a database.\n> \n> This is 3 year old cheap server.\n> No idea what to config.\n\nWell, how could we know that if you don't?\n\nAnyway the options mentioned by Scott Carey are related to linux kernel, \nso it shouldn't be a problem to change.\n\n> There is also other similar server which as 1.2 GB more usable memory. \n> No idea is it worth to switch into it.\n> After some years sales data will still exceed this more memory.\n\nGiven the fact that the performance issues are caused by bloated tables \nand / or slow I/O subsystem, moving to a similar system won't help I guess.\n\nregards\nTomas\n", "msg_date": "Mon, 24 Nov 2008 00:39:48 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Tomas,\n\n> Let's suppose you set a reasonable value (say 8096) instead of 2GB. That \n> gives about 160MB.\n> Anyway this depends - if you have a lot of slow queries caused by on-disk \n> sorts / hash tables, use a higher value. Otherwise leave it as it is.\n\nProbably product orders table is frequently joined which product table.\ncurrently there was work_memory = 512 in conf file.\n\nI changed it to work_memory = 8096\n\n>>> If it is all cached in memory, you may want to ensure that your\n>>> shared_buffers is a reasonalbe size so that there is less shuffling of \n>>> data\n>>> from the kernel to postgres and back. Generally, shared_buffers works \n>>> best\n>>> between 5% and 25% of system memory.\n>>\n>> currently shared_buffers = 15000\n>\n> That's 120MB, i.e. about 6% of the memory. Might be a little bit higher, \n> but seems reasonable.\n\nI changed it to 20000\n\n> Given the fact that the performance issues are caused by bloated tables \n> and / or slow I/O subsystem, moving to a similar system won't help I \n> guess.\n\nI have ran VACUUM FULL ANALYZE VERBOSE\nand set MAX_FSM_PAGES = 150000\n\nSo there is no any bloat except pg_shdepend indexes which should not affect \nto query speed.\n\nAndrus. 
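A quick way to confirm the new values actually took effect (shared_buffers and max_fsm_pages only change after a full server restart), with the worst-case arithmetic Tomas described repeated as comments; the session counts are only illustrative:

SHOW work_mem;        -- should report 8096 (kB, roughly 8 MB) if the new setting was picked up
SHOW shared_buffers;  -- should report 20000 (8 kB pages, about 156 MB)
SHOW max_fsm_pages;   -- should report 150000
-- rough worst case for sort/hash memory: sessions x sorts per query x work_mem,
-- e.g. 20 sessions x 2 sorts x 8 MB is about 320 MB on top of shared_buffers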
\n\n", "msg_date": "Mon, 24 Nov 2008 14:35:25 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": ">> Given the fact that the performance issues are caused by bloated tables\n>> and / or slow I/O subsystem, moving to a similar system won't help I\n>> guess.\n>\n> I have ran VACUUM FULL ANALYZE VERBOSE\n> and set MAX_FSM_PAGES = 150000\n>\n> So there is no any bloat except pg_shdepend indexes which should not\n> affect to query speed.\n\nOK, what was the number of unused pointer items in the VACUUM output?\n\nThe query performance is still the same as when the tables were bloated?\nWhat are the outputs of iostat/vmstat/dstat/top when running the query?\n\nregards\nTomas\n\n", "msg_date": "Mon, 24 Nov 2008 14:39:24 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Hash join on int takes 8..114 seconds" }, { "msg_contents": "Tomas,\n\n> OK, what was the number of unused pointer items in the VACUUM output?\n\nI posted it in this thread:\n\nVACUUM FULL ANALYZE VERBOSE;\n...\nINFO: free space map contains 14353 pages in 314 relations\nDETAIL: A total of 20000 page slots are in use (including overhead).\n89664 page slots are required to track all free space.\nCurrent limits are: 20000 page slots, 1000 relations, using 182 KB.\nNOTICE: number of page slots needed (89664) exceeds max_fsm_pages (20000)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\" to a\nvalue over 89664.\n\nQuery returned successfully with no result in 10513335 ms.\n\n> The query performance is still the same as when the tables were bloated?\n\nSeems to be the same.\nHowever I improved yesterday after previous message other queries not to \nscan whole\nproduct orders (rid) table.\nNow there is only few disk activity after 5-300 seconds which seems not to\naffect to those query results. 
So issue in this thread has been gone away.\n\nNow this query runs using constant time 8 seconds:\n\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode = 'X05' AND dok.kuupaev>='2008-09-01'\n\"Aggregate (cost=182210.77..182210.78 rows=1 width=0) (actual\ntime=8031.600..8031.604 rows=1 loops=1)\"\n\" -> Nested Loop (cost=74226.74..182149.27 rows=24598 width=0) (actual\ntime=2602.474..7948.121 rows=21711 loops=1)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.077..0.089 rows=1 loops=1)\"\n\" Index Cond: ('X05'::bpchar = toode)\"\n\" -> Hash Join (cost=74226.74..181897.28 rows=24598 width=24)\n(actual time=2602.378..7785.315 rows=21711 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=4084.54..101951.60\nrows=270725 width=28) (actual time=1129.925..4686.601 rows=278417 loops=1)\"\n\" Recheck Cond: (toode = 'X05'::bpchar)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..4084.54 rows=270725 width=0) (actual time=1123.202..1123.202\nrows=278426 loops=1)\"\n\" Index Cond: (toode = 'X05'::bpchar)\"\n\" -> Hash (cost=69419.29..69419.29 rows=112766 width=4)\n(actual time=1251.496..1251.496 rows=111088 loops=1)\"\n\" -> Bitmap Heap Scan on dok (cost=1492.68..69419.29\nrows=112766 width=4) (actual time=70.837..776.249 rows=111088 loops=1)\"\n\" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n\" -> Bitmap Index Scan on dok_kuupaev_idx\n(cost=0.00..1492.68 rows=112766 width=0) (actual time=64.177..64.177\nrows=111343 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\"Total runtime: 8031.905 ms\"\n\n\nInterestingly using like is 80 times faster:\n\nexplain analyze\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like 'X05' AND dok.kuupaev>='2008-09-01'\n\"Aggregate (cost=88178.69..88178.70 rows=1 width=0) (actual\ntime=115.335..115.339 rows=1 loops=1)\"\n\" -> Hash Join (cost=71136.22..88117.36 rows=24532 width=0) (actual\ntime=115.322..115.322 rows=0 loops=1)\"\n\" Hash Cond: (\"outer\".toode = \"inner\".toode)\"\n\" -> Hash Join (cost=70163.36..86253.20 rows=24598 width=24)\n(actual time=0.046..0.046 rows=0 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Bitmap Heap Scan on rid (cost=21.16..6307.52 rows=270725\nwidth=28) (actual time=0.037..0.037 rows=0 loops=1)\"\n\" Filter: (toode ~~ 'X05'::text)\"\n\" -> Bitmap Index Scan on rid_toode_pattern_idx\n(cost=0.00..21.16 rows=1760 width=0) (actual time=0.028..0.028 rows=0\nloops=1)\"\n\" Index Cond: (toode ~=~ 'X05'::bpchar)\"\n\" -> Hash (cost=69419.29..69419.29 rows=112766 width=4)\n(never executed)\"\n\" -> Bitmap Heap Scan on dok (cost=1492.68..69419.29\nrows=112766 width=4) (never executed)\"\n\" Recheck Cond: (kuupaev >= '2008-09-01'::date)\"\n\" -> Bitmap Index Scan on dok_kuupaev_idx\n(cost=0.00..1492.68 rows=112766 width=0) (never executed)\"\n\" Index Cond: (kuupaev >=\n'2008-09-01'::date)\"\n\" -> Hash (cost=853.29..853.29 rows=13429 width=24) (actual\ntime=114.757..114.757 rows=13412 loops=1)\"\n\" -> Seq Scan on toode (cost=0.00..853.29 rows=13429\nwidth=24) (actual time=0.014..58.319 rows=13412 loops=1)\"\n\"Total runtime: 115.505 ms\"\n\nI posted also a test script in other thread which shows also that like is\nmagitude faster than equality check.\n\nrid.toode = 'X05'\n\nand\n\nrid.toode like 'X05'\n\nare exactly the same conditions, there are indexes for both conditions.\n\nSo I 
do'nt understand why results are so different.\n\nIn other sample which I posted in thread \"Increasing pattern index query\nspeed\" like is 4 times slower:\n\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode = '99000010' AND dok.kuupaev BETWEEN '2008-11-21' AND\n'2008-11-21'\n AND dok.yksus LIKE 'ORISSAARE%'\n\n\"Aggregate (cost=43.09..43.10 rows=1 width=0) (actual\ntime=12674.675..12674.679 rows=1 loops=1)\"\n\" -> Nested Loop (cost=29.57..43.08 rows=1 width=0) (actual\ntime=2002.045..12673.645 rows=19 loops=1)\"\n\" -> Nested Loop (cost=29.57..37.06 rows=1 width=24) (actual\ntime=2001.922..12672.344 rows=19 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..3.47\nrows=1 width=4) (actual time=342.812..9810.627 rows=319 loops=1)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND\n(kuupaev <= '2008-11-21'::date))\"\n\" Filter: (yksus ~~ 'ORISSAARE%'::text)\"\n\" -> Bitmap Heap Scan on rid (cost=29.57..33.58 rows=1\nwidth=28) (actual time=8.948..8.949 rows=0 loops=319)\"\n\" Recheck Cond: ((\"outer\".dokumnr = rid.dokumnr) AND\n(rid.toode = '99000010'::bpchar))\"\n\" -> BitmapAnd (cost=29.57..29.57 rows=1 width=0)\n(actual time=8.930..8.930 rows=0 loops=319)\"\n\" -> Bitmap Index Scan on rid_dokumnr_idx\n(cost=0.00..2.52 rows=149 width=0) (actual time=0.273..0.273 rows=2\nloops=319)\"\n\" Index Cond: (\"outer\".dokumnr =\nrid.dokumnr)\"\n\" -> Bitmap Index Scan on rid_toode_idx\n(cost=0.00..26.79 rows=1941 width=0) (actual time=8.596..8.596 rows=15236\nloops=319)\"\n\" Index Cond: (toode = '99000010'::bpchar)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.043..0.048 rows=1 loops=19)\"\n\" Index Cond: ('99000010'::bpchar = toode)\"\n\"Total runtime: 12675.191 ms\"\n\nexplain analyze SELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like '99000010%' AND dok.kuupaev BETWEEN '2008-11-21' AND\n'2008-11-21'\n AND dok.yksus LIKE 'ORISSAARE%'\n\n\"Aggregate (cost=17.99..18.00 rows=1 width=0) (actual\ntime=46465.609..46465.613 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..17.98 rows=1 width=0) (actual\ntime=3033.947..46465.462 rows=19 loops=1)\"\n\" -> Nested Loop (cost=0.00..11.96 rows=1 width=24) (actual\ntime=3033.890..46464.310 rows=19 loops=1)\"\n\" Join Filter: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..5.93\nrows=1 width=4) (actual time=0.044..5.419 rows=319 loops=1)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND\n(kuupaev <= '2008-11-21'::date))\"\n\" Filter: (yksus ~~ 'ORISSAARE%'::text)\"\n\" -> Index Scan using rid_toode_pattern_idx on rid\n(cost=0.00..6.01 rows=1 width=28) (actual time=0.025..88.046 rows=15322\nloops=319)\"\n\" Index Cond: ((toode ~>=~ '99000010'::bpchar) AND (toode\n~<~ '99000011'::bpchar))\"\n\" Filter: (toode ~~ '99000010%'::text)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.034..0.039 rows=1 loops=19)\"\n\" Index Cond: (\"outer\".toode = toode.toode)\"\n\"Total runtime: 46465.833 ms\"\n\n\n> What are the outputs of iostat/vmstat/dstat/top when running the query?\n\niostat and dstat are not installed in Gentoo.\nFor last like query which constantly takes 46 seconds vmstat 5 shows:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 0 0 216 124780 0 1899236 0 0 10 13 277 78 3 1 95\n0\n 0 0 216 124656 0 
1899248 0 0 2 8 267 59 2 0 98\n0\n 1 0 216 124664 0 1899244 0 0 0 7 266 62 2 0 97\n0\n 0 0 216 124788 0 1899248 0 0 0 8 273 73 3 1 96\n0\n 0 0 216 124656 0 1899252 0 0 0 2 265 54 2 0 97\n0\n 1 0 216 124656 0 1899252 0 0 0 22 268 63 14 39 48\n0 <-------- start of query\n 1 0 216 124656 0 1899252 0 0 0 62 267 61 25 75 0\n0\n 1 0 216 124656 0 1899252 0 0 0 2 266 55 28 72 0\n0\n 1 0 216 124664 0 1899256 0 0 0 5 265 56 26 74 0\n0\n 1 0 216 124788 0 1899256 0 0 0 10 271 67 25 75 0\n0\n 1 0 216 124664 0 1899256 0 0 0 3 265 54 25 75 0\n0\n 1 0 216 124160 0 1899260 0 0 0 1 265 57 28 72 0\n0\n 1 0 216 125020 0 1899260 0 0 0 21 272 60 28 72 0\n0\n 1 0 216 124020 0 1899264 0 0 0 0 271 73 29 71 0\n0\n 0 0 216 124260 0 1899268 0 0 0 3 266 61 19 59 22\n0 <------ end of query\n 1 0 216 125268 0 1899260 0 0 0 9 268 59 2 0 97\n0\n 0 0 216 124912 0 1899268 0 0 0 5 270 59 3 0 96\n0\n 0 0 216 124656 0 1899272 0 0 0 5 267 64 2 0 98\n0\n 0 0 216 124664 0 1899272 0 0 0 0 263 50 2 0 98\n0\n\ntop shows single postmaster process activity:\n\ntop - 23:07:49 up 27 days, 4:20, 1 user, load average: 0.25, 0.22, 0.12\nTasks: 50 total, 3 running, 47 sleeping, 0 stopped, 0 zombie\nCpu(s): 29.3% us, 70.7% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si\nMem: 2075828k total, 1951604k used, 124224k free, 0k buffers\nSwap: 3911816k total, 216k used, 3911600k free, 1899348k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n29687 postgres 25 0 144m 124m 121m R 96.3 6.2 8:18.04 postmaster\n 8147 root 16 0 4812 1628 1316 S 1.0 0.1 0:00.03 sshd\n 8141 root 15 0 5080 1892 1528 S 0.7 0.1 0:00.02 sshd\n 8145 sshd 15 0 4816 1220 912 S 0.3 0.1 0:00.01 sshd\n 8151 sshd 15 0 4812 1120 816 S 0.3 0.1 0:00.01 sshd\n 1 root 16 0 1480 508 444 S 0.0 0.0 0:01.89 init\n 2 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/0\n 3 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/0\n 4 root 10 -5 0 0 0 S 0.0 0.0 0:00.63 khelper\n 5 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread\n 7 root 10 -5 0 0 0 S 0.0 0.0 2:21.73 kblockd/0\n 8 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid\n 115 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 aio/0\n 114 root 15 0 0 0 0 S 0.0 0.0 9:21.41 kswapd0\n 116 root 10 -5 0 0 0 S 0.0 0.0 0:12.06 xfslogd/0\n 117 root 10 -5 0 0 0 S 0.0 0.0 1:36.43 xfsdatad/0\n 706 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kseriod\n 723 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kpsmoused\n 738 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 ata/0\n 740 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0\n 741 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1\n 742 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_2\n 743 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_3\n 762 root 10 -5 0 0 0 S 0.0 0.0 0:18.64 xfsbufd\n 763 root 10 -5 0 0 0 S 0.0 0.0 0:00.79 xfssyncd\n 963 root 16 -4 1712 528 336 S 0.0 0.0 0:00.24 udevd\n 6677 root 15 0 1728 572 400 S 0.0 0.0 0:10.14 syslog-ng\n 7183 root 15 0 3472 828 672 S 0.0 0.0 0:08.50 sshd\n 7222 root 16 0 1736 672 556 S 0.0 0.0 0:00.03 cron\n 7237 root 16 0 1620 712 608 S 0.0 0.0 0:00.00 agetty\n 7238 root 17 0 1616 712 608 S 0.0 0.0 0:00.00 agetty\n 7239 root 16 0 1616 712 608 S 0.0 0.0 0:00.00 agetty\n 7240 root 16 0 1616 708 608 S 0.0 0.0 0:00.00 agetty\n 7241 root 16 0 1616 708 608 S 0.0 0.0 0:00.00 agetty\n31873 root 15 0 1616 712 608 S 0.0 0.0 0:00.00 agetty\n14908 postgres 16 0 141m 10m 9936 S 0.0 0.5 0:01.44 postmaster\n14915 postgres 16 0 8468 1360 896 S 0.0 0.1 0:00.36 postmaster\n\nAndrus.\n\n", "msg_date": "Mon, 24 Nov 2008 23:16:27 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join on int takes 8..114 
seconds" } ]
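Two loose ends from the thread above. First, the randomly distributed test data that was asked about: random() is volatile and is re-evaluated for every row generate_series() produces when it sits directly in the SELECT list, so the synthetic orders_products insert from the posted test script can pick a random product per row without any plpgsql. A sketch against that same script (not run against the real data):

INSERT INTO orders_products (order_id, product_id)
SELECT generate_series / 3,
       ((floor(random() * 13411))::int * power(10, 13))::int8::char(20)
FROM generate_series(1, 3500000)
WHERE generate_series / 3 > 0;

Second, the CLUSTER syntax error mentioned in the list of untried ideas: before 8.3 the index name comes first, so on 8.1.4 the form would be CLUSTER rid_toode_idx ON rid; and CLUSTER dok_kuupaev_idx ON dok; (CLUSTER rewrites the table under an exclusive lock, so it still needs a maintenance window).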
[ { "msg_contents": "Hi chaps,\n\nI've had this old card sitting on my desk for a while. It appears to be a U160 card with 128Mb BBU so I thought I'd wang it in my test machine (denian etch) and give it a bash.\n\nI set up 4 36Gb drives in raid 0+1, but I don't seem to be able to get more than 20MB/s write speed out of it for large files (2XRAM usual tests with dd from dev/zero). I don't expect anything great, but I thought it'd do a little better than that.\n\nI've tried writeback and write through modes, tried changing cache flush times, disabled and enabled multiple PCI delayed transactions, all seem to have little effect.\n\nFinally I decided to wave goodbye to Dell's firmware. LSI has it down as a MegaRAID 493 elite 1600, so I flashed it with their latest firmware. Doesn't seem to have helped either though.\n\nHas anybody else used this card in the past? I'm wondering if this is a driver issue, or if the card is and always was just crap? If so I'll proabably try sw raid on it instead.\n\nAny thoughts?\n\nGlyn\n\n\n \n", "msg_date": "Sat, 22 Nov 2008 14:18:25 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Perc 3 DC" }, { "msg_contents": "On Sat, Nov 22, 2008 at 7:18 AM, Glyn Astill <[email protected]> wrote:\n> Hi chaps,\n>\n> I've had this old card sitting on my desk for a while. It appears to be a U160 card with 128Mb BBU so I thought I'd wang it in my test machine (denian etch) and give it a bash.\n>\n> I set up 4 36Gb drives in raid 0+1, but I don't seem to be able to get more than 20MB/s write speed out of it for large files (2XRAM usual tests with dd from dev/zero). I don't expect anything great, but I thought it'd do a little better than that.\n\nYou really have two choices. First is to try and use it as a plain\nSCSI card, maybe with caching turned on, and do the raid in software.\nSecond is to cut it into pieces and make jewelry out of it. Anything\nbefore the Perc 6 series is seriously brain damaged, and the Perc6\nbrings the dell raid array line squarly in line with a 5 year old LSI\nmegaraid, give or take. And that's being generous.\n\n> I've tried writeback and write through modes, tried changing cache flush times, disabled and enabled multiple PCI delayed transactions, all seem to have little effect.\n\nYeah, it's like trying to performance tune a yugo.\n\n> Finally I decided to wave goodbye to Dell's firmware. LSI has it down as a MegaRAID 493 elite 1600, so I flashed it with their latest firmware. Doesn't seem to have helped either though.\n\nDoes it have a battery backup module? Often you can't really turn on\nwrite-back without one. That would certainly slow things down. But\nyou should certainly expect > 20 M/s on a modern RAID controller\nwriting out to a 4 disk RAID10\n\n> Has anybody else used this card in the past? I'm wondering if this is a driver issue, or if the card is and always was just crap? If so I'll proabably try sw raid on it instead.\n\nIt's pretty much a low end card. Look into modern LSI, Areca, or\nEscalade/3Ware cards if you want good performance.\n", "msg_date": "Sat, 22 Nov 2008 09:16:54 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "--- On Sat, 22/11/08, Scott Marlowe <[email protected]> wrote:\n \n> You really have two choices. 
First is to try and use it as\n> a plain\n> SCSI card, maybe with caching turned on, and do the raid in\n> software.\n> Second is to cut it into pieces and make jewelry out of it.\n\nHaha, I'm not really into jewelry, although I had thought of smacking it into a pile of dust with a lump hammer, that's much more my thing.\n\n> Anything\n> before the Perc 6 series is seriously brain damaged, and\n> the Perc6\n> brings the dell raid array line squarly in line with a 5\n> year old LSI\n> megaraid, give or take. And that's being generous.\n> \n\nWell this card thinks it's a 5 year old lsi megaraid. I've got a pile of perc5i megaraid paperweights on my desk at work, so this was kinda expected really.\n\n> > I've tried writeback and write through modes,\n> tried changing cache flush times, disabled and enabled\n> multiple PCI delayed transactions, all seem to have little\n> effect.\n> \n> Yeah, it's like trying to performance tune a yugo.\n> \n\nDid I mention I drive a yugo?\n\n> > Finally I decided to wave goodbye to Dell's\n> firmware. LSI has it down as a MegaRAID 493 elite 1600, so I\n> flashed it with their latest firmware. Doesn't seem to\n> have helped either though.\n> \n> Does it have a battery backup module? Often you can't\n> really turn on\n> write-back without one. That would certainly slow things\n> down. But\n> you should certainly expect > 20 M/s on a modern RAID\n> controller\n> writing out to a 4 disk RAID10\n> \n\nYeah the battery's on it, that and the 128Mb is really the only reason I thought I'd give it a whirl.\n\n\n \n", "msg_date": "Sat, 22 Nov 2008 16:59:02 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "On Sat, Nov 22, 2008 at 9:59 AM, Glyn Astill <[email protected]> wrote:\n> --- On Sat, 22/11/08, Scott Marlowe <[email protected]> wrote:\n>\n>> You really have two choices. First is to try and use it as\n>> a plain\n>> SCSI card, maybe with caching turned on, and do the raid in\n>> software.\n>> Second is to cut it into pieces and make jewelry out of it.\n>\n> Haha, I'm not really into jewelry, although I had thought of smacking it into a pile of dust with a lump hammer, that's much more my thing.\n\nWell, I think the important thing here is to follow your bliss. :)\n\n>> Anything\n>> before the Perc 6 series is seriously brain damaged, and\n>> the Perc6\n>> brings the dell raid array line squarly in line with a 5\n>> year old LSI\n>> megaraid, give or take. And that's being generous.\n>>\n>\n> Well this card thinks it's a 5 year old lsi megaraid. I've got a pile of perc5i megaraid paperweights on my desk at work, so this was kinda expected really.\n\nYeah we just wound up buying a RAID controller from Dell for handling\n8 500Gig drives and no one checked and they sold us a Perc5, which can\nonly handle a 2TB array apparently. Set it up as a RAID-10 and it's\ndefinitely got \"meh\" levels of performance. I had an old workstation\nwith a 4 port SATA card (no raid) running software raid and it handily\nstomps this 8 disk machine into the ground.\n\n>> > I've tried writeback and write through modes,\n>> tried changing cache flush times, disabled and enabled\n>> multiple PCI delayed transactions, all seem to have little\n>> effect.\n>>\n>> Yeah, it's like trying to performance tune a yugo.\n>\n> Did I mention I drive a yugo?\n\nWell, they're fine vehicles for what they do (I hear they're still\nquite an icon in eastern europe) but they aren't gonna win a lot of\ncash at the race track. 
:)\n\n>> down. But\n>> you should certainly expect > 20 M/s on a modern RAID\n>> controller\n>> writing out to a 4 disk RAID10\n>>\n>\n> Yeah the battery's on it, that and the 128Mb is really the only reason I thought I'd give it a whirl.\n\nWe had a bunch of 18xx series servers last company I was at (we went\nfrom unix / linux to Microsoft, so ordered some 400 machines to\nreplace a dozen or so unix machines) and they all came with the\nadaptec based Perc 3s. We wound up with a large cardboard box full of\nthem and running software RAID to get decent performance and to stop\nthem from locking up randomly. I had the only two running linux and\nequipped with LSI based Perc3. They were stable, but the pair of intel\nOEM boxes they replaced, which had 1/4 the CPU horsepower, were still\nnoticeably faster with their older but LSI based RAID controllers. I\nleft that place as soon as I could, I just couldn't handle a large\nportion of my job being to walk around a data center resetting locked\nup windows boxes each morning.\n\nI think as much as anything the busses on the dells are the problem,\nresulting in pretty poor throughput, especially true of the old\nserverworks chipset machines. Those things are pretty much boat\nanchors.\n", "msg_date": "Sat, 22 Nov 2008 11:27:07 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "--- On Sat, 22/11/08, Scott Marlowe <[email protected]> wrote:\n\n> I had an old workstation with a 4 port SATA card (no raid) running\n> software raid and it handily stomps this 8 disk machine into the ground.\n\nYeah, I think this machine will be going that route.\n\n> We had a bunch of 18xx series servers last company I was at\n> (we went\n> from unix / linux to Microsoft, so ordered some 400\n> machines to\n> replace a dozen or so unix machines) \n\nI'm not surprised. We've just had some management \"inserted\" to make decisions like that for us. Honestly if I get asked one more time why we're not utilizing iSCSI or <insert buzzword here> more .... But that's another matter.\n\n> \n> I think as much as anything the busses on the dells are the\n> problem,\n> resulting in pretty poor throughput, especially true of the\n> old\n> serverworks chipset machines. Those things are pretty much\n> boat\n> anchors.\n\nFunny that, possibly explains some of the useless supermicro hardware I had a while back.\n\n\n \n", "msg_date": "Sat, 22 Nov 2008 22:00:11 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "Glyn Astill wrote:\n> --- On Sat, 22/11/08, Scott Marlowe <[email protected]> wrote:\n> \n> \n>>You really have two choices. First is to try and use it as\n>>a plain\n>>SCSI card, maybe with caching turned on, and do the raid in\n>>software.\n>>Second is to cut it into pieces and make jewelry out of it.\n> \n> \n> Haha, I'm not really into jewelry, although I had thought of smacking it into a pile of dust with a lump hammer, that's much more my thing.\n> \n> \n>> Anything\n>>before the Perc 6 series is seriously brain damaged, and\n>>the Perc6\n>>brings the dell raid array line squarly in line with a 5\n>>year old LSI\n>>megaraid, give or take. And that's being generous.\n>>\n> \n> \n> Well this card thinks it's a 5 year old lsi megaraid. 
I've got a pile of perc5i megaraid paperweights on my desk at work, so this was kinda expected really.\n> \n> \n>>>I've tried writeback and write through modes,\n>>\n>>tried changing cache flush times, disabled and enabled\n>>multiple PCI delayed transactions, all seem to have little\n>>effect.\n>>\n>>Yeah, it's like trying to performance tune a yugo.\n>>\n> \n> \n> Did I mention I drive a yugo?\n> \n> \n>>>Finally I decided to wave goodbye to Dell's\n>>\n>>firmware. LSI has it down as a MegaRAID 493 elite 1600, so I\n>>flashed it with their latest firmware. Doesn't seem to\n>>have helped either though.\n>>\n>>Does it have a battery backup module? Often you can't\n>>really turn on\n>>write-back without one. That would certainly slow things\n>>down. But\n>>you should certainly expect > 20 M/s on a modern RAID\n>>controller\n>>writing out to a 4 disk RAID10\n>>\n> \n> \n> Yeah the battery's on it, that and the 128Mb is really the only reason I thought I'd give it a whirl.\n> \n> \n> \n> \nIs the battery functioning? We found that the unit had to be on and charged before write back caching\nwould work.\n", "msg_date": "Mon, 24 Nov 2008 08:23:54 -0500", "msg_from": "Steve Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "--- On Mon, 24/11/08, Steve Clark <[email protected]> wrote:\n\n> > Yeah the battery's on it, that and the 128Mb is\n> really the only reason I thought I'd give it a whirl.\n> > \n> > \n> Is the battery functioning? We found that the unit had to\n> be on and charged before write back caching\n> would work.\n\nYeah the battery is on there, and in the BIOS it says it's \"PRESENT\" and the status is \"GOOD\".\n\n\n\n\n \n", "msg_date": "Mon, 24 Nov 2008 14:49:17 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "On Mon, Nov 24, 2008 at 7:49 AM, Glyn Astill <[email protected]> wrote:\n> --- On Mon, 24/11/08, Steve Clark <[email protected]> wrote:\n>\n>> > Yeah the battery's on it, that and the 128Mb is\n>> really the only reason I thought I'd give it a whirl.\n>> >\n>> >\n>> Is the battery functioning? We found that the unit had to\n>> be on and charged before write back caching\n>> would work.\n>\n> Yeah the battery is on there, and in the BIOS it says it's \"PRESENT\" and the status is \"GOOD\".\n\nIf I remember correctly, older LSI cards had pretty poor performance\nin RAID 1+0 (or any layered RAID really). Have you tried setting up\nRAID-1 pairs on the card and then striping them with the OS?\n", "msg_date": "Mon, 24 Nov 2008 07:52:32 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "--- Scott Marlowe <[email protected]> wrote:\n\n> >\n> > Yeah the battery is on there, and in the BIOS it says it's\n> \"PRESENT\" and the status is \"GOOD\".\n> \n> If I remember correctly, older LSI cards had pretty poor\n> performance\n> in RAID 1+0 (or any layered RAID really). Have you tried setting\n> up\n> RAID-1 pairs on the card and then striping them with the OS?\n> \n\nNot yet no, but that's a good suggestion and I do intend to give it a\nwhirl. 
I get about 27MB/s from raid 1 (10 is about the same) so\nhopefully I can up the throughput to the speed of about one disk with\nsw raid.\n\nFor kicks I did try raid 5 on it; 6.9MB/s made it hard to resist\ngoing to get the hammer, which is still a very attractive option.\n\n\n\n \n", "msg_date": "Mon, 24 Nov 2008 15:06:02 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "On Mon, Nov 24, 2008 at 8:06 AM, Glyn Astill <[email protected]> wrote:\n> --- Scott Marlowe <[email protected]> wrote:\n>\n>> >\n>> > Yeah the battery is on there, and in the BIOS it says it's\n>> \"PRESENT\" and the status is \"GOOD\".\n>>\n>> If I remember correctly, older LSI cards had pretty poor\n>> performance\n>> in RAID 1+0 (or any layered RAID really). Have you tried setting\n>> up\n>> RAID-1 pairs on the card and then striping them with the OS?\n>>\n>\n> Not yet no, but that's a good suggestion and I do intend to give it a\n> whirl. I get about 27MB/s from raid 1 (10 is about the same) so\n> hopefully I can up the throughput to the speed of about one disk with\n> sw raid.\n>\n> For kicks I did try raid 5 on it; 6.9MB/s made it hard to resist\n> going to get the hammer, which is still a very attractive option.\n\nWell, I prefer making keychain fobs still, but from a technical\nperspective, I guess either option is a good one.\n\nSrsly, also look at running pure sw RAID on it with the controller\nproviding caching only. I don't expect a PERC 3DC to win any awards,\nbut the less you give that card to do the better off you'll be.\n", "msg_date": "Mon, 24 Nov 2008 08:18:18 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "On Monday 24 November 2008 14:49:17 Glyn Astill wrote:\n> --- On Mon, 24/11/08, Steve Clark <[email protected]> wrote:\n> > > Yeah the battery's on it, that and the 128Mb is\n> >\n> > really the only reason I thought I'd give it a whirl.\n> >\n> >\n> > Is the battery functioning? We found that the unit had to\n> > be on and charged before write back caching\n> > would work.\n>\n> Yeah the battery is on there, and in the BIOS it says it's \"PRESENT\" and\n> the status is \"GOOD\".\n\nSorry I deleted the beginning of this on getting back from a week off.\n\nWriteback is configurable. You can enabled write back caching when the unit is \nnot charged if you like. It is offered when you create the array (and can be \nchanged later). It is arguably a silly thing to do, but it is an option.\n\nI have some reasonable performance stats for this card assuming you have a \nsuitably recent version of the driver software, DELL use to ship with a Linux \nkernel that had a broken driver for this card resulting is very poor \nperformance (i.e. substantially slower than software RAID). I have a note \nnever to use with Linux before 2.6.22 as the LSI driver bundled had issues, \nDELL themselves shipped (if you asked \"why is performance so bad\") a Redhat \nkernel with a later driver for the card than the official Linux kernel.\n\nThat said a couple of weeks back ours corrupted a volume on replacing a dead \nhard disk, so I'm never touching these cheap and tacky LSI RAID cards ever \nagain. 
It is suppose to just start rebuilding the array when you insert the \nreplacement drive, if it doesn't \"just work\" schedule some down time and \nfigure out exactly why, don't (for example) blindly follow the instructions \nin the manual on what to do if it doesn't \"just work\".\n", "msg_date": "Mon, 24 Nov 2008 15:41:43 +0000", "msg_from": "Simon Waters <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "On Mon, Nov 24, 2008 at 8:41 AM, Simon Waters <[email protected]> wrote:\n\n> That said a couple of weeks back ours corrupted a volume on replacing a dead\n> hard disk, so I'm never touching these cheap and tacky LSI RAID cards ever\n> again. It is suppose to just start rebuilding the array when you insert the\n> replacement drive, if it doesn't \"just work\" schedule some down time and\n> figure out exactly why, don't (for example) blindly follow the instructions\n> in the manual on what to do if it doesn't \"just work\".\n\nReminds me of a horror story at a company I was at some years ago.\nAnother group was running Oracle on a nice little 4 way Xeon with a\nGig of ram (back when they was a monster server) and had an LSI card.\nThey unplugged the server to move it into the hosting center, and in\nthe move, the scsi cable came loose. When the machine came up, the\nLSI RAID marked every drive bad and the old 4xx series card had no\nfacility for forcing it to take back a drive. All their work on the\ndb was gone, newest backup was months old. I'm pretty sure they now\nunderstand why RAID5 is no replacement for a good backup plan.\n\nI had a 438 in a dual ppro200, and it worked just fine, but I never\ntrusted it to auto rebuild anything, and made backups every night. It\nwas slow (in the 30 meg/second reads on a 6 disk RAID 5, not faster in\nRAID-10 for reads or writes) but reliable.\n\nNewer LSI cards seem quite nice, but I'm now using an Areca 16xx\nseries and am so far very happy with it's reliability and somewhat\nhappy with its performance. Sequential read speed is meh, but random\nperformance is very good, so for a db server, it's a nice unit.\n", "msg_date": "Mon, 24 Nov 2008 09:24:09 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" }, { "msg_contents": "\n> Not yet no, but that's a good suggestion and I do intend to give it a\n> whirl. I get about 27MB/s from raid 1 (10 is about the same) so\n> hopefully I can up the throughput to the speed of about one disk with\n> sw raid.\n\n\tFYI I get more than 200 MB/s out of a Linux Software RAID5 of 3 SATA \ndrives (the new Samsung Spinpoints...)\n\t(Intel ICH8 chipset, Core 2 Duo).\n", "msg_date": "Mon, 24 Nov 2008 17:56:31 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perc 3 DC" } ]
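For anyone wanting to try the "RAID-1 pairs on the card, striped in the OS" idea from this thread, a rough sketch follows. Device names, chunk size, filesystem and mount point are all assumptions: the controller would first be set up with two 2-disk RAID-1 logical drives, assumed here to appear as /dev/sdb and /dev/sdc.

# stripe the two hardware mirrors with md
mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/r10test

# repeat the sequential write test used earlier in the thread (file roughly 2x RAM)
dd if=/dev/zero of=/mnt/r10test/ddfile bs=1M count=8192
sync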
[ { "msg_contents": "Both queries return same result (19) and return same data.\nPattern query is a much slower (93 sec) than equality check (13 sec).\nHow to fix this ?\nUsing 8.1.4, utf-8 encoding, et-EE locale.\n\nAndrus.\n\nSELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode = '99000010' AND dok.kuupaev BETWEEN '2008-11-21' AND \n'2008-11-21'\n AND dok.yksus LIKE 'ORISSAARE%'\n\n\"Aggregate (cost=43.09..43.10 rows=1 width=0) (actual \ntime=12674.675..12674.679 rows=1 loops=1)\"\n\" -> Nested Loop (cost=29.57..43.08 rows=1 width=0) (actual \ntime=2002.045..12673.645 rows=19 loops=1)\"\n\" -> Nested Loop (cost=29.57..37.06 rows=1 width=24) (actual \ntime=2001.922..12672.344 rows=19 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..3.47 \nrows=1 width=4) (actual time=342.812..9810.627 rows=319 loops=1)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND \n(kuupaev <= '2008-11-21'::date))\"\n\" Filter: (yksus ~~ 'ORISSAARE%'::text)\"\n\" -> Bitmap Heap Scan on rid (cost=29.57..33.58 rows=1 \nwidth=28) (actual time=8.948..8.949 rows=0 loops=319)\"\n\" Recheck Cond: ((\"outer\".dokumnr = rid.dokumnr) AND \n(rid.toode = '99000010'::bpchar))\"\n\" -> BitmapAnd (cost=29.57..29.57 rows=1 width=0) \n(actual time=8.930..8.930 rows=0 loops=319)\"\n\" -> Bitmap Index Scan on rid_dokumnr_idx \n(cost=0.00..2.52 rows=149 width=0) (actual time=0.273..0.273 rows=2 \nloops=319)\"\n\" Index Cond: (\"outer\".dokumnr = \nrid.dokumnr)\"\n\" -> Bitmap Index Scan on rid_toode_idx \n(cost=0.00..26.79 rows=1941 width=0) (actual time=8.596..8.596 rows=15236 \nloops=319)\"\n\" Index Cond: (toode = '99000010'::bpchar)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1 \nwidth=24) (actual time=0.043..0.048 rows=1 loops=19)\"\n\" Index Cond: ('99000010'::bpchar = toode)\"\n\"Total runtime: 12675.191 ms\"\n\nexplain analyze SELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like '99000010%' AND dok.kuupaev BETWEEN '2008-11-21' AND \n'2008-11-21'\n AND dok.yksus LIKE 'ORISSAARE%'\n\n\n\"Aggregate (cost=15.52..15.53 rows=1 width=0) (actual \ntime=92966.501..92966.505 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..15.52 rows=1 width=0) (actual \ntime=24082.032..92966.366 rows=19 loops=1)\"\n\" -> Nested Loop (cost=0.00..9.50 rows=1 width=24) (actual \ntime=24081.919..92965.116 rows=19 loops=1)\"\n\" Join Filter: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..3.47 \nrows=1 width=4) (actual time=0.203..13924.324 rows=319 loops=1)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND \n(kuupaev <= '2008-11-21'::date))\"\n\" Filter: (yksus ~~ 'ORISSAARE%'::text)\"\n\" -> Index Scan using rid_toode_pattern_idx on rid \n(cost=0.00..6.01 rows=1 width=28) (actual time=0.592..166.778 rows=15235 \nloops=319)\"\n\" Index Cond: ((toode ~>=~ '99000010'::bpchar) AND (toode \n~<~ '99000011'::bpchar))\"\n\" Filter: (toode ~~ '99000010%'::text)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.01 rows=1 \nwidth=24) (actual time=0.041..0.046 rows=1 loops=19)\"\n\" Index Cond: (\"outer\".toode = toode.toode)\"\n\"Total runtime: 92967.512 ms\"\n\n", "msg_date": "Sat, 22 Nov 2008 20:04:30 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing pattern index query speed" }, { "msg_contents": "Andrus wrote:\n> Both queries return same result (19) and return same data.\n> Pattern query is a much slower (93 
sec) than equality check (13 sec).\n> How to fix this ?\n> Using 8.1.4, utf-8 encoding, et-EE locale\n\nThey're different queries. The fact that they return the same results is\na coincidence.\n\nThis\n\n> WHERE rid.toode = '99000010' \n\nIs a different condition to this\n\n> WHERE rid.toode like '99000010%'\n\nYou aren't going to get the same plans.\n\nAnyway, I think the problem is in the dok JOIN rid bit look:\n\n> \"Aggregate (cost=43.09..43.10 rows=1 width=0) (actual\n> time=12674.675..12674.679 rows=1 loops=1)\"\n> \" -> Nested Loop (cost=29.57..43.08 rows=1 width=0) (actual\n> time=2002.045..12673.645 rows=19 loops=1)\"\n> \" -> Nested Loop (cost=29.57..37.06 rows=1 width=24) (actual\n> time=2001.922..12672.344 rows=19 loops=1)\"\n\n> \"Aggregate (cost=15.52..15.53 rows=1 width=0) (actual\n> time=92966.501..92966.505 rows=1 loops=1)\"\n> \" -> Nested Loop (cost=0.00..15.52 rows=1 width=0) (actual\n> time=24082.032..92966.366 rows=19 loops=1)\"\n> \" -> Nested Loop (cost=0.00..9.50 rows=1 width=24) (actual\n> time=24081.919..92965.116 rows=19 loops=1)\"\n\nThese are the same but the times are different. I'd be very surprised if\nyou can reproduce these times reliably.\n\n\nCan I give you some wider-ranging suggestions Andrus?\n1. Fix the vacuuming issue in your hash-join question.\n2. Monitor the system to make sure you know if/when disk activity is high.\n3. *Then* start to profile individual queries and look into their plans.\nChange the queries one at a time and monitor again.\n\nOtherwise, it's very difficult to figure out whether changes you make\nare effective.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 24 Nov 2008 09:36:17 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Richard,\n\n> These are the same but the times are different. I'd be very surprised if\n> you can reproduce these times reliably.\n\nI re-tried today again and got same results: in production database pattern \nquery is many times slower that equality query.\ntoode and rid base contain only single product starting with 99000010\nSo both queries should scan exactly same numbers of rows.\n\n> Can I give you some wider-ranging suggestions Andrus?\n> 1. Fix the vacuuming issue in your hash-join question.\n\nI have ran VACUUM FULL VERBOSE ANALYSE and set max_fsm_pages=150000\nSo issue is fixed before those tests.\n\n> 2. Monitor the system to make sure you know if/when disk activity is high.\n\nI optimized this system. Now there are short (some seconds) sales queries \nabout after every 5 - 300 seconds which cause few disk activity and add few \nnew rows to some tables.\nI havent seen that this activity affects to this test result.\n\n> 3. *Then* start to profile individual queries and look into their plans.\n> Change the queries one at a time and monitor again.\n\nHow to change pattern matching query to faster ?\n\nAndrus.\n\nBtw.\n\nI tried to reproduce this big difference in test server in 8.3 using sample \ndata script below and got big difference but in opposite direction.\n\nexplain analyze SELECT sum(1)\n FROM orders\nJOIN orders_products USING (order_id)\nJOIN products USING (product_id)\nWHERE orders.order_date>'2006-01-01' and ...\n\ndifferent where clauses produce different results:\n\nAND orders_products.product_id = '3370000000000000' -- 880 .. 926 ms\nAND orders_products.product_id like '3370000000000000%' -- 41 ..98 ms\n\nSo patter index is 10 .. 20 times (!) 
faster always.\nNo idea why.\n\nTest data creation script:\n\nbegin;\nCREATE OR REPLACE FUNCTION Counter() RETURNS int IMMUTABLE AS\n$_$\nSELECT 3500000;\n$_$ LANGUAGE SQL;\n\nCREATE TEMP TABLE orders (order_id INTEGER NOT NULL, order_date DATE NOT \nNULL);\nCREATE TEMP TABLE products (product_id CHAR(20) NOT NULL, product_name \nchar(70) NOT NULL, quantity numeric(12,2) default 1);\nCREATE TEMP TABLE orders_products (order_id INTEGER NOT NULL, product_id \nCHAR(20),\n id serial, price numeric(12,2) default 1 );\n\nINSERT INTO products SELECT (n*power( 10,13))::INT8::CHAR(20),\n 'product number ' || n::TEXT FROM generate_series(0,13410) AS n;\n\nINSERT INTO orders\nSELECT n,'2005-01-01'::date + (4000.0 * n/Counter() * '1 DAY'::interval)\n FROM generate_series(0, Counter()/3 ) AS n;\n\nSET work_mem TO 2097151;\n\nINSERT INTO orders_products SELECT\n generate_series/3 as order_id,\n ( (1+ (generate_series % 13410))*power( 10,13))::INT8::CHAR(20) AS \nproduct_id\nFROM generate_series(1, Counter());\n\nALTER TABLE orders ADD PRIMARY KEY (order_id);\nALTER TABLE products ADD PRIMARY KEY (product_id);\nALTER TABLE orders_products ADD PRIMARY KEY (id);\n\nALTER TABLE orders_products ADD FOREIGN KEY (product_id) REFERENCES \nproducts(product_id);\nALTER TABLE orders_products ADD FOREIGN KEY (order_id) REFERENCES \norders(order_id) ON DELETE CASCADE;\n\nCREATE INDEX orders_date ON orders( order_date );\nCREATE INDEX order_product_pattern_idx ON orders_products( product_id \nbpchar_pattern_ops );\n\nCOMMIT;\nSET work_mem TO DEFAULT;\nANALYZE; \n\n", "msg_date": "Mon, 24 Nov 2008 22:33:37 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Andrus,\n\nMy first thought on the query where a pattern being faster than the query with an exact value is that the planner does not have good enough statistics on that column. Without looking at the explain plans further, I would suggest trying something simple. The fact that it is fasster on 8.3 but slower on 8.1 may have to do with changes between versions, or may simply be due to luck in the statistics sampling.\n\nSee if increasing the statistics target on that column significantly does anything:\n\nEXPLAIN (your query);\nALTER TABLE orders_products ALTER COLUMN product_id SET STATISTICS 2000;\nANALYZE orders_products;\nEXPLAIN (your query);\n\n2000 is simply a guess of mine for a value much larger than the default. This will generally make query planning slower but the system will have a lot more data about that column and the distribution of data in it. This should help stabilize the query performance.\n\nIf this has an effect, the query plans will change.\n\nYour question below really boils down to something more simple:\n --Why is the most optimal query plan not chosen? This is usually due to either insufficient statistics or quirks in how the query planner works on a specific data set or with certain configuration options.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Andrus\nSent: Monday, November 24, 2008 12:34 PM\nTo: Richard Huxton\nCc: [email protected]\nSubject: Re: [PERFORM] Increasing pattern index query speed\n\nRichard,\n\n> These are the same but the times are different. 
I'd be very surprised if\n> you can reproduce these times reliably.\n\nI re-tried today again and got same results: in production database pattern\nquery is many times slower that equality query.\ntoode and rid base contain only single product starting with 99000010\nSo both queries should scan exactly same numbers of rows.\n\n> Can I give you some wider-ranging suggestions Andrus?\n> 1. Fix the vacuuming issue in your hash-join question.\n\nI have ran VACUUM FULL VERBOSE ANALYSE and set max_fsm_pages=150000\nSo issue is fixed before those tests.\n\n> 2. Monitor the system to make sure you know if/when disk activity is high.\n\nI optimized this system. Now there are short (some seconds) sales queries\nabout after every 5 - 300 seconds which cause few disk activity and add few\nnew rows to some tables.\nI havent seen that this activity affects to this test result.\n\n> 3. *Then* start to profile individual queries and look into their plans.\n> Change the queries one at a time and monitor again.\n\nHow to change pattern matching query to faster ?\n\nAndrus.\n\nBtw.\n\nI tried to reproduce this big difference in test server in 8.3 using sample\ndata script below and got big difference but in opposite direction.\n\nexplain analyze SELECT sum(1)\n FROM orders\nJOIN orders_products USING (order_id)\nJOIN products USING (product_id)\nWHERE orders.order_date>'2006-01-01' and ...\n\ndifferent where clauses produce different results:\n\nAND orders_products.product_id = '3370000000000000' -- 880 .. 926 ms\nAND orders_products.product_id like '3370000000000000%' -- 41 ..98 ms\n\nSo patter index is 10 .. 20 times (!) faster always.\nNo idea why.\n\nTest data creation script:\n\nbegin;\nCREATE OR REPLACE FUNCTION Counter() RETURNS int IMMUTABLE AS\n$_$\nSELECT 3500000;\n$_$ LANGUAGE SQL;\n\nCREATE TEMP TABLE orders (order_id INTEGER NOT NULL, order_date DATE NOT\nNULL);\nCREATE TEMP TABLE products (product_id CHAR(20) NOT NULL, product_name\nchar(70) NOT NULL, quantity numeric(12,2) default 1);\nCREATE TEMP TABLE orders_products (order_id INTEGER NOT NULL, product_id\nCHAR(20),\n id serial, price numeric(12,2) default 1 );\n\nINSERT INTO products SELECT (n*power( 10,13))::INT8::CHAR(20),\n 'product number ' || n::TEXT FROM generate_series(0,13410) AS n;\n\nINSERT INTO orders\nSELECT n,'2005-01-01'::date + (4000.0 * n/Counter() * '1 DAY'::interval)\n FROM generate_series(0, Counter()/3 ) AS n;\n\nSET work_mem TO 2097151;\n\nINSERT INTO orders_products SELECT\n generate_series/3 as order_id,\n ( (1+ (generate_series % 13410))*power( 10,13))::INT8::CHAR(20) AS\nproduct_id\nFROM generate_series(1, Counter());\n\nALTER TABLE orders ADD PRIMARY KEY (order_id);\nALTER TABLE products ADD PRIMARY KEY (product_id);\nALTER TABLE orders_products ADD PRIMARY KEY (id);\n\nALTER TABLE orders_products ADD FOREIGN KEY (product_id) REFERENCES\nproducts(product_id);\nALTER TABLE orders_products ADD FOREIGN KEY (order_id) REFERENCES\norders(order_id) ON DELETE CASCADE;\n\nCREATE INDEX orders_date ON orders( order_date );\nCREATE INDEX order_product_pattern_idx ON orders_products( product_id\nbpchar_pattern_ops );\n\nCOMMIT;\nSET work_mem TO DEFAULT;\nANALYZE;\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Nov 2008 14:50:56 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Andrus 
wrote:\n> \n> So patter index is 10 .. 20 times (!) faster always.\n> No idea why.\n\nBecause you don't have a normal index on the product_id column? You\ncan't use xxx_pattern_ops indexes for non-pattern tests.\n\n> Test data creation script:\n\nThe only change to the script was the obvious char(nn) => varchar(nn)\nand I didn't use TEMP tables (so I could see what I was doing). Then, I\ncreated the \"standard\" index on order_products.product_id.\n\nEXPLAIN ANALYSE from my slow dev box are listed below. Database is in\nLATIN9 encoding with locale=C.\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2993.69..2993.70 rows=1 width=0) (actual\ntime=2.960..2.960 rows=1 loops=1)\n -> Nested Loop (cost=10.81..2993.23 rows=182 width=0) (actual\ntime=0.972..2.901 rows=189 loops=1)\n -> Index Scan using products_pkey on products\n(cost=0.00..8.27 rows=1 width=18) (actual time=0.017..0.019 rows=1 loops=1)\n Index Cond: ((product_id)::text = '3370000000000000'::text)\n -> Nested Loop (cost=10.81..2983.14 rows=182 width=18)\n(actual time=0.951..2.785 rows=189 loops=1)\n -> Bitmap Heap Scan on orders_products\n(cost=10.81..942.50 rows=251 width=22) (actual time=0.296..0.771\nrows=261 loops=1)\n Recheck Cond: ((product_id)::text =\n'3370000000000000'::text)\n -> Bitmap Index Scan on\norder_product_pattern_eq_idx (cost=0.00..10.75 rows=251 width=0)\n(actual time=0.230..0.230 rows=261 loops=1)\n Index Cond: ((product_id)::text =\n'3370000000000000'::text)\n -> Index Scan using orders_pkey on orders\n(cost=0.00..8.12 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=261)\n Index Cond: (orders.order_id =\norders_products.order_id)\n Filter: (orders.order_date > '2006-01-01'::date)\n Total runtime: 3.051 ms\n(13 rows)\n\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=25.56..25.57 rows=1 width=0) (actual time=8.244..8.245\nrows=1 loops=1)\n -> Nested Loop (cost=0.00..25.55 rows=1 width=0) (actual\ntime=1.170..8.119 rows=378 loops=1)\n -> Nested Loop (cost=0.00..17.17 rows=1 width=4) (actual\ntime=0.043..4.167 rows=522 loops=1)\n -> Index Scan using order_product_pattern_eq_idx on\norders_products (cost=0.00..8.88 rows=1 width=22) (actual\ntime=0.029..1.247 rows=522 loops=1)\n Index Cond: (((product_id)::text >=\n'3370000000000000'::text) AND ((product_id)::text <\n'3370000000000001'::text))\n Filter: ((product_id)::text ~~\n'3370000000000000%'::text)\n -> Index Scan using products_pkey on products\n(cost=0.00..8.27 rows=1 width=18) (actual time=0.004..0.004 rows=1\nloops=522)\n Index Cond: ((products.product_id)::text =\n(orders_products.product_id)::text)\n -> Index Scan using orders_pkey on orders (cost=0.00..8.37\nrows=1 width=4) (actual time=0.006..0.006 rows=1 loops=522)\n Index Cond: (orders.order_id = orders_products.order_id)\n Filter: (orders.order_date > '2006-01-01'::date)\n Total runtime: 8.335 ms\n(12 rows)\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Nov 2008 10:41:12 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Andrus schrieb:\n> Richard,\n>\n>> These are the same but the times are different. 
I'd be very surprised if\n>> you can reproduce these times reliably.\n>\n> I re-tried today again and got same results: in production database \n> pattern query is many times slower that equality query.\n> toode and rid base contain only single product starting with 99000010\n> So both queries should scan exactly same numbers of rows.\n>\n>> Can I give you some wider-ranging suggestions Andrus?\n>> 1. Fix the vacuuming issue in your hash-join question.\n>\n> I have ran VACUUM FULL VERBOSE ANALYSE and set max_fsm_pages=150000\n> So issue is fixed before those tests.\n>\n>> 2. Monitor the system to make sure you know if/when disk activity is \n>> high.\n>\n> I optimized this system. Now there are short (some seconds) sales \n> queries about after every 5 - 300 seconds which cause few disk \n> activity and add few new rows to some tables.\n> I havent seen that this activity affects to this test result.\n>\n>> 3. *Then* start to profile individual queries and look into their plans.\n>> Change the queries one at a time and monitor again.\n>\n> How to change pattern matching query to faster ?\n>\n> Andrus.\n>\n> Btw.\n>\n> I tried to reproduce this big difference in test server in 8.3 using \n> sample data script below and got big difference but in opposite \n> direction.\n>\n> explain analyze SELECT sum(1)\n> FROM orders\n> JOIN orders_products USING (order_id)\n> JOIN products USING (product_id)\n> WHERE orders.order_date>'2006-01-01' and ...\n>\n> different where clauses produce different results:\n>\n> AND orders_products.product_id = '3370000000000000' -- 880 .. 926 ms\n> AND orders_products.product_id like '3370000000000000%' -- 41 ..98 ms\n>\n> So patter index is 10 .. 20 times (!) faster always.\n> No idea why.\n>\n> Test data creation script:\n>\n> begin;\n> CREATE OR REPLACE FUNCTION Counter() RETURNS int IMMUTABLE AS\n> $_$\n> SELECT 3500000;\n> $_$ LANGUAGE SQL;\n>\n> CREATE TEMP TABLE orders (order_id INTEGER NOT NULL, order_date DATE \n> NOT NULL);\n> CREATE TEMP TABLE products (product_id CHAR(20) NOT NULL, product_name \n> char(70) NOT NULL, quantity numeric(12,2) default 1);\n> CREATE TEMP TABLE orders_products (order_id INTEGER NOT NULL, \n> product_id CHAR(20),\n> id serial, price numeric(12,2) default 1 );\n>\n> INSERT INTO products SELECT (n*power( 10,13))::INT8::CHAR(20),\n> 'product number ' || n::TEXT FROM generate_series(0,13410) AS n;\n>\n> INSERT INTO orders\n> SELECT n,'2005-01-01'::date + (4000.0 * n/Counter() * '1 DAY'::interval)\n> FROM generate_series(0, Counter()/3 ) AS n;\n>\n> SET work_mem TO 2097151;\n>\n> INSERT INTO orders_products SELECT\n> generate_series/3 as order_id,\n> ( (1+ (generate_series % 13410))*power( 10,13))::INT8::CHAR(20) AS \n> product_id\n> FROM generate_series(1, Counter());\n>\n> ALTER TABLE orders ADD PRIMARY KEY (order_id);\n> ALTER TABLE products ADD PRIMARY KEY (product_id);\n> ALTER TABLE orders_products ADD PRIMARY KEY (id);\n>\n> ALTER TABLE orders_products ADD FOREIGN KEY (product_id) REFERENCES \n> products(product_id);\n> ALTER TABLE orders_products ADD FOREIGN KEY (order_id) REFERENCES \n> orders(order_id) ON DELETE CASCADE;\n>\n> CREATE INDEX orders_date ON orders( order_date );\n> CREATE INDEX order_product_pattern_idx ON orders_products( product_id \n> bpchar_pattern_ops );\n>\n> COMMIT;\n> SET work_mem TO DEFAULT;\n> ANALYZE;\n>\nNo wonder that = compares bad, you created the index this way:\nCREATE INDEX order_product_pattern_idx ON orders_products( product_id \nbpchar_pattern_ops );\nwhy not:\nCREATE INDEX 
order_product_pattern_idx ON orders_products( product_id);\n\nexplain analyze SELECT sum(1)\nFROM orders\nJOIN orders_products USING (order_id)\nJOIN products USING (product_id)\nWHERE orders.order_date>'2006-01-01'\nAND orders_products.product_id = '3370000000000000';\n\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3013.68..3013.69 rows=1 width=0) (actual \ntime=8.206..8.207 rows=1 loops=1)\n -> Nested Loop (cost=10.83..3013.21 rows=185 width=0) (actual \ntime=2.095..7.962 rows=189 loops=1)\n -> Index Scan using products_pkey on products \n(cost=0.00..8.27 rows=1 width=18) (actual time=0.036..0.038 rows=1 loops=1)\n Index Cond: ((product_id)::text = '3370000000000000'::text)\n -> Nested Loop (cost=10.83..3003.09 rows=185 width=18) \n(actual time=2.052..7.474 rows=189 loops=1)\n -> Bitmap Heap Scan on orders_products \n(cost=10.83..949.68 rows=253 width=22) (actual time=0.161..0.817 \nrows=261 loops=1)\n Recheck Cond: ((product_id)::text = \n'3370000000000000'::text)\n -> Bitmap Index Scan on foo (cost=0.00..10.76 \nrows=253 width=0) (actual time=0.116..0.116 rows=261 loops=1)\n Index Cond: ((product_id)::text = \n'3370000000000000'::text)\n -> Index Scan using orders_pkey on orders \n(cost=0.00..8.10 rows=1 width=4) (actual time=0.020..0.021 rows=1 loops=261)\n Index Cond: (orders.order_id = \norders_products.order_id)\n Filter: (orders.order_date > '2006-01-01'::date)\n Total runtime: 8.268 ms\n\n", "msg_date": "Wed, 26 Nov 2008 15:15:29 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Richard and Mario,\n\n> You can't use xxx_pattern_ops indexes for non-pattern tests.\n\nI missed regular index. Sorry for that. Now issue with testcase is solved. \nThank you very much.\n\nI researched issue in live 8.1.4 db a bit more.\nPerformed vacuum and whole db reindex.\nTried several times to run two same pattern queries in quiet db.\n\nadditonal condition\n\nAND dok.kuupaev BETWEEN date'2008-11-21' AND date'2008-11-21'\n\ntakes 239 seconds to run.\n\nadditonal condition\n\nAND dok.kuupaev = date'2008-11-21'\n\ntakes 1 seconds.\n\nBoth query conditions are logically the same.\nHow to make BETWEEN query fast (real queries are running as between queries \nover some date range)?\n\nP.S. VACUUM issues warning that free space map 150000 is not sufficient, \n160000 nodes reqired.\nTwo days ago after vacuum full there were 60000 used enties in FSM. 
No idea \nwhy this occurs.\n\nAndrus.\n\nset search_path to firma2,public;\nexplain analyze SELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like '99000010%'\n AND dok.kuupaev BETWEEN date'2008-11-21' AND date'2008-11-21'\n\"Aggregate (cost=17.86..17.87 rows=1 width=0) (actual \ntime=239346.647..239346.651 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..17.85 rows=1 width=0) (actual \ntime=3429.715..239345.923 rows=108 loops=1)\"\n\" -> Nested Loop (cost=0.00..11.84 rows=1 width=24) (actual \ntime=3429.666..239339.687 rows=108 loops=1)\"\n\" Join Filter: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..5.81 \nrows=1 width=4) (actual time=0.028..13.341 rows=1678 loops=1)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND \n(kuupaev <= '2008-11-21'::date))\"\n\" -> Index Scan using rid_toode_pattern_idx on rid \n(cost=0.00..6.01 rows=1 width=28) (actual time=0.025..86.156 rows=15402 \nloops=1678)\"\n\" Index Cond: ((toode ~>=~ '99000010'::bpchar) AND (toode \n~<~ '99000011'::bpchar))\"\n\" Filter: (toode ~~ '99000010%'::text)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.00 rows=1 \nwidth=24) (actual time=0.032..0.037 rows=1 loops=108)\"\n\" Index Cond: (\"outer\".toode = toode.toode)\"\n\"Total runtime: 239347.132 ms\"\n\nexplain analyze SELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like '99000010%'\n AND dok.kuupaev = date'2008-11-21'\n\"Aggregate (cost=17.86..17.87 rows=1 width=0) (actual time=707.028..707.032 \nrows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..17.85 rows=1 width=0) (actual \ntime=60.890..706.460 rows=108 loops=1)\"\n\" -> Nested Loop (cost=0.00..11.84 rows=1 width=24) (actual \ntime=60.848..701.908 rows=108 loops=1)\"\n\" -> Index Scan using rid_toode_pattern_idx on rid \n(cost=0.00..6.01 rows=1 width=28) (actual time=0.120..247.636 rows=15402 \nloops=1)\"\n\" Index Cond: ((toode ~>=~ '99000010'::bpchar) AND (toode \n~<~ '99000011'::bpchar))\"\n\" Filter: (toode ~~ '99000010%'::text)\"\n\" -> Index Scan using dok_dokumnr_idx on dok (cost=0.00..5.81 \nrows=1 width=4) (actual time=0.021..0.021 rows=0 loops=15402)\"\n\" Index Cond: (dok.dokumnr = \"outer\".dokumnr)\"\n\" Filter: (kuupaev = '2008-11-21'::date)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.00 rows=1 \nwidth=24) (actual time=0.021..0.026 rows=1 loops=108)\"\n\" Index Cond: (\"outer\".toode = toode.toode)\"\n\"Total runtime: 707.250 ms\"\n\nvmstat 5 output during running slower query:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id \nwa\n 2 0 332 738552 0 1264832 0 0 4 1 1 11 6 1 83 \n10\n 1 0 332 738520 0 1264832 0 0 0 135 259 34 24 76 0 \n0\n 1 0 332 738488 0 1264832 0 0 0 112 263 42 24 76 0 \n0\n 1 0 332 738504 0 1264832 0 0 0 13 252 19 23 77 0 \n0\n 1 0 332 738528 0 1264832 0 0 0 31 255 26 26 74 0 \n0\n 1 0 332 738528 0 1264832 0 0 0 6 251 18 27 73 0 \n0\n 1 0 332 738544 0 1264856 0 0 5 22 254 25 27 73 0 \n0\n 1 0 332 737908 0 1264856 0 0 0 13 252 22 27 73 0 \n0\n 1 0 332 737932 0 1264856 0 0 0 2 251 18 23 77 0 \n0\n 1 0 332 737932 0 1264856 0 0 0 2 251 17 25 75 0 \n0\n 1 0 332 737932 0 1264856 0 0 0 4 252 19 25 75 0 \n0\n 1 0 332 737932 0 1264856 0 0 0 0 250 16 26 74 0 \n0\n 1 0 332 737932 0 1264856 0 0 0 8 252 19 26 74 0 \n0\n 1 0 332 737924 0 1264856 0 0 0 67 252 19 24 76 0 \n0\n 1 0 332 737900 0 1264856 0 0 0 13 258 37 25 75 0 \n0\n 1 0 332 
737916 0 1264856 0 0 0 0 251 16 26 74 0 \n0\n 1 0 332 737932 0 1264856 0 0 0 2 251 18 26 74 0 \n0\n 1 1 332 736740 0 1264864 0 0 2 0 258 26 25 75 0 \n0\n 1 0 332 737716 0 1265040 0 0 10 91 267 60 28 72 0 \n0\n 1 0 332 737724 0 1265040 0 0 0 2 251 17 24 76 0 \n0\n 1 0 332 737732 0 1265044 0 0 1 219 288 128 24 76 0 \n0\n r b swpd free buff cache si so bi bo in cs us sy id \nwa\n 2 0 332 737732 0 1265044 0 0 0 20 255 25 23 77 0 \n0\n 1 0 332 737748 0 1265044 0 0 0 11 252 22 24 76 0 \n0\n 1 0 332 737748 0 1265044 0 0 0 0 250 16 24 76 0 \n0\n 1 0 332 737748 0 1265044 0 0 0 20 254 24 24 76 0 \n0\n 1 0 332 737740 0 1265044 0 0 0 87 252 20 26 74 0 \n0\n 1 0 332 737748 0 1265044 0 0 0 28 254 24 25 75 0 \n0\n 1 0 332 737748 0 1265052 0 0 2 6 251 18 27 73 0 \n0\n 1 0 332 737748 0 1265052 0 0 0 0 250 17 23 77 0 \n0\n 1 0 332 737748 0 1265052 0 0 0 2 251 17 26 74 0 \n0\n 1 0 332 737732 0 1265052 0 0 0 0 251 19 26 74 0 \n0\n 1 0 332 737732 0 1265052 0 0 0 1 251 17 25 75 0 \n0\n 1 0 332 737740 0 1265052 0 0 0 0 250 17 23 77 0 \n0\n 1 0 332 737748 0 1265052 0 0 0 0 250 16 24 76 0 \n0\n 1 0 332 737748 0 1265052 0 0 0 4 252 19 26 74 0 \n0\n 0 0 332 737740 0 1265052 0 0 0 0 252 20 12 37 51 \n0\n 0 0 332 737740 0 1265052 0 0 0 1 252 17 0 0 \n100 0 <-- query ends here probably\n 0 0 332 737740 0 1265052 0 0 0 4 251 18 0 0 \n100 0\n 0 0 332 734812 0 1265452 0 0 11 18 270 39 3 0 96 \n1\n 0 0 332 737172 0 1265632 0 0 18 153 261 35 1 0 98 \n1\n 0 0 332 737180 0 1265632 0 0 0 0 250 17 0 0 \n100 0\n 0 0 332 737188 0 1265632 0 0 0 0 251 16 0 0 \n100 0\n\n\n", "msg_date": "Wed, 26 Nov 2008 20:56:28 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Andrus wrote:\n> Richard and Mario,\n> \n>> You can't use xxx_pattern_ops indexes for non-pattern tests.\n> \n> I missed regular index. Sorry for that. Now issue with testcase is\n> solved. Thank you very much.\n> \n> I researched issue in live 8.1.4 db a bit more.\n> Performed vacuum and whole db reindex.\n> Tried several times to run two same pattern queries in quiet db.\n\nAnd the results were?\n\n> additonal condition\n\nOne problem at a time. Let's get the pattern-matching speed problems on\nyour live server sorted, then we can look at different queries.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Nov 2008 19:07:36 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Scott,\n\n>My first thought on the query where a pattern being faster than the query \n>with an exact value is that the planner does not have good enough \n>statistics on that column. Without looking at the explain plans further, I \n>would suggest trying something simple. The fact that it is fasster on 8.3 \n>but slower on 8.1 may have to do with changes between versions, or may \n>simply be due to luck in the statistics sampling.\n>See if increasing the statistics target on that column significantly does \n>anything:\n>EXPLAIN (your query);\nALTER TABLE orders_products ALTER COLUMN product_id SET STATISTICS 2000;\nANALYZE orders_products;\nEXPLAIN (your query);\n>2000 is simply a guess of mine for a value much larger than the default. \n>This will generally make query planning slower but the system will have a \n>lot more data about that column and the distribution of data in it. 
This \n>should help stabilize the query performance.\n>If this has an effect, the query plans will change.\n>Your question below really boils down to something more simple:\n> --Why is the most optimal query plan not chosen? This is usually due to \n> either insufficient statistics or quirks in how the query planner works on \n> a specific data >set or with certain configuration options.\n\nThank you very much.\nI found that AND dok.kuupaev = date'2008-11-21' runs fast but\nAND dok.kuupaev BETWEEN date'2008-11-21' AND date'2008-11-21' runs very \nslow.\n\nexplain SELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like '99000010%'\n\nplan with default statistics:\n\n\"Aggregate (cost=17.86..17.87 rows=1 width=0)\"\n\" -> Nested Loop (cost=0.00..17.85 rows=1 width=0)\"\n\" -> Nested Loop (cost=0.00..11.84 rows=1 width=24)\"\n\" Join Filter: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..5.81 \nrows=1 width=4)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND \n(kuupaev <= '2008-11-21'::date))\"\n\" -> Index Scan using rid_toode_pattern_idx on rid \n(cost=0.00..6.01 rows=1 width=28)\"\n\" Index Cond: ((toode ~>=~ '99000010'::bpchar) AND (toode \n~<~ '99000011'::bpchar))\"\n\" Filter: (toode ~~ '99000010%'::text)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.00 rows=1 \nwidth=24)\"\n\" Index Cond: (\"outer\".toode = toode.toode)\"\n\nafter statistics is changed query runs fast ( 70 ... 1000 ms)\n\nALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000;\nanalyze rid;\nexplain analyze SELECT sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE rid.toode like '99000010%'\n AND dok.kuupaev BETWEEN date'2008-11-21' AND date'2008-11-21'\n\"Aggregate (cost=27.04..27.05 rows=1 width=0) (actual time=44.830..44.834 \nrows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..27.04 rows=1 width=0) (actual \ntime=0.727..44.370 rows=108 loops=1)\"\n\" -> Nested Loop (cost=0.00..21.02 rows=1 width=24) (actual \ntime=0.688..40.519 rows=108 loops=1)\"\n\" -> Index Scan using dok_kuupaev_idx on dok (cost=0.00..5.81 \nrows=1 width=4) (actual time=0.027..8.094 rows=1678 loops=1)\"\n\" Index Cond: ((kuupaev >= '2008-11-21'::date) AND \n(kuupaev <= '2008-11-21'::date))\"\n\" -> Index Scan using rid_dokumnr_idx on rid \n(cost=0.00..15.20 rows=1 width=28) (actual time=0.011..0.011 rows=0 \nloops=1678)\"\n\" Index Cond: (\"outer\".dokumnr = rid.dokumnr)\"\n\" Filter: (toode ~~ '99000010%'::text)\"\n\" -> Index Scan using toode_pkey on toode (cost=0.00..6.00 rows=1 \nwidth=24) (actual time=0.016..0.020 rows=1 loops=108)\"\n\" Index Cond: (\"outer\".toode = toode.toode)\"\n\"Total runtime: 45.050 ms\"\n\nIt seems that you are genius.\n\nI used 1000 since doc wrote that max value is 1000\n\nRid table contains 3.5millions rows, will increase 1 millions of rows per \nyear and is updated frequently, mostly by adding.\n\nIs it OK to leave\n\nSET STATISTICS 1000;\n\nsetting for this table this column or should I try to decrease it ?\n\nAndrus. \n\n", "msg_date": "Wed, 26 Nov 2008 21:24:48 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Richard,\n\n> And the results were?\n\nResults are provided in bottom of the message to which you replied.\n\n> One problem at a time. 
Let's get the pattern-matching speed problems on\n> your live server sorted, then we can look at different queries.\n\nFirst message in this thread described the issue with query having\nadditional condition\n\nAND dok.kuupaev BETWEEN '2008-11-21' AND '2008-11-21'\n\nIt seems that this problem occurs when pattern matching and BETWEEN\nconditions are used in same query.\n\nAccording to Scott Garey great recommendation I added\n\nALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000;\n\nThis fixes testcase in live server, see my other message.\nIs it OK to run\n\nALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000\n\nin prod database or should I try to decrease 1000 to smaller value ?\nrid is big increasing table and is changed frequently, mostly by adding \nrows.\n\nAndrus. \n\n", "msg_date": "Wed, 26 Nov 2008 21:40:30 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "> Is it OK to run\n>\n> ALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000\n>\n> in prod database or should I try to decrease 1000 to smaller value ?\n> rid is big increasing table and is changed frequently, mostly by adding\n> rows.\n\npgAdmin shows default_statistic_target value has its default value 10 in \npostgresql.conf file\n\nAndrus. \n\n", "msg_date": "Wed, 26 Nov 2008 21:45:27 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Andrus wrote:\n> Richard,\n> \n>> And the results were?\n> \n> Results are provided in bottom of the message to which you replied.\n\nNo - the explains there were contrasting a date test BETWEEN versus =.\n\n>> One problem at a time. Let's get the pattern-matching speed problems on\n>> your live server sorted, then we can look at different queries.\n> \n> First message in this thread described the issue with query having\n> additional condition\n> \n> AND dok.kuupaev BETWEEN '2008-11-21' AND '2008-11-21'\n\nAh, I think I understand. The test case was *missing* this clause.\n\n> It seems that this problem occurs when pattern matching and BETWEEN\n> conditions are used in same query.\n> \n> According to Scott Garey great recommendation I added\n> \n> ALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000;\n> \n> This fixes testcase in live server, see my other message.\n> Is it OK to run\n> \n> ALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000\n> \n> in prod database or should I try to decrease 1000 to smaller value ?\n> rid is big increasing table and is changed frequently, mostly by adding\n> rows.\n\nThis will try to track the 1000 most-common values of \"toode\", whereas\nthe default is to try to track the most common 10 values. 
Tracking more\nvalues means the planner has more accurate information but makes ANALYSE\ntake longer to run, and also makes planning each query take slightly longer.\n\nTry 100, 200, 500 and see if they work *for a range of queries* -\nthere's no point in having it much higher than it needs to be.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Nov 2008 19:54:53 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Richard,\n\n>> Results are provided in bottom of the message to which you replied.\n>\n> No - the explains there were contrasting a date test BETWEEN versus =.\n\nI changed rid.toode statitics target to 100:\n\nALTER TABLE firma2.rid ALTER COLUMN toode SET STATISTICS 100;\nanalyze firma2.rid;\n\nAnalyze takes 3 seconds and testcase rans fast.\nI'm planning to monitor results by looking log file for queries which take \nlonger than 10 seconds.\n\nDo you still need results ?\nIf yes, which query and how many times should I run?\n\n> Ah, I think I understand. The test case was *missing* this clause.\n\nI added this clause to testcase. Also added char(70) colums containing \npadding characters to all three tables. Cannot still reproduce this issue\nin testcase in fast devel 8.3 notebook.\nIn testcase order_products contains product_id values in a very regular \norder, maybe this affects the results. No idea how to use random() to \ngenerate random\nproducts for every order.\n\nAndrus. \n\n", "msg_date": "Wed, 26 Nov 2008 22:20:52 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Andrus wrote:\n> Richard,\n> \n>>> Results are provided in bottom of the message to which you replied.\n>>\n>> No - the explains there were contrasting a date test BETWEEN versus =.\n> \n> I changed rid.toode statitics target to 100:\n> \n> ALTER TABLE firma2.rid ALTER COLUMN toode SET STATISTICS 100;\n> analyze firma2.rid;\n> \n> Analyze takes 3 seconds and testcase rans fast.\n> I'm planning to monitor results by looking log file for queries which\n> take longer than 10 seconds.\n\nSensible. I don't know if 10 seconds is the right value for your\ndatabase, but there will be a point that filters out most of your\ntraffic but still gives enough to find problems.\n\n> Do you still need results ?\n> If yes, which query and how many times should I run?\n\nIf changing the statistics seems to help, you're not going to want to go\nback just to repeat tests.\n\n>> Ah, I think I understand. The test case was *missing* this clause.\n> \n> I added this clause to testcase. Also added char(70) colums containing\n> padding characters to all three tables. Cannot still reproduce this issue\n> in testcase in fast devel 8.3 notebook.\n> In testcase order_products contains product_id values in a very regular\n> order, maybe this affects the results. No idea how to use random() to\n> generate random\n> products for every order.\n\nIdeally you don't even want random products. 
You want a distribution of\nproducts that matches the same \"shape\" as you have in your production\ndatabase.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Nov 2008 20:33:13 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "> I used 1000 since doc wrote that max value is 1000\n> Rid table contains 3.5millions rows, will increase 1 millions of rows per\n> year and is updated frequently, mostly by adding.\n\n> Is it OK to leave\n\n> SET STATISTICS 1000;\n\n> setting for this table this column or should I try to decrease it ?\n\n> Andrus.\n\nIf you expect millions of rows, and this is one of your most important use cases, leaving that column's statistics target at 1000 is probably fine. You will incur a small cost on most queries that use this column (query planning is more expensive as it may have to scan all 1000 items for a match), but the risk of a bad query plan and a very slow query is a lot less.\n\nIt is probably worth the small constant cost to prevent bad queries in your case, and since the table will be growing. Larger tables need larger statistics common values buckets in general.\n\nLeave this at 1000, focus on your other issues first. After all the other major issues are done you can come back and see if a smaller value is worth trying or not.\n\nYou may also end up setting higher statistics targets on some other columns to fix other issues. You may want to set the value in the configuration file higher than the default 10 -- I'd recommend starting with 40 and re-analyzing the tables. Going from 10 to 40 has a minor cost but can help the planner create significantly better queries if you have skewed data distributions.\n\n-Scott\n", "msg_date": "Wed, 26 Nov 2008 14:27:46 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing pattern index query speed" }, { "msg_contents": "Scott,\n\n>You may also end up setting higher statistics targets on some other columns\nto fix other issues. You may want to set the value in the configuration\nfile higher than the default 10 -- I'd recommend starting with 40 and\nre-analyzing the tables.\n\nThank you.\n\nI set rid.toode statistics target to 1000. 
analyze rid now takes 40 seconds.\nMore queries run less than 10 seconds after this change.\n\nI set default_statistic target to 40 and ran ANALYZE.\n\nHowever there are still queries which take more than 10 seconds.\n\nset search_path to firma2,public;\nALTER TABLE dok ALTER COLUMN kuupaev SET STATISTICS 1000;\nanalyze dok; -- 86 seconds\n\nexplain analyze select sum(1)\n FROM dok JOIN rid USING (dokumnr)\n JOIN toode USING (toode)\n WHERE dok.kuupaev>='2008-10-01'\n AND\n ( (\n dok.doktyyp IN\n('V','G','Y','K','I','T','D','N','H','M','E','B','A','R','C','F','J','Q')\n AND CASE WHEN NOT dok.objrealt OR dok.doktyyp='I' THEN dok.yksus\nELSE rid.kuluobjekt END LIKE 'RIISIPERE%'\n )\n OR\n ( dok.doktyyp IN ('O','S','I','U','D','P')\n AND CASE WHEN dok.objrealt THEN rid.kuluobjekt ELSE dok.sihtyksus\nEND LIKE 'RIISIPERE%'\n )\n )\nALTER TABLE dok ALTER COLUMN kuupaev SET STATISTICS -1;\nanalyze dok; -- 3 seconds\n\n\"Aggregate (cost=302381.83..302381.84 rows=1 width=0) (actual\ntime=32795.966..32795.970 rows=1 loops=1)\"\n\" -> Merge Join (cost=302298.07..302379.45 rows=951 width=0) (actual\ntime=31478.319..32614.691 rows=47646 loops=1)\"\n\" Merge Cond: (\"outer\".toode = \"inner\".toode)\"\n\" -> Sort (cost=300522.54..300524.92 rows=954 width=24) (actual\ntime=31254.424..31429.436 rows=47701 loops=1)\"\n\" Sort Key: rid.toode\"\n\" -> Hash Join (cost=73766.03..300475.32 rows=954 width=24)\n(actual time=920.122..30418.627 rows=47701 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" Join Filter: ((((\"inner\".doktyyp = 'V'::bpchar) OR\n(\"inner\".doktyyp = 'G'::bpchar) OR (\"inner\".doktyyp = 'Y'::bpchar) OR\n(\"inner\".doktyyp = 'K'::bpchar) OR (\"inner\".doktyyp = 'I'::bpchar) OR\n(\"inner\".doktyyp = 'T'::bpchar) OR (\"inner\".doktyyp = 'D'::bpchar) OR\n(\"inner\".doktyyp = 'N'::bpchar) OR (\"inner\".doktyyp = 'H'::bpchar) OR\n(\"inner\".doktyyp = 'M'::bpchar) OR (\"inner\".doktyyp = 'E'::bpchar) OR\n(\"inner\".doktyyp = 'B'::bpchar) OR (\"inner\".doktyyp = 'A'::bpchar) OR\n(\"inner\".doktyyp = 'R'::bpchar) OR (\"inner\".doktyyp = 'C'::bpchar) OR\n(\"inner\".doktyyp = 'F'::bpchar) OR (\"inner\".doktyyp = 'J'::bpchar) OR\n(\"inner\".doktyyp = 'Q'::bpchar)) AND (CASE WHEN ((NOT\n(\"inner\".objrealt)::boolean) OR (\"inner\".doktyyp = 'I'::bpchar)) THEN\n\"inner\".yksus ELSE \"outer\".kuluobjekt END ~~ 'RIISIPERE%'::text)) OR\n(((\"inner\".doktyyp = 'O'::bpchar) OR (\"inner\".doktyyp = 'S'::bpchar) OR\n(\"inner\".doktyyp = 'I'::bpchar) OR (\"inner\".doktyyp = 'U'::bpchar) OR\n(\"inner\".doktyyp = 'D'::bpchar) OR (\"inner\".doktyyp = 'P'::bpchar)) AND\n(CASE WHEN (\"inner\".objrealt)::boolean THEN \"outer\".kuluobjekt ELSE\n\"inner\".sihtyksus END ~~ 'RIISIPERE%'::text)))\"\n\" -> Seq Scan on rid (cost=0.00..129635.37 rows=3305337\nwidth=42) (actual time=0.040..14458.666 rows=3293574 loops=1)\"\n\" -> Hash (cost=73590.93..73590.93 rows=70042 width=38)\n(actual time=916.812..916.812 rows=72439 loops=1)\"\n\" -> Bitmap Heap Scan on dok\n(cost=414.75..73590.93 rows=70042 width=38) (actual time=28.704..589.116\nrows=72439 loops=1)\"\n\" Recheck Cond: (kuupaev >=\n'2008-10-01'::date)\"\n\" Filter: ((doktyyp = 'V'::bpchar) OR\n(doktyyp = 'G'::bpchar) OR (doktyyp = 'Y'::bpchar) OR (doktyyp =\n'K'::bpchar) OR (doktyyp = 'I'::bpchar) OR (doktyyp = 'T'::bpchar) OR\n(doktyyp = 'D'::bpchar) OR (doktyyp = 'N'::bpchar) OR (doktyyp =\n'H'::bpchar) OR (doktyyp = 'M'::bpchar) OR (doktyyp = 'E'::bpchar) OR\n(doktyyp = 'B'::bpchar) OR (doktyyp = 'A'::bpchar) OR (doktyyp =\n'R'::bpchar) OR 
(doktyyp = 'C'::bpchar) OR (doktyyp = 'F'::bpchar) OR\n(doktyyp = 'J'::bpchar) OR (doktyyp = 'Q'::bpchar) OR (doktyyp =\n'O'::bpchar) OR (doktyyp = 'S'::bpchar) OR (doktyyp = 'I'::bpchar) OR\n(doktyyp = 'U'::bpchar) OR (doktyyp = 'D'::bpchar) OR (doktyyp =\n'P'::bpchar))\"\n\" -> Bitmap Index Scan on dok_kuupaev_idx\n(cost=0.00..414.75 rows=72500 width=0) (actual time=20.049..20.049\nrows=72664 loops=1)\"\n\" Index Cond: (kuupaev >=\n'2008-10-01'::date)\"\n\" -> Sort (cost=1775.54..1809.10 rows=13423 width=24) (actual\ntime=223.235..457.888 rows=59876 loops=1)\"\n\" Sort Key: toode.toode\"\n\" -> Seq Scan on toode (cost=0.00..855.23 rows=13423\nwidth=24) (actual time=0.046..63.783 rows=13427 loops=1)\"\n\"Total runtime: 32807.767 ms\"\n\nHow to speed this up ?\n\n'RIISIPERE%' is shop group code. Using this condition can limit scan to 6\ntimes less documentes since there are 6 shops.\nMayber is it possible to create indexes or other way to force index search\nfor condition\n\n dok.doktyyp IN\n('V','G','Y','K','I','T','D','N','H','M','E','B','A','R','C','F','J','Q')\n AND CASE WHEN NOT dok.objrealt OR dok.doktyyp='I' THEN dok.yksus\nELSE rid.kuluobjekt END LIKE 'RIISIPERE%'\n )\n OR\n ( dok.doktyyp IN ('O','S','I','U','D','P')\n AND CASE WHEN dok.objrealt THEN rid.kuluobjekt ELSE dok.sihtyksus\nEND LIKE 'RIISIPERE%'\n )\n\nor is it possible to re-write this condition so that it uses existing\npattern indexes\n\nCREATE INDEX dok_sihtyksus_pattern_idx ON firma2.dok (sihtyksus\nbpchar_pattern_ops);\nCREATE INDEX dok_yksus_pattern_idx ON firma2.dok (yksus\nbpchar_pattern_ops);\n\nwithout changing tables structureˇ?\n\nAndrus. \n\n", "msg_date": "Fri, 28 Nov 2008 16:58:15 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing pattern index query speed" } ]
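The change that fixed the pattern/BETWEEN slowdown in this thread was a per-column statistics target plus a fresh ANALYZE. A minimal sketch using the table and column names from the thread; the target value is an assumption to tune per installation (100 was enough here, 1000 also worked but makes ANALYZE and query planning slower):

    ALTER TABLE firma2.rid ALTER COLUMN toode SET STATISTICS 100;
    ANALYZE firma2.rid;  -- the new target only takes effect after re-analyzing the table

For the remaining skewed columns, raising default_statistics_target moderately in postgresql.conf (for example from the default 10 to 40, as suggested above) and re-running ANALYZE is the lower-effort alternative to setting every column individually.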
[ { "msg_contents": "There are indexes on rid(dokumnr) and dok(dokumnr) and dokumnr is int.\nInstead of using single key index, 8.1.4 scans over whole rid table.\nSometimes idtelluued can contain more than single row so replacing join with \nequality is not possible.\n\nHow to fix ?\n\nAndrus.\n\nCREATE TEMP TABLE idtellUued(dokumnr INT) ON COMMIT DROP;\n INSERT INTO idtellUued VALUES(1249228);\nexplain analyze select 1\n from dok JOIN rid USING(dokumnr)\n JOIN idtellUued USING(dokumnr)\n\n\"Hash Join (cost=7483.22..222259.77 rows=5706 width=0) (actual \ntime=14905.981..27065.903 rows=8 loops=1)\"\n\" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n\" -> Seq Scan on rid (cost=0.00..198240.33 rows=3295833 width=4) (actual \ntime=0.036..15021.641 rows=3280576 loops=1)\"\n\" -> Hash (cost=7477.87..7477.87 rows=2140 width=8) (actual \ntime=0.114..0.114 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..7477.87 rows=2140 width=8) (actual \ntime=0.076..0.099 rows=1 loops=1)\"\n\" -> Seq Scan on idtelluued (cost=0.00..31.40 rows=2140 \nwidth=4) (actual time=0.006..0.011 rows=1 loops=1)\"\n\" -> Index Scan using dok_dokumnr_idx on dok (cost=0.00..3.47 \nrows=1 width=4) (actual time=0.051..0.058 rows=1 loops=1)\"\n\" Index Cond: (dok.dokumnr = \"outer\".dokumnr)\"\n\"Total runtime: 27066.080 ms\"\n\n", "msg_date": "Sat, 22 Nov 2008 21:33:28 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "seq scan over 3.3 million rows instead of single key index access" }, { "msg_contents": "\"Andrus\" <[email protected]> writes:\n\n> There are indexes on rid(dokumnr) and dok(dokumnr) and dokumnr is int.\n> Instead of using single key index, 8.1.4 scans over whole rid table.\n> Sometimes idtelluued can contain more than single row so replacing join with\n> equality is not possible.\n>\n> How to fix ?\n\nFirstly the current 8.1 release is 8.1.15. Any of the bugs fixed in those 11\nreleases might be related to this.\n\nSecondly:\n\n> CREATE TEMP TABLE idtellUued(dokumnr INT) ON COMMIT DROP;\n> INSERT INTO idtellUued VALUES(1249228);\n> explain analyze select 1\n> from dok JOIN rid USING(dokumnr)\n> JOIN idtellUued USING(dokumnr)\n>\n> \" -> Seq Scan on idtelluued (cost=0.00..31.40 rows=2140 width=4)\n> (actual time=0.006..0.011 rows=1 loops=1)\"\n\nThe planner thinks there are 2,140 rows in that temporary table so I don't\nbelieve this is from the example posted. I would suggest running ANALYZE\nidtellUued at some point before the problematic query.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Sat, 22 Nov 2008 23:03:50 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan over 3.3 million rows instead of single key index access" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> \"Andrus\" <[email protected]> writes:\n>> There are indexes on rid(dokumnr) and dok(dokumnr) and dokumnr is int.\n>> Instead of using single key index, 8.1.4 scans over whole rid table.\n>> Sometimes idtelluued can contain more than single row so replacing join with\n>> equality is not possible.\n>> \n>> How to fix ?\n\n> Firstly the current 8.1 release is 8.1.15. 
Any of the bugs fixed in those 11\n> releases might be related to this.\n\nIf this can still be reproduced in 8.1.15 it would be worth looking into.\nMy first guess is that there are multiple relevant indexes on the big\ntable and the old bugs in choose_bitmap_and() are making it mess up.\n\n\n> The planner thinks there are 2,140 rows in that temporary table so I don't\n> believe this is from the example posted. I would suggest running ANALYZE\n> idtellUued at some point before the problematic query.\n\nNo, that's a pretty likely default assumption for a never-vacuumed,\nnever-analyzed table. Your advice is correct though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Nov 2008 18:58:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan over 3.3 million rows instead of single key index access" }, { "msg_contents": "Gregory,\n\n> I would suggest running ANALYZE\n> idtellUued at some point before the problematic query.\n\nThank you.\nAfter adding analyze all is OK.\nIs analyze command required in 8.3 also ?\nOr is it better better to specify some hint at create temp table time since \nI know the number of rows before running query ?\n\nAndrus.\n\nset search_path to firma2,public;\n CREATE TEMP TABLE idtellUued(dokumnr INT) ON COMMIT DROP;\n INSERT INTO idtellUued VALUES(1249228);\nanalyze idtelluued;\n explain analyze select 1\n from dok JOIN rid USING(dokumnr)\n JOIN idtellUued USING(dokumnr)\n\n\"Nested Loop (cost=0.00..275.18 rows=3 width=0) (actual time=87.266..87.388 \nrows=8 loops=1)\"\n\" -> Nested Loop (cost=0.00..6.95 rows=1 width=8) (actual \ntime=36.613..36.636 rows=1 loops=1)\"\n\" -> Seq Scan on idtelluued (cost=0.00..1.01 rows=1 width=4) \n(actual time=0.009..0.015 rows=1 loops=1)\"\n\" -> Index Scan using dok_dokumnr_idx on dok (cost=0.00..5.93 \nrows=1 width=4) (actual time=36.585..36.590 rows=1 loops=1)\"\n\" Index Cond: (dok.dokumnr = \"outer\".dokumnr)\"\n\" -> Index Scan using rid_dokumnr_idx on rid (cost=0.00..267.23 rows=80 \nwidth=4) (actual time=50.635..50.672 rows=8 loops=1)\"\n\" Index Cond: (\"outer\".dokumnr = rid.dokumnr)\"\n\"Total runtime: 87.586 ms\"\n\n", "msg_date": "Sun, 23 Nov 2008 06:20:08 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan over 3.3 million rows instead of single key index access" }, { "msg_contents": "Andrus <[email protected]> schrieb:\n\n> There are indexes on rid(dokumnr) and dok(dokumnr) and dokumnr is int.\n> Instead of using single key index, 8.1.4 scans over whole rid table.\n> Sometimes idtelluued can contain more than single row so replacing join \n> with equality is not possible.\n>\n> How to fix ?\n>\n> Andrus.\n>\n> CREATE TEMP TABLE idtellUued(dokumnr INT) ON COMMIT DROP;\n> INSERT INTO idtellUued VALUES(1249228);\n> explain analyze select 1\n> from dok JOIN rid USING(dokumnr)\n> JOIN idtellUued USING(dokumnr)\n>\n> \"Hash Join (cost=7483.22..222259.77 rows=5706 width=0) (actual \n> time=14905.981..27065.903 rows=8 loops=1)\"\n> \" Hash Cond: (\"outer\".dokumnr = \"inner\".dokumnr)\"\n> \" -> Seq Scan on rid (cost=0.00..198240.33 rows=3295833 width=4) \n> (actual time=0.036..15021.641 rows=3280576 loops=1)\"\n\nHow many rows contains rid? The estimation are okay, rows=3295833 and\nactual rows=3280576 are nearly identical. An index-scan makes only sense\nif rid contains considerable more than 3000000 rows.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. 
That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sun, 23 Nov 2008 09:10:24 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan over 3.3 million rows instead of single key index access" }, { "msg_contents": "Andrus <[email protected]> schrieb:\n\n> There are indexes on rid(dokumnr) and dok(dokumnr) and dokumnr is int.\n> Instead of using single key index, 8.1.4 scans over whole rid table.\n> Sometimes idtelluued can contain more than single row so replacing join \n> with equality is not possible.\n>\n> How to fix ?\n>\n> Andrus.\n>\n> CREATE TEMP TABLE idtellUued(dokumnr INT) ON COMMIT DROP;\n> INSERT INTO idtellUued VALUES(1249228);\n> explain analyze select 1\n> from dok JOIN rid USING(dokumnr)\n> JOIN idtellUued USING(dokumnr)\n\nTry to analyse the idtellUued-table after the insert. The planner has no\nknowledge that this table contains only one or e few rows, the planner\nassume 1000 (iirc) in this table.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sun, 23 Nov 2008 09:13:39 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan over 3.3 million rows instead of single key index access" }, { "msg_contents": "> An index-scan makes only sense if rid contains considerable more than \n> 3000000 rows.\n\nI'm sorry, I meant using index to get the row.\n\nAndrus. \n\n", "msg_date": "Sun, 23 Nov 2008 13:43:08 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan over 3.3 million rows instead of single keyindex access" }, { "msg_contents": "am Sun, dem 23.11.2008, um 6:20:08 +0200 mailte Andrus folgendes:\n> Gregory,\n> \n> > I would suggest running ANALYZE\n> >idtellUued at some point before the problematic query.\n> \n> Thank you.\n> After adding analyze all is OK.\n> Is analyze command required in 8.3 also ?\n\nYes.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Sun, 23 Nov 2008 13:20:11 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan over 3.3 million rows instead of single key index access" } ]
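The resolution of this thread, condensed into one runnable pattern: analyze the just-filled temporary table before joining it, so the planner sees its real row count instead of the ~2140-row default estimate visible in the first plan and switches to the index scans:

    CREATE TEMP TABLE idtellUued(dokumnr INT) ON COMMIT DROP;
    INSERT INTO idtellUued VALUES(1249228);
    ANALYZE idtellUued;  -- without this the planner guesses the row count and seq-scans all of rid
    SELECT 1
      FROM dok JOIN rid USING (dokumnr)
      JOIN idtellUued USING (dokumnr);

As confirmed above, the explicit ANALYZE is still needed on 8.3: a never-analyzed table always gets a default row estimate.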
[ { "msg_contents": "Adding limit clause causes very slow query:\n\nexplain analyze select * from firma2.dok where doktyyp='J' order by dokumnr\nlimit 100\n\"Limit (cost=0.00..4371.71 rows=100 width=1107) (actual\ntime=33189.971..33189.971 rows=0 loops=1)\"\n\" -> Index Scan using dok_dokumnr_idx on dok (cost=0.00..278740.01\nrows=6376 width=1107) (actual time=33189.959..33189.959 rows=0 loops=1)\"\n\" Filter: (doktyyp = 'J'::bpchar)\"\n\"Total runtime: 33190.103 ms\"\n\n\nWithout limit is is fast:\n\nexplain analyze select * from firma2.dok where doktyyp='J' order by dokumnr\n\"Sort (cost=7061.80..7077.74 rows=6376 width=1107) (actual\ntime=0.119..0.119 rows=0 loops=1)\"\n\" Sort Key: dokumnr\"\n\" -> Index Scan using dok_doktyyp on dok (cost=0.00..3118.46 rows=6376\nwidth=1107) (actual time=0.101..0.101 rows=0 loops=1)\"\n\" Index Cond: (doktyyp = 'J'::bpchar)\"\n\"Total runtime: 0.245 ms\"\n\nHow to fix this without dropping dok_doktyyp index so that limit can safely \nused for paged data access ?\n\nindexes:\n\ndok_doktyyp: dok(doktyyp)\ndok_dokumnr_idx: dok(dokumnr)\n\ntypes:\n\ndokumnr int primary key\ndoktyyp char(1)\n\nAndrus.\n\nUsing 8.1.4\n\n", "msg_date": "Sun, 23 Nov 2008 22:52:54 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "limit clause produces wrong query plan" }, { "msg_contents": "> it was veery fast. To be honest I do not know what is happening?!\n\nThis is really weird.\nIt seems that PostgreSql OFFSET / LIMIT are not optimized and thus typical \npaging queries\n\nSELECT ... FROM bigtable ORDER BY intprimarykey OFFSET pageno*100 LIMIT 100\n\nor even first page query\n\nSELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 0 LIMIT 100\n\ncannot be used in PostgreSql at all for big tables.\n\nDo you have any idea how to fix this ?\n\nAndrus. \n\n", "msg_date": "Mon, 24 Nov 2008 19:26:22 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause produces wrong query plan" }, { "msg_contents": "On Mon, Nov 24, 2008 at 10:26 AM, Andrus <[email protected]> wrote:\n>> it was veery fast. To be honest I do not know what is happening?!\n>\n> This is really weird.\n> It seems that PostgreSql OFFSET / LIMIT are not optimized and thus typical\n> paging queries\n\nAnd how exactly should it be optimized? If a query is even moderately\ninteresting, with a few joins and a where clause, postgresql HAS to\ncreate the rows that come before your offset in order to assure that\nit's giving you the right rows.\n\n> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET pageno*100 LIMIT 100\n\nThis will get progressively slower as pageno goes up.\n\n> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 0 LIMIT 100\n\nThat should be plenty fast.\n\n> cannot be used in PostgreSql at all for big tables.\n\nCan't be used in any real database with any real complexity to its query either.\n\n> Do you have any idea how to fix this ?\n\nA standard workaround is to use some kind of sequential, or nearly so,\nid field, and then use between on that field.\n\nselect * from table where idfield between x and x+100;\n", "msg_date": "Mon, 24 Nov 2008 12:23:10 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause produces wrong query plan" }, { "msg_contents": "Scott,\n\n>And how exactly should it be optimized? 
If a query is even moderately\n>interesting, with a few joins and a where clause, postgresql HAS to\n>create the rows that come before your offset in order to assure that\n>it's giving you the right rows.\n\nSELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100\n\nIt should scan primary key in index order for 200 first keys and skipping \nfirst 100 keys.\n\n>> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 0 LIMIT 100\n>\n> That should be plenty fast.\n\nThe example which I posted shows that\n\nSELECT ... FROM bigtable ORDER BY intprimarykey LIMIT 100\n\nthis is extremely *slow*: seq scan is performed over whole bigtable.\n\n> A standard workaround is to use some kind of sequential, or nearly so,\n> id field, and then use between on that field.\n>\n> select * from table where idfield between x and x+100;\n\nUsers can delete and insert any rows in table.\nThis appoarch requires updating x in every row in big table after each\ninsert, delete or order column change and is thus extremely slow.\nSo I do'nt understand how this can be used for large tables.\n\nAndrus.\n\n", "msg_date": "Mon, 24 Nov 2008 22:04:54 +0200", "msg_from": "\"Andrus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limit clause produces wrong query plan" }, { "msg_contents": "Andrus wrote:\n> Scott,\n> \n>> And how exactly should it be optimized? If a query is even moderately\n>> interesting, with a few joins and a where clause, postgresql HAS to\n>> create the rows that come before your offset in order to assure that\n>> it's giving you the right rows.\n> \n> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100\n> \n> It should scan primary key in index order for 200 first keys and \n> skipping first 100 keys.\n\n... which if you have a lot of table joins, unions/intersects/whatever \nelse, should be done on which field and how?\n\nFor a query like:\n\nselect * t1 join t2 using (id) where t1.id='x' order by t1.id limit 100;\n\nit has to join the tables first (may involve a seq scan) to make sure \nthe id's match up, reduce the number of rows to match the where clause \n(may/may not be done first, I don't know) - the limit is applied last.\n\nit can't grab the first 100 entries from t1 - because they might not \nhave a matching id in t2, let alone match the where clause.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Tue, 25 Nov 2008 09:19:55 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause produces wrong query plan" }, { "msg_contents": "> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100\n\n\tI think pagination is overrated.\n\n\tIf the query produces, for instance, something like 100 rows or less, \nmore often than not, getting all the rows will take the exact same time as \ngetting a portion of the rows... in all those cases, it is much better to \ncache the results somewhere (user session, table, whatever) and paginate \nbased on that, rather than perform the same query lots of times. \nEspecially when some non-indexed sorting takes place in which case you are \ngonna fetch all the rows anyway. Something like row-id can be stored \ninstead of the full rows, also. There are exceptions of course.\n\n\tAnd if the query produces 20.000 results... who is ever going to scroll \nto page 1257 ?\n\n> The example which I posted shows that\n>\n> SELECT ... 
FROM bigtable ORDER BY intprimarykey LIMIT 100\n>\n> this is extremely *slow*: seq scan is performed over whole bigtable.\n\n\tThis is wrong though. It should use an index, especially if you have a \nLIMIT...\n\n", "msg_date": "Mon, 24 Nov 2008 23:58:50 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause produces wrong query plan" }, { "msg_contents": "I believe his earlier (original) complaint was that it was slower with the LIMIT than with no limit. As in, the select (*) query was faster to get the whole thing than apply the limit. Wherever that is the case, it is broken.\n\nCertainly a complex join makes this more difficult, but one would agree that with a LIMIT it should never be slower than without, right?\n\nAny LIMIT combined with an ORDER by at the last stage can't be completed without fetching every row and sorting. If there is an index on the column being sorted at each table in the join it is possible to short-circuit the plan after enough result rows are found, but this would have to iterate on each arm of the join until there were enough matches, and postgres does not have any such iterative query strategy that I'm aware of.\nFor a single large, indexed table with no joins it certainly makes sense to terminate the index scan early.\n\nOften, one is better off with explicit subselects for the join branches with explicit LIMIT on each of those (oversized to compensate for the likelihood of a match) but you have to be willing to get less than the LIMIT ammount if there are not enough matches, and that is usually bad for 'paging' style queries rather than 'report on top X' queries where the premature truncation is probably ok and is enough to make the query fast. But if you have very large datasets on the query branch arms, asking for only 10K of 300K rows to sort and then match against 10K in another arm, and find 500 items, can be a huge gain versus doing the entire thing.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Scott Marlowe\nSent: Monday, November 24, 2008 11:23 AM\nTo: Andrus\nCc: [email protected]\nSubject: Re: [PERFORM] limit clause produces wrong query plan\n\nOn Mon, Nov 24, 2008 at 10:26 AM, Andrus <[email protected]> wrote:\n>> it was veery fast. To be honest I do not know what is happening?!\n>\n> This is really weird.\n> It seems that PostgreSql OFFSET / LIMIT are not optimized and thus typical\n> paging queries\n\nAnd how exactly should it be optimized? If a query is even moderately\ninteresting, with a few joins and a where clause, postgresql HAS to\ncreate the rows that come before your offset in order to assure that\nit's giving you the right rows.\n\n> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET pageno*100 LIMIT 100\n\nThis will get progressively slower as pageno goes up.\n\n> SELECT ... 
FROM bigtable ORDER BY intprimarykey OFFSET 0 LIMIT 100\n\nThat should be plenty fast.\n\n> cannot be used in PostgreSql at all for big tables.\n\nCan't be used in any real database with any real complexity to its query either.\n\n> Do you have any idea how to fix this ?\n\nA standard workaround is to use some kind of sequential, or nearly so,\nid field, and then use between on that field.\n\nselect * from table where idfield between x and x+100;\n", "msg_date": "Mon, 24 Nov 2008 15:10:08 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit clause produces wrong query plan" } ]
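The "between on a sequential field" workaround suggested above is usually written as keyset pagination, which also sidesteps the objection about renumbering rows after inserts and deletes: the application just remembers the last primary key value it returned and asks for the next slice. A sketch against the bigtable/intprimarykey names used in the thread; 12345 is only an illustrative boundary value supplied by the application:

    -- first page
    SELECT * FROM bigtable ORDER BY intprimarykey LIMIT 100;

    -- following pages: continue from the last key already shown
    SELECT * FROM bigtable
     WHERE intprimarykey > 12345
     ORDER BY intprimarykey
     LIMIT 100;

Unlike OFFSET, each page costs roughly the same, because the index scan starts at the boundary value instead of generating and discarding all earlier rows.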
[ { "msg_contents": "Hi All;\n\nI've installed pg_buffercache and I want to use it to help define the optimal \nshared_buffers size. \n\nCurrently I run this each 15min via cron:\ninsert into buffercache_stats select now(), isdirty, count(*) as buffers, \n(count(*) * 8192) as memory from pg_buffercache group by 1,2;\n\nand here's it's explain plan\nexplain insert into buffercache_stats select now(), isdirty, count(*) as \nbuffers, (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Subquery Scan \"*SELECT*\" (cost=65.00..65.23 rows=2 width=25)\n -> HashAggregate (cost=65.00..65.12 rows=2 width=1)\n -> Function Scan on pg_buffercache_pages p (cost=0.00..55.00 \nrows=1000 width=1)\n(3 rows)\n\n\nThen once a day I will pull a report from the buffercache_stats table. The \nbuffercache_stats table is our own creation :\n\n\\d buffercache_stats\n Table \"public.buffercache_stats\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n snap_timestamp | timestamp without time zone |\n isdirty | boolean |\n buffers | integer |\n memory | integer |\n\n\nHere's my issue, the server that we'll eventually roll this out to is \nextremely busy and the every 15min query above has the potential to have a \nhuge impact on performance.\n\nDoes anyone have any suggestions per a better approach or maybe a way to \nimprove the performance for the above query ?\n\nThanks in advance...\n", "msg_date": "Mon, 24 Nov 2008 11:43:56 -0700", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Monitoring buffercache..." }, { "msg_contents": "On Mon, 2008-11-24 at 11:43 -0700, Kevin Kempter wrote:\n> Hi All;\n> \n> I've installed pg_buffercache and I want to use it to help define the optimal \n> shared_buffers size. \n> \n> Currently I run this each 15min via cron:\n> insert into buffercache_stats select now(), isdirty, count(*) as buffers, \n> (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n> \n> and here's it's explain plan\n> explain insert into buffercache_stats select now(), isdirty, count(*) as \n> buffers, (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------\n> Subquery Scan \"*SELECT*\" (cost=65.00..65.23 rows=2 width=25)\n> -> HashAggregate (cost=65.00..65.12 rows=2 width=1)\n> -> Function Scan on pg_buffercache_pages p (cost=0.00..55.00 \n> rows=1000 width=1)\n> (3 rows)\n> \n> \n> Then once a day I will pull a report from the buffercache_stats table. The \n> buffercache_stats table is our own creation :\n> \n> \\d buffercache_stats\n> Table \"public.buffercache_stats\"\n> Column | Type | Modifiers\n> ----------------+-----------------------------+-----------\n> snap_timestamp | timestamp without time zone |\n> isdirty | boolean |\n> buffers | integer |\n> memory | integer |\n> \n> \n> Here's my issue, the server that we'll eventually roll this out to is \n> extremely busy and the every 15min query above has the potential to have a \n> huge impact on performance.\n\nI wouldn't routinely run pg_buffercache on a busy database. Plus, I\ndon't think that pg_buffercache will answer this question for you. 
It\nwill tell you whats currently in the buffer pool and the clean/dirty\nstatus, but that's not the first place I'd look, but what you really\nneed is to figure out the hit ratio on the buffer pool and go from\nthere.\n\n> Does anyone have any suggestions per a better approach or maybe a way to \n> improve the performance for the above query ?\n\nYou should be able to use the blocks hit vs block read data in the\npg_stat_database view (for the overall database), and drill down into\npg_statio_user_tables/pg_statio_all_tables to get more detailed data if\nyou want.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Mon, 24 Nov 2008 14:45:22 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." }, { "msg_contents": "On Mon, Nov 24, 2008 at 11:43 AM, Kevin Kempter\n<[email protected]> wrote:\n> Hi All;\n>\n> I've installed pg_buffercache and I want to use it to help define the optimal\n> shared_buffers size.\n>\n> Currently I run this each 15min via cron:\n> insert into buffercache_stats select now(), isdirty, count(*) as buffers,\n> (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n>\n> and here's it's explain plan\n> explain insert into buffercache_stats select now(), isdirty, count(*) as\n> buffers, (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------\n> Subquery Scan \"*SELECT*\" (cost=65.00..65.23 rows=2 width=25)\n> -> HashAggregate (cost=65.00..65.12 rows=2 width=1)\n> -> Function Scan on pg_buffercache_pages p (cost=0.00..55.00\n> rows=1000 width=1)\n> (3 rows)\n>\n>\n> Then once a day I will pull a report from the buffercache_stats table. The\n> buffercache_stats table is our own creation :\n>\n> \\d buffercache_stats\n> Table \"public.buffercache_stats\"\n> Column | Type | Modifiers\n> ----------------+-----------------------------+-----------\n> snap_timestamp | timestamp without time zone |\n> isdirty | boolean |\n> buffers | integer |\n> memory | integer |\n>\n>\n> Here's my issue, the server that we'll eventually roll this out to is\n> extremely busy and the every 15min query above has the potential to have a\n> huge impact on performance.\n>\n> Does anyone have any suggestions per a better approach or maybe a way to\n> improve the performance for the above query ?\n\nI wouldn't worry about running it every 15 minutes unless it's on a\nREALLY slow machine.\n\nI just ran it in a loop over and over on my 8 core opteron server and\nit ran the load factor up by almost exactly 1.0. Under our normal\ndaily load, it sits at 1.9 to 2.5, and it climbed to 2.9 under the new\nload of running that query over and over. So, it doesn't seem to be\nblocking or anything.\n", "msg_date": "Mon, 24 Nov 2008 12:46:30 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." 
}, { "msg_contents": "On Mon, 2008-11-24 at 12:46 -0700, Scott Marlowe wrote:\n> On Mon, Nov 24, 2008 at 11:43 AM, Kevin Kempter\n> <[email protected]> wrote:\n> > Hi All;\n> >\n> > I've installed pg_buffercache and I want to use it to help define the optimal\n> > shared_buffers size.\n> >\n> > Currently I run this each 15min via cron:\n> > insert into buffercache_stats select now(), isdirty, count(*) as buffers,\n> > (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n> >\n> > and here's it's explain plan\n> > explain insert into buffercache_stats select now(), isdirty, count(*) as\n> > buffers, (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------\n> > Subquery Scan \"*SELECT*\" (cost=65.00..65.23 rows=2 width=25)\n> > -> HashAggregate (cost=65.00..65.12 rows=2 width=1)\n> > -> Function Scan on pg_buffercache_pages p (cost=0.00..55.00\n> > rows=1000 width=1)\n> > (3 rows)\n> >\n> >\n> > Then once a day I will pull a report from the buffercache_stats table. The\n> > buffercache_stats table is our own creation :\n> >\n> > \\d buffercache_stats\n> > Table \"public.buffercache_stats\"\n> > Column | Type | Modifiers\n> > ----------------+-----------------------------+-----------\n> > snap_timestamp | timestamp without time zone |\n> > isdirty | boolean |\n> > buffers | integer |\n> > memory | integer |\n> >\n> >\n> > Here's my issue, the server that we'll eventually roll this out to is\n> > extremely busy and the every 15min query above has the potential to have a\n> > huge impact on performance.\n> >\n> > Does anyone have any suggestions per a better approach or maybe a way to\n> > improve the performance for the above query ?\n> \n> I wouldn't worry about running it every 15 minutes unless it's on a\n> REALLY slow machine.\n> \n> I just ran it in a loop over and over on my 8 core opteron server and\n> it ran the load factor up by almost exactly 1.0. Under our normal\n> daily load, it sits at 1.9 to 2.5, and it climbed to 2.9 under the new\n> load of running that query over and over. So, it doesn't seem to be\n> blocking or anything.\n\nThe internal docs for pg_buffercache_pages.c state:\n\n\"To get a consistent picture of the buffer state, we must lock all\npartitions of the buffer map. Needless to say, this is horrible\nfor concurrency. Must grab locks in increasing order to avoid\npossible deadlocks.\"\n\nI'd be concerned about that running routinely.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Mon, 24 Nov 2008 14:52:06 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." }, { "msg_contents": "On Mon, Nov 24, 2008 at 12:52 PM, Brad Nicholson\n<[email protected]> wrote:\n>> I just ran it in a loop over and over on my 8 core opteron server and\n>> it ran the load factor up by almost exactly 1.0. Under our normal\n>> daily load, it sits at 1.9 to 2.5, and it climbed to 2.9 under the new\n>> load of running that query over and over. So, it doesn't seem to be\n>> blocking or anything.\n>\n> The internal docs for pg_buffercache_pages.c state:\n>\n> \"To get a consistent picture of the buffer state, we must lock all\n> partitions of the buffer map. Needless to say, this is horrible\n> for concurrency. 
Must grab locks in increasing order to avoid\n> possible deadlocks.\"\n\nWell, the pg hackers tend to take a parnoid view (it's a good thing\nTM) on things like this. My guess is that the period of time for\nwhich pg_buffercache takes locks on the buffer map are short enough\nthat it isn't a real big deal on a fast enough server. On mine, it\ncertainly had no real negative effects for the 5 minutes or so it was\nrunning in a loop. None I could see, and we run hundreds of queries\nper second on our system.\n\nOf course, for certain other types of loads it could be a much bigger\nissue. But for our load, on our machine, it was virtually\nunnoticeable.\n", "msg_date": "Mon, 24 Nov 2008 14:24:47 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." }, { "msg_contents": "On Mon, 24 Nov 2008, Kevin Kempter wrote:\n\n> Currently I run this each 15min via cron:\n> insert into buffercache_stats select now(), isdirty, count(*) as buffers,\n> (count(*) * 8192) as memory from pg_buffercache group by 1,2;\n\nThis query isn't going to save the information you need to figure out if \nshared_buffers is working effectively for you. You'll need the usage \ncount information (if you're on 8.3) and a notion of what tables it's \ncaching large amounts of data from to do that. What's going to happen \nwith the above is that you'll watch shared_buffers grow to fill whatever \nsize you've allocated it, and then the only useful information you'll be \nsaving is what percentage of that happens to be dirty. If it happens that \nthe working set of everything you touch is smaller than shared_buffers, \nyou'll find that out, but you don't need this query to figure that \nout--just look at the amount of shared memory the postgres processes are \nusing with ipcs or top and you can find where that peaks at.\n\nI've got some queries that I find more useful, along with a general \nsuggested methodology for figuring out if you've sized the buffers \ncorrectly, in my \"Inside the PostgreSQL Buffer Cache\" presentation at at \nhttp://www.westnet.com/~gsmith/content/postgresql\n\n> Does anyone have any suggestions per a better approach or maybe a way to\n> improve the performance for the above query ?\n\nIt's possible to obtain this data in a rather messy but faster way by not \ntaking all those locks. Someone even submitted a patch to do just that: \nhttp://archives.postgresql.org/pgsql-general/2008-02/msg00453.php\n\nI wouldn't recommend doing that, and it's really the only way to make this \nrun faster.\n\nIt's nice to grab a snapshot of the buffer cache every now and then just \nto see what tends to accumulate high usage counts and such. I predict \nthat trying to collect it all the time will leave you overwhelmed with \ndata it's hard to analyze and act on. I'd suggest a snapshot per hour, \nspread across one normal day every week, would be more than enough data to \nfigure out how your system is behaving. If you want something worth \nsaving every 15 minutes, you should save a snapshot of the data in \npg_statio_user_tables.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 24 Nov 2008 23:34:03 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." 
}, { "msg_contents": "On Mon, 24 Nov 2008, Scott Marlowe wrote:\n\n> My guess is that the period of time for which pg_buffercache takes locks \n> on the buffer map are short enough that it isn't a real big deal on a \n> fast enough server.\n\nAs the server involved gets faster, the amount of time the locks are \ntypically held for drops.\n\nAs your shared_buffers allocation increases, that amount of time goes up.\n\nSo how painful the overhead is depends on how fast your CPU is relative to \nhow much memory is in it. Since faster systems tend to have more RAM in \nthem, too, it's hard to say whether the impact will be noticable.\n\nAlso, noting that the average case isn't impacted much isn't the concern \nhere. The problem is how much having all partition locks held will \nincrease impact worst-case latency.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 24 Nov 2008 23:40:52 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." }, { "msg_contents": "On Mon, Nov 24, 2008 at 9:40 PM, Greg Smith <[email protected]> wrote:\n> On Mon, 24 Nov 2008, Scott Marlowe wrote:\n>\n>> My guess is that the period of time for which pg_buffercache takes locks\n>> on the buffer map are short enough that it isn't a real big deal on a fast\n>> enough server.\n>\n> As the server involved gets faster, the amount of time the locks are\n> typically held for drops.\n>\n> As your shared_buffers allocation increases, that amount of time goes up.\n>\n> So how painful the overhead is depends on how fast your CPU is relative to\n> how much memory is in it. Since faster systems tend to have more RAM in\n> them, too, it's hard to say whether the impact will be noticable.\n>\n> Also, noting that the average case isn't impacted much isn't the concern\n> here. The problem is how much having all partition locks held will increase\n> impact worst-case latency.\n\nTrue. I was just looking to see how it impacted my servers. Just\nFYI, it's an 8 core opteron 2.1GHz with 32 Gig 667MHz DDR2 ram. It\nruns on a fast RAID-10 set (12 15k drives under an areca 1680, but I\ndon't know if that matters that much here.) It can pretty easily about\n400 or so active transactions and still be responsive, but noticeably\nslower. At anything under about 50 or so transactions it's still\nquite fast.\n\nIt's configured to have 8Gig of the 32Gig allocated as shared buffers,\nand a buffercache query takes about 1 second to run. Is the\nshared_mem locked all that time? And is it only locked against\nwrites?\n", "msg_date": "Mon, 24 Nov 2008 21:54:00 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." }, { "msg_contents": "Scott Marlowe wrote:\n> On Mon, Nov 24, 2008 at 12:52 PM, Brad Nicholson\n> <[email protected]> wrote:\n> \n>>> I just ran it in a loop over and over on my 8 core opteron server and\n>>> it ran the load factor up by almost exactly 1.0. Under our normal\n>>> daily load, it sits at 1.9 to 2.5, and it climbed to 2.9 under the new\n>>> load of running that query over and over. So, it doesn't seem to be\n>>> blocking or anything.\n>>> \n>> The internal docs for pg_buffercache_pages.c state:\n>>\n>> \"To get a consistent picture of the buffer state, we must lock all\n>> partitions of the buffer map. Needless to say, this is horrible\n>> for concurrency. 
Must grab locks in increasing order to avoid\n>> possible deadlocks.\"\n>> \n>\n> Well, the pg hackers tend to take a parnoid view (it's a good thing\n> TM) on things like this. My guess is that the period of time for\n> which pg_buffercache takes locks on the buffer map are short enough\n> that it isn't a real big deal on a fast enough server. On mine, it\n> certainly had no real negative effects for the 5 minutes or so it was\n> running in a loop. None I could see, and we run hundreds of queries\n> per second on our system.\n>\n> Of course, for certain other types of loads it could be a much bigger\n> issue. But for our load, on our machine, it was virtually\n> unnoticeable.\n>\n> \nYeah, I wouldn't worry about accessing it every 15 minutes! I put the \ncomment there to make it clear that (like pg_locks) selecting from it \n*very frequently* could effect performance.\n\nCheers\n\nMark\n", "msg_date": "Wed, 26 Nov 2008 08:37:57 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Monitoring buffercache..." } ]
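As suggested above, the buffer cache hit ratio can be tracked from the ordinary statistics views without taking the buffer-partition locks that pg_buffercache needs. A sketch of both levels of detail (note that blks_hit counts only PostgreSQL's own shared buffers, not hits in the OS page cache):

    -- overall hit ratio, one row per database
    SELECT datname,
           round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS hit_pct
      FROM pg_stat_database
     ORDER BY datname;

    -- per-table I/O detail, cheap enough to snapshot every 15 minutes
    SELECT schemaname, relname,
           heap_blks_hit, heap_blks_read,
           idx_blks_hit, idx_blks_read
      FROM pg_statio_user_tables
     ORDER BY heap_blks_read DESC;

Snapshotting these into a table, the way buffercache_stats is used above, gives the same trend data without touching the buffer map.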
[ { "msg_contents": "Hi,\n\nI have a problem with large objects in postgresql 8.1: The performance of\nloading large objects into a database goes way down after a few days of\noperation.\n\nI have a cron job kicking in twice a day, which generates and loads around\n6000 large objects of 3.7MB each. Each night, old data is deleted, so\nthere is never more than 24000 large object in the database.\n\nIf I start loading on a freshly installed database, load times for this is\naround 13 minutes, including generating the data to be stored. If I let\nthe database run for a few days, this takes much longer. After one or two\ndays, this goes down to almost an hour, with logs indicating that this\nextra time is solely spent transferring the large objects from file to\ndatabase.\n\nTurning autovacuum on or off seems to have no effect on this.\n\nI have only made the following changes to the default postgresql.conf file:\nmax_fsm_pages = 25000000\nvacuum_cost_delay = 10\ncheckpoint_segments = 256\n\nSo, my question for you is: Why does this happen, and what can I do about it?\n\nRegards,\n\nVegard B�nes\n\n\n", "msg_date": "Tue, 25 Nov 2008 12:54:23 -0000 (GMT)", "msg_from": "\"=?iso-8859-1?Q?Vegard_B=F8nes?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Deteriorating performance when loading large objects" }, { "msg_contents": "Vegard Bønes wrote:\n> Hi,\n> \n> I have a problem with large objects in postgresql 8.1: The performance of\n> loading large objects into a database goes way down after a few days of\n> operation.\n> \n> I have a cron job kicking in twice a day, which generates and loads around\n> 6000 large objects of 3.7MB each. Each night, old data is deleted, so\n> there is never more than 24000 large object in the database.\n\n> So, my question for you is: Why does this happen, and what can I do about it?\n\nTry putting a \"vacuumdb -zf\" command as a cron job after the data is\ndeleted.", "msg_date": "Tue, 25 Nov 2008 14:17:47 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deteriorating performance when loading large objects" }, { "msg_contents": "\"=?iso-8859-1?Q?Vegard_B=F8nes?=\" <[email protected]> writes:\n> I have a problem with large objects in postgresql 8.1: The performance of\n> loading large objects into a database goes way down after a few days of\n> operation.\n\n> I have a cron job kicking in twice a day, which generates and loads around\n> 6000 large objects of 3.7MB each. Each night, old data is deleted, so\n> there is never more than 24000 large object in the database.\n\nAre you sure you're deleting the large objects themselves (ie,\nlo_unlink), and not just deleting some references to them?\n\nA manual \"vacuum verbose\" on pg_largeobject might be informative.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Nov 2008 08:36:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deteriorating performance when loading large objects " }, { "msg_contents": "> \"=?iso-8859-1?Q?Vegard_B=F8nes?=\" <[email protected]> writes:\n>> I have a problem with large objects in postgresql 8.1: The performance\n>> of loading large objects into a database goes way down after a few\n>> days of operation.\n>\n>> I have a cron job kicking in twice a day, which generates and loads\n>> around 6000 large objects of 3.7MB each. 
Each night, old data is\n>> deleted, so there is never more than 24000 large object in the\n>> database.\n>\n> Are you sure you're deleting the large objects themselves (ie,\n> lo_unlink), and not just deleting some references to them?\n>\n> A manual \"vacuum verbose\" on pg_largeobject might be informative.\n\n\nI do call lo_unlink via a trigger function. Also, a SELECT count(distinct\nloid) FROM pg_largeobject yields the same result as a similar call to the\ntable which references the large objects.\n\nRunning VACUUM VERBOSE pg_largeobject took quite some time. Here's the\noutput:\n\nINFO: vacuuming \"pg_catalog.pg_largeobject\"\nINFO: index \"pg_largeobject_loid_pn_index\" now contains 11060658 row\nversions in 230587 pages\nDETAIL: 178683 index pages have been deleted, 80875 are currently reusable.\nCPU 0.92s/0.10u sec elapsed 199.38 sec.\nINFO: \"pg_largeobject\": found 0 removable, 11060658 nonremovable row\nversions in 6849398 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 84508215 unused item pointers.\n0 pages are entirely empty.\nCPU 0.98s/0.10u sec elapsed 4421.17 sec.\nVACUUM\n\nI will try to run VACUUM ANALYZE FULL after the next delete tonight, as\nsuggested by Ivan Voras in another post. But as I understand it, this will\nput an exclusive lock on whatever table is being vacuumed, so it is not\nreally an option for the database in question, as it needs to be\naccessitble 24 hours a day.\n\nIs there any other possible solution to this?\n\nAs a side note, I have noticed that loading times seem to have stabilized\nat just above an hour.\n\nRegards,\n\nVegard B�nes\n\n\n\n", "msg_date": "Thu, 27 Nov 2008 19:04:45 -0000 (GMT)", "msg_from": "\"=?iso-8859-1?Q?Vegard_B=F8nes?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deteriorating performance when loading large objects" }, { "msg_contents": "\"=?iso-8859-1?Q?Vegard_B=F8nes?=\" <[email protected]> writes:\n> Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the\n> output:\n\n> INFO: vacuuming \"pg_catalog.pg_largeobject\"\n> INFO: index \"pg_largeobject_loid_pn_index\" now contains 11060658 row\n> versions in 230587 pages\n> DETAIL: 178683 index pages have been deleted, 80875 are currently reusable.\n> CPU 0.92s/0.10u sec elapsed 199.38 sec.\n> INFO: \"pg_largeobject\": found 0 removable, 11060658 nonremovable row\n> versions in 6849398 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 84508215 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.98s/0.10u sec elapsed 4421.17 sec.\n> VACUUM\n\nHmm ... although you have no dead rows now, the very large number of\nunused item pointers suggests that there were times in the past when\npg_largeobject didn't get vacuumed often enough. You need to look at\nyour vacuuming policy. If you're using autovacuum, it might need to have\nits parameters adjusted. Otherwise, how often are you vacuuming, and\nare you doing it as superuser?\n\n> I will try to run VACUUM ANALYZE FULL after the next delete tonight, as\n> suggested by Ivan Voras in another post.\n\nActually, a CLUSTER might be more effective.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Nov 2008 16:29:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deteriorating performance when loading large objects " }, { "msg_contents": "Tom Lane schrieb:\n> \"=?iso-8859-1?Q?Vegard_B=F8nes?=\" <[email protected]> writes:\n> \n>> Running VACUUM VERBOSE pg_largeobject took quite some time. 
Here's the\n>> output:\n>> \n>\n> \n>> INFO: vacuuming \"pg_catalog.pg_largeobject\"\n>> INFO: index \"pg_largeobject_loid_pn_index\" now contains 11060658 row\n>> versions in 230587 pages\n>> DETAIL: 178683 index pages have been deleted, 80875 are currently reusable.\n>> CPU 0.92s/0.10u sec elapsed 199.38 sec.\n>> INFO: \"pg_largeobject\": found 0 removable, 11060658 nonremovable row\n>> versions in 6849398 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 84508215 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.98s/0.10u sec elapsed 4421.17 sec.\n>> VACUUM\n>> \n>\n> Hmm ... although you have no dead rows now, the very large number of\n> unused item pointers suggests that there were times in the past when\n> pg_largeobject didn't get vacuumed often enough. You need to look at\n> your vacuuming policy. If you're using autovacuum, it might need to have\n> its parameters adjusted. Otherwise, how often are you vacuuming, and\n> are you doing it as superuser?\n>\n> \n>> I will try to run VACUUM ANALYZE FULL after the next delete tonight, as\n>> suggested by Ivan Voras in another post.\n>> \n>\n> Actually, a CLUSTER might be more effective.\n>\n> \t\t\tregards, tom lane\n>\n> \n\nDoes CLUSTER really help here? On my 8.2 database, I get:\nCLUSTER pg_largeobject_loid_pn_index on pg_largeobject ;\nERROR: \"pg_largeobject\" is a system catalog\n\n\nHas this changed in >= 8.3?\n", "msg_date": "Fri, 28 Nov 2008 09:30:40 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deteriorating performance when loading large objects" }, { "msg_contents": "Tom Lane wrote:\n>> INFO: vacuuming \"pg_catalog.pg_largeobject\"\n>> INFO: index \"pg_largeobject_loid_pn_index\" now contains 11060658 row\n>> versions in 230587 pages\n>> DETAIL: 178683 index pages have been deleted, 80875 are currently reusable.\n>> CPU 0.92s/0.10u sec elapsed 199.38 sec.\n>> INFO: \"pg_largeobject\": found 0 removable, 11060658 nonremovable row\n>> versions in 6849398 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 84508215 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.98s/0.10u sec elapsed 4421.17 sec.\n>> VACUUM\n> \n> Hmm ... although you have no dead rows now, the very large number of\n> unused item pointers suggests that there were times in the past when\n> pg_largeobject didn't get vacuumed often enough. You need to look at\n> your vacuuming policy. If you're using autovacuum, it might need to have\n> its parameters adjusted. Otherwise, how often are you vacuuming, and\n> are you doing it as superuser?\n> \n>> I will try to run VACUUM ANALYZE FULL after the next delete tonight, as\n>> suggested by Ivan Voras in another post.\n> \n> Actually, a CLUSTER might be more effective.\n> \n> \t\t\tregards, tom lane\n> \n\nI have autovacuum turned on, but the high number of unused item pointers \nmay be related to some experimentation I did earlier on the same \ndatabase with only vacuuming after the nightly delete. I have, however, \nseen the same performance degradation on a database that had only ever \nbeen vacuumed by autovacuum.\n\nI am a bit unsure about what parameters to adjust in order to maintain a \ngood loading performance for bulk loading. Do you have any suggestions?\n\nAlso, VACUUM ANALYZE FULL has been running for 15 hours now, blocking \nthe loading of today's data. It will be interesting to see how the \ndatabase will work once it is completed. 
Is there any point in trying to \nuse CLUSTER instead if this does not give any result?\n\n\nRegards,\n\nVegard Bønes\n\n", "msg_date": "Fri, 28 Nov 2008 16:32:24 +0100", "msg_from": "=?ISO-8859-1?Q?Vegard_B=F8nes?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deteriorating performance when loading large objects" } ]
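The two points the thread keeps coming back to are unlinking the large objects themselves when their reference rows go away, and vacuuming pg_largeobject often enough, as a superuser. A rough sketch of both; the table name lob_holder and column loid are placeholders, not taken from the poster's schema:

    CREATE OR REPLACE FUNCTION unlink_lo() RETURNS trigger AS $$
    BEGIN
        PERFORM lo_unlink(OLD.loid);   -- drop the large object, not just the reference row
        RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER cleanup_lo
        BEFORE DELETE ON lob_holder
        FOR EACH ROW EXECUTE PROCEDURE unlink_lo();

    -- cron entry after the nightly delete, run as a superuser:
    -- 30 3 * * *  vacuumdb -z -t pg_largeobject -U postgres mydb

If orphaned objects do slip through anyway, the contrib vacuumlo utility can remove them in bulk.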
[ { "msg_contents": "I have a problem with partitioning and I'm wondering if anyone can provide\nsome insight. I'm trying to find the max value of a column across multiple\npartitions. The query against the partition set is quite slow while queries\nagainst child partitions is very fast!\n\n\nI setup a basic Range Partition table definition.\n A parent table: Data { dataID, sensorID, value, ts }\n child tables Data_YYYY_WEEKNO { dataID, sensorID, value, ts} inherited\nfrom Data\n Each child tables has a primary key index on dataID and a\ncomposite index on (sensorID, ts).\n Each child has check constraints for the week range identified in\nthe table name (non overlapping)\n\nI want to perform a simple operation: select the max ts (timestamp) giving\na sensorID. Given my indexs on the table, this should be a simple and fast\noperation.\n\n\nDB=# EXPLAIN ANALYZE select max(ts) from \"Data\" where valid=true and\n\"sensorID\"=8293 ;\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=334862.92..334862.93 rows=1 width=8) (actual\ntime=85183.381..85183.383 rows=1 loops=1)\n -> Append (cost=2.30..329397.68 rows=2186096 width=8) (actual\ntime=1.263..76592.755 rows=2205408 loops=1)\n -> Bitmap Heap Scan on \"Data\" (cost=2.30..8.84 rows=3 width=8)\n(actual time=0.027..0.027 rows=0 loops=1)\n Recheck Cond: (\"sensorID\" = 8293)\n Filter: valid\n -> Bitmap Index Scan on \"def_data_sensorID_ts\"\n(cost=0.00..2.30 rows=6 width=0) (actual time=0.021..0.021 rows=0 loops=1)\n Index Cond: (\"sensorID\" = 8293)\n -> *Index Scan using \"Data_2008_01_sensorID_ts_index\" on\n\"Data_2008_01\" \"Data\"* (cost=0.00..4.27 rows=1 width=8) (actual\ntime=0.014..0.014 rows=0 loops=1)\n Index Cond: (\"sensorID\" = 8293)\n Filter: valid\n -> *Bitmap Heap Scan on \"Data_2008_02\" \"Data\"* (cost=3.01..121.08\nrows=98 width=8) (actual time=0.017..0.017 rows=0 loops=1)\n Recheck Cond: (\"sensorID\" = 8293)\n Filter: valid\n -> Bitmap Index Scan on \"Data_2008_02_sensorID_ts_index\"\n(cost=0.00..2.99 rows=98 width=0) (actual time=0.011..0.011 rows=0 loops=1)\n Index Cond: (\"sensorID\" = 8293)\n.\n.\n. (omitted a list of all partitions with same as data above)\n.\n Total runtime: 85188.694 ms\n\n\n\n\nWhen I query against a specific partition:\n\n\nDB=# EXPLAIN ANALYZE select max(ts) from \"Data_2008_48\" where valid=true\nand \"sensorID\"=8293 ;\n\nQUERY\nPLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.10..0.11 rows=1 width=0) (actual time=3.830..3.832 rows=1\nloops=1)\n InitPlan\n -> Limit (cost=0.00..0.10 rows=1 width=8) (actual time=3.817..3.819\nrows=1 loops=1)\n -> Index Scan Backward using \"Data_2008_48_sensorID_ts_index\" on\n\"Data_2008_48\" (cost=0.00..15304.55 rows=148959 width=8) (actual\ntime=3.813..3.813 rows=1 loops=1)\n Index Cond: (\"sensorID\" = 8293)\n Filter: ((ts IS NOT NULL) AND valid)\n Total runtime: 0.225 ms\n\n\nThe query plan against the child partition makes sense - Uses the index to\nfind the max value. The query plan for the partitions uses a combination of\nbitmap heap scans and index scans.\nWhy would the query plan choose to use a bitmap heap scan after bitmap index\nscan or is that the best choice? (what is it doing?) 
and what can I do to\nspeed up this query?\n\nAs a sanity check I did a union query of all partitions to find the max(ts).\nMy manual union query executed in 13ms vs the query against the parent table\nthat was 85,188ms!!!.\n\n\n\nGreg Jaman\n", "msg_date": "Tue, 25 Nov 2008 20:07:46 -0800", "msg_from": "\"Greg Jaman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partition table query performance" }, { "msg_contents": "\"Greg Jaman\" <[email protected]> writes:\n\n> I have a problem with partitioning and I'm wondering if anyone can provide\n> some insight.   I'm trying to find the max value of a column across multiple\n> partitions.  The query against the partition set is quite slow while queries\n> against child partitions is very fast!\n\nI'm afraid this is a known problematic use case of Postgres's current\npartitioning support. Postgres is not capable of finding the plan which you're\nundoubtedly looking for where it uses the same plan as your child table query\niterating over the partitions.\n\nThere are several groups working to improve this in different ways but none of\nthem appear to be on track to be in 8.4 so it will be 8.5 or later before they\nappear. Sorry.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Thu, 27 Nov 2008 00:48:14 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table query performance" }, { "msg_contents": "Thanks Gregory,\n\nI was on IRC yesterday and a few people indicated the same thing...\n\nSearching for the last reading is a very important function for our\ndatabase.  I wrote the below function searches all child tables for the\nmax.  It is not optimization because it doesn't omit tables by look at the\ncheck constraints on child tables to see if the last found max is greater\nthan the constraints.
Right now this function executes in 50ms vs the 80+\nfor the same query against the partition set.\n\n\ncreate or replace function Data_max(in_sensorID integer) returns bigint AS\n$$\nDECLARE\nchildtable RECORD;\nchildres RECORD;\nmax_dataID bigint := NULL;\nmax_ts timestamp without time zone;\nBEGIN\n FOR childtable in select pc.relname as relname from pg_class pc join\npg_inherits pi on pc.oid=pi.inhrelid where inhparent=(select oid from\npg_class where relname='Data')\n LOOP\n EXECUTE ' SELECT \"dataID\", ts FROM ' || quote_ident(\nchildtable.relname )\n || ' WHERE \"sensorID\"=' || quote_literal(in_sensorID) || ' order\nby ts desc limit 1 ' INTO childres;\n IF childres is not NULL THEN\n IF max_ts is NULL OR childres.ts > max_ts THEN\n max_ts:= childres.ts;\n max_dataID:= childres.\"dataID\";\n END IF;\n END IF;\n END LOOP;\n return max_dataID;\nEND;\n$$\nlanguage 'plpgsql';\n\n\n\nOn Wed, Nov 26, 2008 at 4:48 PM, Gregory Stark <[email protected]>wrote:\n\n> \"Greg Jaman\" <[email protected]> writes:\n>\n> > I have a problem with partitioning and I'm wondering if anyone can\n> provide\n> > some insight. I'm trying to find the max value of a column across\n> multiple\n> > partitions. The query against the partition set is quite slow while\n> queries\n> > against child partitions is very fast!\n>\n> I'm afraid this is a known problematic use case of Postgres's current\n> partitioning support. Postgres is not capable of finding the plan which\n> you're\n> undoubtedly looking for where it uses the same plan as your child table\n> query\n> iterating over the partitions.\n>\n> There are several groups working to improve this in different ways but none\n> of\n> them appear to be on track to be in 8.4 so it will be 8.5 or later before\n> they\n> appear. Sorry.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's 24x7 Postgres support!\n>\n\nThanks Gregory,I was on IRC yesterday and a few people indicated the same thing...  Searching for the last reading is a very important function for our database.  I  wrote the below function searches all child tables for the max.  It is not optimization because it doesn't omit tables by look at the check constraints on child tables to see if the last found max is greater than the constraints.  Right now this function executes in 50ms vs the 80+ for the same query against the partition set.\ncreate or replace function Data_max(in_sensorID integer) returns bigint AS$$DECLAREchildtable RECORD;childres RECORD;max_dataID bigint := NULL;max_ts timestamp without time zone;BEGIN\n    FOR childtable in select pc.relname as relname from pg_class pc join pg_inherits pi on pc.oid=pi.inhrelid where inhparent=(select oid from pg_class where relname='Data')     LOOP        EXECUTE ' SELECT \"dataID\", ts  FROM ' || quote_ident( childtable.relname )\n            || ' WHERE \"sensorID\"=' || quote_literal(in_sensorID) || ' order by ts desc limit 1 ' INTO childres;        IF childres is not NULL  THEN            IF max_ts is NULL OR  childres.ts > max_ts THEN\n                max_ts:= childres.ts;                max_dataID:= childres.\"dataID\";            END IF;        END IF;    END LOOP;    return max_dataID;END;$$language 'plpgsql';\nOn Wed, Nov 26, 2008 at 4:48 PM, Gregory Stark <[email protected]> wrote:\n\"Greg Jaman\" <[email protected]> writes:\n\n> I have a problem with partitioning and I'm wondering if anyone can provide\n> some insight.   I'm trying to find the max value of a column across multiple\n> partitions.  
The query against the partition set is quite slow while queries\n> against child partitions is very fast!\n\nI'm afraid this is a known problematic use case of Postgres's current\npartitioning support. Postgres is not capable of finding the plan which you're\nundoubtedly looking for where it uses the same plan as your child table query\niterating over the partitions.\n\nThere are several groups working to improve this in different ways but none of\nthem appear to be on track to be in 8.4 so it will be 8.5 or later before they\nappear. Sorry.\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's 24x7 Postgres support!", "msg_date": "Thu, 27 Nov 2008 09:25:47 -0800", "msg_from": "\"Greg Jaman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partition table query performance" }, { "msg_contents": "Maybe this is an obviously dumb thing to do, but it looked reasonable to me. The problem is, the seemingly simple sort below causes a fairly powerful computer to completely freeze for 5-10 minutes. During the sort, you can't login, you can't use any shell sessions you already have open, the Apache server barely works, and even if you do \"nice -20 top\" before you start the sort, the top(1) command comes to a halt while the sort is proceeding! As nearly as I can tell, the sort operation is causing a swap storm of some sort -- nothing else in my many years of UNIX/Linux experience can cause a \"nice -20\" process to freeze.\n\nThe sort operation never finishes -- it's always killed by the system. Once it dies, everything returns to normal.\n\nThis is 8.3.0. (Yes, I'll upgrade soon.) Is this a known bug, or do I have to rewrite this query somehow? Maybe add indexes to all four columns being sorted?\n\nThanks!\nCraig\n\n\n=> explain select * from plus order by supplier_id, compound_id, units, price;\n QUERY PLAN \n-----------------------------------------------------------------------\n Sort (cost=5517200.48..5587870.73 rows=28268100 width=65)\n Sort Key: supplier_id, compound_id, units, price\n -> Seq Scan on plus (cost=0.00..859211.00 rows=28268100 width=65)\n\n=> \\d plus Table \"emol_warehouse_1.plus\"\n Column | Type | Modifiers \n---------------+---------------+-----------\n supplier_id | integer | \n supplier_name | text | \n compound_id | text | \n amount | text | \n units | text | \n price | numeric(12,2) | \n currency | text | \n description | text | \n sku | text | \nIndexes:\n \"i_plus_compound_id\" btree (supplier_id, compound_id)\n \"i_plus_supplier_id\" btree (supplier_id)\n\n\nmax_connections = 1000\nshared_buffers = 2000MB\nwork_mem = 256MB\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nsynchronous_commit = off\n#wal_sync_method = fdatasync\nwal_buffers = 256kB\ncheckpoint_segments = 30\neffective_cache_size = 4GB\n\nMachine: Dell, 8x64-bit CPUs, 8GB ram, Perc6i battery-backed RAID controller, 8 disks as RAID10\n", "msg_date": "Mon, 01 Dec 2008 21:49:12 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Sort causes system to freeze" }, { "msg_contents": "Craig James wrote:\n> Maybe this is an obviously dumb thing to do,\n\n... and it was. I answered my own question: The problem came from using psql(1) to do something I should have done with pg_dump.\n\n> but it looked reasonable to \n> me. The problem is, the seemingly simple sort below causes a fairly \n> powerful computer to completely freeze for 5-10 minutes. 
During the \n> sort, you can't login, you can't use any shell sessions you already have \n> open, the Apache server barely works, and even if you do \"nice -20 top\" \n> before you start the sort, the top(1) command comes to a halt while the \n> sort is proceeding! As nearly as I can tell, the sort operation is \n> causing a swap storm of some sort -- nothing else in my many years of \n> UNIX/Linux experience can cause a \"nice -20\" process to freeze.\n> \n> The sort operation never finishes -- it's always killed by the system. \n> Once it dies, everything returns to normal.\n> \n> This is 8.3.0. (Yes, I'll upgrade soon.) Is this a known bug, or do I \n> have to rewrite this query somehow? Maybe add indexes to all four \n> columns being sorted?\n> \n> Thanks!\n> Craig\n> \n> \n> => explain select * from plus order by supplier_id, compound_id, units, \n> price;\n> QUERY PLAN \n> -----------------------------------------------------------------------\n> Sort (cost=5517200.48..5587870.73 rows=28268100 width=65)\n> Sort Key: supplier_id, compound_id, units, price\n> -> Seq Scan on plus (cost=0.00..859211.00 rows=28268100 width=65)\n> \n> => \\d plus Table \"emol_warehouse_1.plus\"\n> Column | Type | Modifiers \n> ---------------+---------------+-----------\n> supplier_id | integer | supplier_name | text | \n> compound_id | text | amount | text | \n> units | text | price | numeric(12,2) | \n> currency | text | description | text | \n> sku | text | Indexes:\n> \"i_plus_compound_id\" btree (supplier_id, compound_id)\n> \"i_plus_supplier_id\" btree (supplier_id)\n> \n> \n> max_connections = 1000\n> shared_buffers = 2000MB\n> work_mem = 256MB\n> max_fsm_pages = 1000000\n> max_fsm_relations = 5000\n> synchronous_commit = off\n> #wal_sync_method = fdatasync\n> wal_buffers = 256kB\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n> \n> Machine: Dell, 8x64-bit CPUs, 8GB ram, Perc6i battery-backed RAID \n> controller, 8 disks as RAID10\n\nCraig\n\n", "msg_date": "Tue, 02 Dec 2008 00:27:32 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort causes system to freeze" }, { "msg_contents": "Don't reply to another message when starting a new thread. People will\nmiss your message.\n\nCraig James wrote:\n> Maybe this is an obviously dumb thing to do, but it looked reasonable to\n> me.\n\nLooks reasonable here too - except I'm not sure what I'd do with 2\nmillion rows of sorted table in my console. I'm guessing you're piping\nthe output into something.\n\n> The problem is, the seemingly simple sort below causes a fairly\n> powerful computer to completely freeze for 5-10 minutes. During the\n> sort, you can't login, you can't use any shell sessions you already have\n> open, the Apache server barely works, and even if you do \"nice -20 top\"\n> before you start the sort, the top(1) command comes to a halt while the\n> sort is proceeding! As nearly as I can tell, the sort operation is\n> causing a swap storm of some sort -- nothing else in my many years of\n> UNIX/Linux experience can cause a \"nice -20\" process to freeze.\n\nNothing should cause that to your machine. I've never seen \"top\" just\nfreeze unless you set up some sort of fork-bomb and ramp the load up so\nfast it can't cope. Oh, and nice-ing the client isn't going to do\nanything to the backend actually doing the sorting.\n\n> The sort operation never finishes -- it's always killed by the system. \n> Once it dies, everything returns to normal.\n\nYou're running out of memory then. 
It'll be the out-of-memory killer\n(assuming you're on Linux).\n\n> This is 8.3.0. (Yes, I'll upgrade soon.)\n\nMake \"soon\" more urgent than it has been up to now - no point in risking\nall your data to some already fixed bug is there? Unless you've been\ncarefully tracking the release notes and have established that there's\nno need in your precise scenario.\n\n> Is this a known bug, or do I\n> have to rewrite this query somehow? Maybe add indexes to all four\n> columns being sorted?\n\nIndexes won't necessarily help if you're sorting the whole table. Maybe\nif you had one on all four columns.\n\n> => explain select * from plus order by supplier_id, compound_id, units,\n> price;\n\n> max_connections = 1000\n> shared_buffers = 2000MB\n> work_mem = 256MB\n\nSo can you support (1000 * 256 * 2) + 2000 MB of RAM?\n\n> effective_cache_size = 4GB\n\n...while leaving 4GB free for disk caching?\n\n> Machine: Dell, 8x64-bit CPUs, 8GB ram, Perc6i battery-backed RAID\n> controller, 8 disks as RAID10\n\nIt appears not. Remember that work_mem is not only per-connection, a\nsingle query can use multiples of it (hence the *2 above). If you\ngenuinely have a lot of connections I'd drop it down to (say) 4MB to\nmake sure you don't swap on a regular basis (should probably be even\nlower to be truly safe).\n\nThen, for the odd case when you need a large value, issue a SET work_mem\nbefore the query.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 02 Dec 2008 09:59:04 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort causes system to freeze" }, { "msg_contents": "Craig James <[email protected]> writes:\n> Maybe this is an obviously dumb thing to do, but it looked reasonable\n> to me. The problem is, the seemingly simple sort below causes a\n> fairly powerful computer to completely freeze for 5-10 minutes.\n\ntrace_sort output might be informative.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Dec 2008 05:34:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort causes system to freeze " }, { "msg_contents": "\n>> Maybe this is an obviously dumb thing to do, but it looked reasonable to\n>> me.\n>\n> Looks reasonable here too - except I'm not sure what I'd do with 2\n> million rows of sorted table in my console. I'm guessing you're piping\n> the output into something.\n\n\tProbably it's psql that is choking from buffering the rows.\n\tIf you want to fetch that huge amount of data into a user application, a \nCURSOR is the best way to do so.\n", "msg_date": "Tue, 02 Dec 2008 12:54:22 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort causes system to freeze" } ]
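A minimal sketch of the cursor-based approach suggested above, using the "plus" table from this thread; the cursor name and the 10000-row batch size are arbitrary illustrations, and the SET LOCAL value simply mirrors the work_mem figure discussed above rather than being a recommendation:

BEGIN;
-- raise work_mem for this transaction only, instead of globally in postgresql.conf
SET LOCAL work_mem = '256MB';
-- let the server hold the sorted result; the client pulls it down in batches
DECLARE plus_cur CURSOR FOR
    SELECT * FROM plus ORDER BY supplier_id, compound_id, units, price;
FETCH FORWARD 10000 FROM plus_cur;   -- repeat until it returns no rows
CLOSE plus_cur;
COMMIT;

Fetching in batches keeps the client from buffering the entire 28-million-row result in memory, which is what appears to have exhausted RAM here; for a straight export, pg_dump, which the poster switched to, sidesteps the client-side buffering entirely.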
[ { "msg_contents": "Hello,\n\nI have following common situation:\n\nCategory IDs: about 50 000\nDocument IDs: about 3 000 000\nMany to many relationship.\nA document id have a relation with 10 up to 1000 category ids\nOne query, with input set of document ids, resulting set of category ids, having relation with input ids. (very often this query is called with more than 500k of input document ids)\n\nI use custom written datawarehouse, file storage, and memory kept \"id offset\" like indecies. So a query for all 3 million document ids, resulting almost all category ids take less than a second on desktop machine. Abstracting from concrete realization, query plan is like:\n1. for each input document id: look up an array by document id and retrive a list of ralated category ids.\n1.1 for each category id in the list: look up an array value by category id and mark it as found\n2. traverse category array to extract category ids marked as found\n\nI want to use as a data storage postgresql. Tried several data structures, testing btree, gin, gist indecies over them, but best achieved performance for a 10 times smaller dataset (10k cat ids, 100k doc ids, 1m relations) is slower more than 5 times.\n\nI read about postgresql bitmap indecies and \"data lookup\" when scanning indecies to get a value for current transaction. Downloaded latest v8.4 snapshot, compiled it, but as I see there is no bitmap index in it. Maybe if I download HEAD revision I will find them there, dont know.\n\nAnyway, I want to ask, have anyone faced similar situation, and is there any way to achive closer to optimal performance using postgresql functionality and extensibility?\n\nRegards,\nChavdar Kopoev\n\n", "msg_date": "Wed, 26 Nov 2008 17:01:40 +0200", "msg_from": "\"Chavdar Kopoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "many to many performance" } ]
[ { "msg_contents": "Hello,\n\nI have following common situation:\n\nCategory IDs: about 50 000\nDocument IDs: about 3 000 000\nMany to many relationship.\nA document id have a relation with 10 up to 1000 category ids\nOne query, with input set of document ids, resulting set of category ids, having relation with input ids. (very often this query is called with more than 500k of input document ids)\n\nI use custom written datawarehouse, file storage, and memory kept \"id offset\" like indecies. So a query for all 3 million document ids, resulting almost all category ids take less than a second on desktop machine. Abstracting from concrete realization, query plan is like:\n1. for each input document id: look up an array by document id and retrive a list of ralated category ids.\n1.1 for each category id in the list: look up an array value by category id and mark it as found\n2. traverse category array to extract category ids marked as found\n\nI want to use as a data storage postgresql. Tried several data structures, testing btree, gin, gist indecies over them, but best achieved performance for a 10 times smaller dataset (10k cat ids, 100k doc ids, 1m relations) is slower more than 5 times.\n\nI read about postgresql bitmap indecies and \"data lookup\" when scanning indecies to get a value for current transaction. Downloaded latest v8.4 snapshot, compiled it, but as I see there is no bitmap index in it. Maybe if I download HEAD revision I will find them there, dont know.\n\nAnyway, I want to ask, have anyone faced similar situation, and is there any way to achive closer to optimal performance using postgresql functionality and extensibility?\n\nRegards,\nChavdar Kopoev\n\n", "msg_date": "Wed, 26 Nov 2008 17:38:07 +0200", "msg_from": "\"Chavdar Kopoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "many to many performance" }, { "msg_contents": "Chavdar Kopoev wrote:\n\n> I want to use as a data storage postgresql. Tried several data structures, testing btree, gin, gist indecies over them, but best achieved performance for a 10 times smaller dataset (10k cat ids, 100k doc ids, 1m relations) is slower more than 5 times.\n\nCan you post your queries and table definitions so people trying to help\nyou know what you did / tried to do? A downloadable self contained\nexample might also be useful.\n\nPlease also post the output of `EXPLAIN' on your queries, eg:\n\nEXPLAIN SELECT blah, ... FROM blah;\n\n> I read about postgresql bitmap indecies and \"data lookup\" when scanning indecies to get a value for current transaction. Downloaded latest v8.4 snapshot, compiled it, but as I see there is no bitmap index in it. Maybe if I download HEAD revision I will find them there, dont know.\n\nBitmap index scans are an internal function that's used to combine two\nindexes on the fly during a query (or at least use both of them in one\ntable scan). You don't make a bitmap index, you just make two ordinary\nbtree indexes and let Pg take care of this for you.\n\nIf you query on the columns of interest a lot, you might want to use a\nmulti-column index instead.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 27 Nov 2008 02:40:07 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: many to many performance" } ]
[ { "msg_contents": "Hey all,\n\n \n\nThis may be more of a Linux question than a PG question, but I'm wondering\nif any of you have successfully allocated more than 8 GB of memory to PG\nbefore.\n\n \n\nI have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,\nand I've tried to commit half the memory to PG's shared buffer, but it seems\nto fail. I'm setting the kernel shared memory accordingly using sysctl,\nwhich seems to work fine, but when I set the shared buffer in PG and restart\nthe service, it fails if it's above about 8 GB. I actually have it\ncurrently set at 6 GB.\n\n \n\nI don't have the exact failure message handy, but I can certainly get it if\nthat helps. Mostly I'm just looking to know if there's any general reason\nwhy it would fail, some inherent kernel or db limitation that I'm unaware\nof. \n\n \n\nIf it matters, this DB is going to be hosting and processing hundreds of GB\nand eventually TB of data, it's a heavy read-write system, not transactional\nprocessing, just a lot of data file parsing (python/bash) and bulk loading.\nObviously the disks get hit pretty hard already, so I want to make the most\nof the large amount of available memory wherever possible. So I'm trying to\ntune in that direction.\n\n \n\nAny info is appreciated.\n\n \n\nThanks!\n\n\n\n\n\n\n\n\n\n\n\nHey all,\n \nThis may be more of a Linux question than a PG question, but\nI’m wondering if any of you have successfully allocated more than 8 GB of\nmemory to PG before.\n \nI have a fairly robust server running Ubuntu Hardy Heron, 24\nGB of memory, and I’ve tried to commit half the memory to PG’s\nshared buffer, but it seems to fail.  I’m setting the kernel shared\nmemory accordingly using sysctl, which seems to work fine, but when I set the\nshared buffer in PG and restart the service, it fails if it’s above about\n8 GB.  I actually have it currently set at 6 GB.\n \nI don’t have the exact failure message handy, but I\ncan certainly get it if that helps.  Mostly I’m just looking to know\nif there’s any general reason why it would fail, some inherent kernel or\ndb limitation that I’m unaware of.  \n \nIf it matters, this DB is going to be hosting and processing\nhundreds of GB and eventually TB of data, it’s a heavy read-write system,\nnot transactional processing, just a lot of data file parsing (python/bash) and\nbulk loading.  Obviously the disks get hit pretty hard already, so I want\nto make the most of the large amount of available memory wherever possible. \nSo I’m trying to tune in that direction.\n \nAny info is appreciated.\n \nThanks!", "msg_date": "Wed, 26 Nov 2008 15:09:55 -0700", "msg_from": "\"Ryan Hansen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Memory Allocation" }, { "msg_contents": "On Wednesday 26 November 2008, \"Ryan Hansen\" \n<[email protected]> wrote:\n> This may be more of a Linux question than a PG question, but I'm\n> wondering if any of you have successfully allocated more than 8 GB of\n> memory to PG before.\n>\n\nCentOS 5, 24GB shared_buffers on one server here. 
No problems.\n\n-- \nAlan\n", "msg_date": "Wed, 26 Nov 2008 14:18:12 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation" }, { "msg_contents": "Ryan Hansen wrote:\n>\n> Hey all,\n>\n> This may be more of a Linux question than a PG question, but I�m\n> wondering if any of you have successfully allocated more than 8 GB of\n> memory to PG before.\n>\n> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of\n> memory, and I�ve tried to commit half the memory to PG�s shared\n> buffer, but it seems to fail.\n>\n\nThough not sure why this is happening or whether it is normal, I would\nsuggest that such setting is maybe too high. From the Annotated\npostgresql.conf document at\n\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html,\n\nthe suggested range is 8 to 400MB. They specifically say that it\nshould never be set to more than 1/3 of the available memory, which\nin your case is precisely the 8GB figure (I guess that's just a\ncoincidence --- I doubt that the server would be written so that it\nfails to start if shared_buffers is more than 1/3 of available RAM)\n\nAnother important parameter that you don't mention is the\neffective_cache_size, which that same document suggests should\nbe about 2/3 of available memory. (this tells the planner the amount\nof data that it can \"probabilistically\" expect to reside in memory due\nto caching, and as such, the planner is likely to produce more\naccurate estimates and thus better query optimizations).\n\nMaybe you could set shared_buffers to, say, 1 or 2GB (that's already\nbeyond the recommended figure, but given that you have 24GB, it\nmay not hurt), and then effective_cache_size to 16GB or so?\n\nHTH,\n\nCarlos\n--\n\n", "msg_date": "Wed, 26 Nov 2008 17:47:06 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation" }, { "msg_contents": "\"Ryan Hansen\" <[email protected]> writes:\n> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,\n> and I've tried to commit half the memory to PG's shared buffer, but it seems\n> to fail. I'm setting the kernel shared memory accordingly using sysctl,\n> which seems to work fine, but when I set the shared buffer in PG and restart\n> the service, it fails if it's above about 8 GB.\n\nFails how? And what PG version is that?\n\nFWIW, while there are various schools of thought on how large to make\nshared_buffers, pretty much everybody agrees that half of physical RAM\nis not the sweet spot. What you're likely to get is maximal\ninefficiency with every active disk page cached twice --- once in kernel\nspace and once in shared_buffers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Nov 2008 17:58:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation " }, { "msg_contents": "Tuning for bulk loading:\n\nMake sure the Linux kernel paramters in /proc/sys/vm related to the page cache are set well.\nSet swappiness to 0 or 1.\nMake sure you understand and configure /proc/sys/vm/dirty_background_ratio\nand /proc/sys/vm/dirty_ratio well.\nWith enough RAM the default on some kernel versions is way, way off (40% of RAM with dirty pages! yuck).\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\nIf postgres is doing a lot of caching for you you probably want dirty_ratio at 10% or less, and you'll want the OS to start flushing to disk sooner rather than later. 
A dirty_background_ratio of 3% with 24GB of RAM is 720MB -- a pretty big buffer. I would not personally want this buffer to be larger than 5 seconds of max write speed of the disk I/O.\n\nYou'll need to tune your background writer to be aggressive enough to actually write data fast enough so that checkpoints don't suck, and tune your checkpoint size and settings as well. Turn on checkpoint logging on the database and run tests while looking at the output of those. Ideally, most of your batch writes have made it to the OS before the checkpoint, and the OS has actually started moving most of it to disk. If your settings are wrong, you'll have the data buffered twice, and most or nearly all of it will be in memory when the checkpoint happens, and the checkpoint will take a LONG time. The default Linux settings + default postgres settings + large shared_buffers will almost guarantee this situation for bulk loads. Both have to be configured with complementary settings. If you have a large postgres buffer, the OS buffer should be small and write more aggressively. If you have a small postgres buffer, the OS can be more lazy and cache much more.\n\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ryan Hansen\nSent: Wednesday, November 26, 2008 2:10 PM\nTo: [email protected]\nSubject: [PERFORM] Memory Allocation\n\nHey all,\n\nThis may be more of a Linux question than a PG question, but I'm wondering if any of you have successfully allocated more than 8 GB of memory to PG before.\n\nI have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory, and I've tried to commit half the memory to PG's shared buffer, but it seems to fail. I'm setting the kernel shared memory accordingly using sysctl, which seems to work fine, but when I set the shared buffer in PG and restart the service, it fails if it's above about 8 GB. I actually have it currently set at 6 GB.\n\nI don't have the exact failure message handy, but I can certainly get it if that helps. Mostly I'm just looking to know if there's any general reason why it would fail, some inherent kernel or db limitation that I'm unaware of.\n\nIf it matters, this DB is going to be hosting and processing hundreds of GB and eventually TB of data, it's a heavy read-write system, not transactional processing, just a lot of data file parsing (python/bash) and bulk loading. Obviously the disks get hit pretty hard already, so I want to make the most of the large amount of available memory wherever possible. So I'm trying to tune in that direction.\n\nAny info is appreciated.\n\nThanks!", "msg_date": "Wed, 26 Nov 2008 15:00:26 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation" }, { "msg_contents": ">>> Scott Carey <[email protected]> wrote: \n> Set swappiness to 0 or 1.\n \nWe recently converted all 72 remote county databases from 8.2.5 to\n8.3.4. In preparation we ran a test conversion of a large county over\nand over with different settings to see what got us the best\nperformance. Setting swappiness below the default degraded\nperformance for us in those tests for identical data, same hardware,\nno other changes.\n \nOur best guess is that code which really wasn't getting called got\nswapped out leaving more space in the OS cache, but that's just a\nguess.
Of course, I'm sure people would not be recommending it if\nthey hadn't done their own benchmarks to confirm that this setting\nactually improved things in their environments, so the lesson here is\nto test for your environment when possible.\n \n-Kevin\n", "msg_date": "Wed, 26 Nov 2008 17:09:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation" }, { "msg_contents": "Swappiness optimization is going to vary. Definitely test on your own.\n\nFor a bulk load database, with large page cache, swappines = 60 (default) is _GUARANTEED_ to force the OS to swap out some of Postgres while in heavy use. This is heavily dependent on the page cache size, work_mem size, and concurrency.\nI've had significantly increased performance setting this value low (1000x ! -- if your DB starts swapping postgres, you're performance-DEAD). The default has the OS targeting close to 60% of the memory for page cache. On a 32GB server, with 7GB postgres buffer cache, several concurrent queries reading GB's of data and using 500MB + work_mem (huge aggregates), the default swappiness will choose to page out postgres with about 19GB of disk page cache left to evict, with disastrous results. And that is a read-only test. Tests with writes can trigger it earlier if combined with bad dirty_buffers settings.\n\nThe root of the problem is that the Linux paging algorithm estimates that I/O for file read access is as costly as I/O for paging. A reasonable assumption for a desktop, a ridiculously false assumption for a large database with high capacity DB file I/O and a much lower capability swap file. Not only that -- page in is almost always near pure random reads, but DB I/O is often sequential. So losing 100M of cached db file takes a lot less time to scan back in than 100MB of the application.\n\nIf you do have enough other applications that are idle that take up RAM that should be pushed out to disk from time to time (perhaps your programs that are doing the bulk loading?) a higher value is useful. Although it is not exact, think of the swappiness value as the percentage of RAM that the OS would prefer page cache to applications (very roughly).\n\nThe more RAM you have and the larger your postgres memory usage, the lower the swappiness value should be. 60% of 24GB is ~14.5GB, If you have that much stuff that is in RAM that should be paged out to save space, try it.\n\nI currently use a value of 1, on a 32GB machine, and about 600MB of 'stuff' gets paged out normally, 1400MB under heavy load. This is a dedicated machine. Higher values page out more stuff that increases the cache size and helps performance a little, but under the heavy load, it hits the paging wall and falls over. The small improvement in performance when the system is not completely stressed is not worth risking hitting the wall for me.\n\n***For a bulk load database, one is optimizing for _writes_ and extra page cache doesn't help writes like it does reads.***\n\nWhen I use a machine with misc. other lower priority apps and less RAM, I have found larger values to be helpful.\n\nIf your DB is configured with a low shared_buffers and small work_mem, you probably want the OS to use that much memory for disk pages, and again a higher swappiness may be more optimal.\n\nLike all of these settings, tune to your application and test. Many of these settings are things that go hand in hand with others, but alone don't make as much sense. 
Tuning Postgres to do most of the caching and making the OS get out of the way is far different than tuning the OS to do as much caching work as possible and minimizing postgres. Which of those two strategies is best is highly application dependent, somewhat O/S dependent, and also hardware dependent.\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]]\nSent: Wednesday, November 26, 2008 3:09 PM\nTo: Ryan Hansen; [email protected]; Scott Carey\nSubject: Re: [PERFORM] Memory Allocation\n\n>>> Scott Carey <[email protected]> wrote:\n> Set swappiness to 0 or 1.\n\nWe recently converted all 72 remote county databases from 8.2.5 to\n8.3.4. In preparation we ran a test conversion of a large county over\nand over with different settings to see what got us the best\nperformance. Setting swappiness below the default degraded\nperformance for us in those tests for identical data, same hardware,\nno other changes.\n\nOur best guess is that code which really wasn't getting called got\nswapped out leaving more space in the OS cache, but that's just a\nguess. Of course, I'm sure people would not be recommending it if\nthey hadn't done their own benchmarks to confirm that this setting\nactually improved things in their environments, so the lesson here is\nto test for your environment when possible.\n\n-Kevin\n", "msg_date": "Wed, 26 Nov 2008 15:52:44 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation" }, { "msg_contents": "I'm hoping that through compare/contrast we might help someone start\ncloser to their own best values....\n \n>>> Scott Carey <[email protected]> wrote: \n> Tests with writes can trigger it earlier if combined with bad\ndirty_buffers \n> settings.\n \nWe've never, ever modified dirty_buffers settings from defaults.\n \n> The root of the problem is that the Linux paging algorithm estimates\nthat \n> I/O for file read access is as costly as I/O for paging. A\nreasonable \n> assumption for a desktop, a ridiculously false assumption for a large\n\n> database with high capacity DB file I/O and a much lower capability\nswap \n> file.\n \nOur swap file is not on lower speed drives.\n \n> If you do have enough other applications that are idle that take up\nRAM that \n> should be pushed out to disk from time to time (perhaps your programs\nthat \n> are doing the bulk loading?) a higher value is useful.\n \nBulk loading was ssh cat | psql.\n \n> The more RAM you have and the larger your postgres memory usage, the\nlower \n> the swappiness value should be.\n \nI think the test environment had 8 GB RAM with 256 MB in\nshared_buffers. For the conversion we had high work_mem and\nmaintenance_work_mem settings, and we turned fsync off, along with a\nfew other settings we would never using during production.\n \n> I currently use a value of 1, on a 32GB machine, and about 600MB of\n'stuff' \n> gets paged out normally, 1400MB under heavy load.\n \nOutside of bulk load, we've rarely seen anything swap, even under\nload.\n \n> ***For a bulk load database, one is optimizing for _writes_ and extra\npage \n> cache doesn't help writes like it does reads.***\n \nI'm thinking that it likely helps when indexing tables for which data\nhas recently been loaded. 
It also might help minimize head movement\nand/or avoid the initial disk hit for a page which subsequently gets\nhint bits set.\n \n> Like all of these settings, tune to your application and test.\n \nWe sure seem to agree on that.\n \n-Kevin\n", "msg_date": "Fri, 28 Nov 2008 16:28:42 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Allocation" } ]
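For anyone retracing this thread, the memory-related settings under discussion can be checked from inside the database; this is only a read-only sketch against the standard pg_settings view, and the commented figures are the ones debated above, not recommendations:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'effective_cache_size');
-- shared_buffers: most replies above suggest well under 50% of RAM, to avoid double-caching
-- effective_cache_size: an estimate of the OS cache, e.g. roughly 2/3 of RAM per the advice above
-- work_mem: allocated per sort/hash per backend, so large values multiply with max_connections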
[ { "msg_contents": "Hi All;\n\nI'm looking for tips / ideas per performance tuning some specific queries. \nThese are generally large tables on a highly active OLTP system \n(100,000 - 200,000 plus queries per day)\n\nFirst off, any thoughts per tuning inserts into large tables. I have a large \ntable with an insert like this:\n\ninsert into public.bigtab1 (text_col1, text_col2, id) values ...\n\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0)\n(1 row)\n\nThe query cost is low but this is one of the slowest statements per pgfouine\n\n\n\n\n\n\n\nNext we have a select count(*) that also one of the top offenders:\n\nselect count(*) from public.tab3 where user_id=31 \nand state='A' \nand amount>0;\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Aggregate (cost=3836.53..3836.54 rows=1 width=0)\n -> Index Scan using order_user_indx ontab3 user_id (cost=0.00..3834.29 \nrows=897 width=0)\n Index Cond: (idx_user_id = 31406948::numeric)\n Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))\n(4 rows)\n\nWe have an index on the user_id but not on the state or amount, \n\nadd index to amount ?\n\n\n\nThoughts ?\n\n\n\n\n\n\n", "msg_date": "Wed, 26 Nov 2008 21:21:04 -0700", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "performance tuning queries" }, { "msg_contents": "am Wed, dem 26.11.2008, um 21:21:04 -0700 mailte Kevin Kempter folgendes:\n> Next we have a select count(*) that also one of the top offenders:\n> \n> select count(*) from public.tab3 where user_id=31 \n> and state='A' \n> and amount>0;\n> \n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------\n> Aggregate (cost=3836.53..3836.54 rows=1 width=0)\n> -> Index Scan using order_user_indx ontab3 user_id (cost=0.00..3834.29 \n> rows=897 width=0)\n> Index Cond: (idx_user_id = 31406948::numeric)\n> Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))\n> (4 rows)\n> \n> We have an index on the user_id but not on the state or amount, \n> \n> add index to amount ?\n\nDepends.\n\n- Is the index on user_id a unique index?\n- how many different values are in the table for state, i.e., maybe an\n index on state can help\n- how many rows in the table with amount > 0? If almost all rows\n contains an amount > 0 an index can't help in this case\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 27 Nov 2008 07:12:26 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tuning queries" }, { "msg_contents": "\n\n> First off, any thoughts per tuning inserts into large tables. I have a \n> large\n> table with an insert like this:\n>\n> insert into public.bigtab1 (text_col1, text_col2, id) values ...\n>\n> QUERY PLAN\n> ------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0)\n> (1 row)\n>\n> The query cost is low but this is one of the slowest statements per \n> pgfouine\n\n\tPossible Causes of slow inserts :\n\n\t- slow triggers ?\n\t- slow foreign key checks ? 
(missing index on referenced table ?)\n\t- functional index on a slow function ?\n\t- crummy hardware (5 MB/s RAID cards, etc)\n\t- too many indexes ?\n\n> Next we have a select count(*) that also one of the top offenders:\n>\n> select count(*) from public.tab3 where user_id=31\n> and state='A'\n> and amount>0;\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------\n> Aggregate (cost=3836.53..3836.54 rows=1 width=0)\n> -> Index Scan using order_user_indx ontab3 user_id \n> (cost=0.00..3834.29\n> rows=897 width=0)\n> Index Cond: (idx_user_id = 31406948::numeric)\n> Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))\n> (4 rows)\n>\n> We have an index on the user_id but not on the state or amount,\n>\n> add index to amount ?\n\n\tCan we see EXPLAIN ANALYZE ?\n\n\tIn this case the ideal index would be multicolumn (user_id, state) or \n(user_id,amount) or (user_id,state,amount) but choosing between the 3 \ndepends on your data...\n\n\tYou could do :\n\nSELECT count(*), state, amount>0 FROM public.tab3 where user_id=31 GROUP \nBY state, amount>0;\n\n\tAnd post the results.\n", "msg_date": "Thu, 27 Nov 2008 09:38:36 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tuning queries" }, { "msg_contents": "Kevin Kempter schrieb:\n> Hi All;\n>\n> I'm looking for tips / ideas per performance tuning some specific queries. \n> These are generally large tables on a highly active OLTP system \n> (100,000 - 200,000 plus queries per day)\n>\n> First off, any thoughts per tuning inserts into large tables. I have a large \n> table with an insert like this:\n>\n> insert into public.bigtab1 (text_col1, text_col2, id) values ...\n>\n> QUERY PLAN \n> ------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0)\n> (1 row)\n>\n> The query cost is low but this is one of the slowest statements per pgfouine\n> \nDo you insert multiple values in one transaction, or one transaction per \ninsert?\n\n", "msg_date": "Thu, 27 Nov 2008 11:09:28 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance tuning queries" } ]
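A sketch of the composite-index option outlined above for the count(*) query; the index names are hypothetical, and which column combination (if any) pays off depends on the distribution that the suggested GROUP BY diagnostic reveals:

-- one of the multicolumn candidates mentioned above
CREATE INDEX tab3_user_state_amount_idx ON public.tab3 (user_id, state, amount);
-- re-check the plan and the actual runtime afterwards
EXPLAIN ANALYZE
SELECT count(*) FROM public.tab3
WHERE user_id = 31 AND state = 'A' AND amount > 0;

If the filter is nearly always state = 'A' and amount > 0, a partial index is a further variant to benchmark (this goes beyond what the thread itself proposes):

CREATE INDEX tab3_user_active_idx ON public.tab3 (user_id) WHERE state = 'A' AND amount > 0;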
[ { "msg_contents": "Hi Craig,\n\nThank you for your answer.\n\nHere are my test table and indecies definitions:\n\n-- document id to category id\ncreate table doc_to_cat (\n doc_id integer not null,\n cat_id integer not null\n) with (oids=false);\n\n-- Total 1m rows. 500k unique document ids. 20k unique category ids. Each doc_id refers to exactly two cat_id. \ninsert into doc_to_cat\nselect i/50*25 + i%50 as doc_id, i/50 as cat_id from generate_series(0, 1000000) i\n\ncreate index doc_to_cat__doc_id on doc_to_cat using btree (doc_id);\ncreate index doc_to_cat__cat_id on doc_to_cat using btree (cat_id);\n\n-- document id to array of category ids\ncreate table doc_to_cat_arr (\n doc_id integer not null,\n cat_arr integer[] not null\n) with (oids=false);\n\ninsert into doc_to_cat_arr\nselect doc_id, int_array_aggregate(cat_id) as cat_arr\nfrom doc_to_cat\ngroup by doc_id\n\ncreate index doc_to_cat_arr__doc_id on doc_to_cat_arr using btree (doc_id);\n\n-- category id to array of document ids\ncreate table cat_to_doc_arr (\n cat_id integer not null,\n doc_arr integer[] not null\n) with (oids=false);\n\ninsert into cat_to_doc_arr\nselect cat_id, int_array_aggregate(doc_id) as doc_arr\nfrom doc_to_cat\ngroup by cat_id\n\ncreate index cat_to_doc_arr__doc_arr__gin on cat_to_doc_arr using gin (doc_arr gin__int_ops);\n\n-- Query Ids\ncreate table query_ids (\n doc_id integer not null\n) with (oids=false);\n\n-- 200k test ids for query with\ninsert into query_ids\nselect generate_series(100000, 300000);\n\ncreate index query_ids__doc_id on query_ids using btree (doc_id);\n\nNow queries plans. We are looking for cat_id having relations with 200k doc ids:\n\nexplain analyze\nselect distinct cat_id from doc_to_cat join query_ids using (doc_id)\n\n\"Unique (cost=19303.84..19602.03 rows=20544 width=4) (actual time=1006.745..1190.472 rows=8002 loops=1)\"\n\" -> Sort (cost=19303.84..19452.93 rows=372735 width=4) (actual time=1006.743..1094.908 rows=400002 loops=1)\"\n\" Sort Key: doc_to_cat.cat_id\"\n\" Sort Method: quicksort Memory: 31039kB\"\n\" -> Merge Join (cost=2972.22..13785.04 rows=372735 width=4) (actual time=167.591..750.177 rows=400002 loops=1)\"\n\" Merge Cond: (query_ids.doc_id = doc_to_cat.doc_id)\"\n\" -> Index Scan using query_ids_doc_id on query_ids (cost=0.00..2912.05 rows=200001 width=4) (actual time=0.021..81.291 rows=200001 loops=1)\"\n\" -> Index Scan using doc_to_cat_doc_id on doc_to_cat (cost=0.00..14543.09 rows=1000001 width=8) (actual time=0.017..281.769 rows=599978 loops=1)\"\n\"Total runtime: 1195.397 ms\"\n\nexplain analyze\nselect distinct int_array_enum(cat_arr) from doc_to_cat_arr join query_ids using (doc_id)\n\"Unique (cost=13676.57..13836.57 rows=19732 width=29) (actual time=1061.490..1246.595 rows=8002 loops=1)\"\n\" -> Sort (cost=13676.57..13756.57 rows=200001 width=29) (actual time=1061.488..1150.451 rows=400002 loops=1)\"\n\" Sort Key: (int_array_enum(doc_to_cat_arr.cat_arr))\"\n\" Sort Method: quicksort Memory: 31039kB\"\n\" -> Merge Join (cost=2318.98..10859.01 rows=200001 width=29) (actual time=163.840..816.697 rows=400002 loops=1)\"\n\" Merge Cond: (doc_to_cat_arr.doc_id = query_ids.doc_id)\"\n\" -> Index Scan using doc_to_cat_arr_doc_id on doc_to_cat_arr (cost=0.00..11311.10 rows=500025 width=33) (actual time=0.022..359.673 rows=300002 loops=1)\"\n\" -> Index Scan using query_ids_doc_id on query_ids (cost=0.00..2912.05 rows=200001 width=4) (actual time=0.016..81.370 rows=200001 loops=1)\"\n\"Total runtime: 1251.613 ms\"\n\nexplain analyze\nselect cat_id from 
cat_to_doc_arr\nwhere doc_arr && (select int_array_aggregate(q.doc_id) from (select doc_id from query_ids limit 20000) as q)\nThis query should never be run - too slow even with limit 20k of input ids.\n\nSo .. final best result is more than 1 second (on fast machine) for test dataset 5 times less than needed. So I am far away from achieving good results.\nI have to ask again if anyone faced similar situation, and is there any way to achive closer to optimal performance using postgresql functionality and extensibility?\n\nChavdar Kopoev\n\n-----Original Message-----\nFrom: Craig Ringer [mailto:[email protected]]\nSent: 2008-11-26, 19:40:47\nTo: Chavdar Kopoev [mailto:[email protected]]\nSubject: Re: [PERFORM] many to many performance\n\nChavdar Kopoev wrote:\n\n> I want to use as a data storage postgresql. Tried several data structures, testing btree, gin, gist indecies over them, but best achieved performance for a 10 times smaller dataset (10k cat ids, 100k doc ids, 1m relations) is slower more than 5 times.\n\nCan you post your queries and table definitions so people trying to help\nyou know what you did / tried to do? A downloadable self contained\nexample might also be useful.\n\nPlease also post the output of `EXPLAIN' on your queries, eg:\n\nEXPLAIN SELECT blah, ... FROM blah;\n\n> I read about postgresql bitmap indecies and \"data lookup\" when scanning indecies to get a value for current transaction. Downloaded latest v8.4 snapshot, compiled it, but as I see there is no bitmap index in it. Maybe if I download HEAD revision I will find them there, dont know.\n\nBitmap index scans are an internal function that's used to combine two\nindexes on the fly during a query (or at least use both of them in one\ntable scan). You don't make a bitmap index, you just make two ordinary\nbtree indexes and let Pg take care of this for you.\n\nIf you query on the columns of interest a lot, you might want to use a\nmulti-column index instead.\n\n--\nCraig Ringer\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 27 Nov 2008 18:39:51 +0200", "msg_from": "\"Chavdar Kopoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: many to many performance" } ]
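One hedged variation to try against the schema above: in releases before 8.4, SELECT DISTINCT is implemented with a sort, whereas the equivalent GROUP BY can use hash aggregation, so rewriting the fastest query above may drop the 400k-row sort step from its plan. This is a sketch to benchmark, not a guaranteed win, and the extra composite index is likewise only a candidate to test:

-- same result set as the "select distinct cat_id ..." query above, but eligible for HashAggregate
EXPLAIN ANALYZE
SELECT cat_id
FROM doc_to_cat
JOIN query_ids USING (doc_id)
GROUP BY cat_id;

-- an additional composite index to benchmark against the two single-column ones
CREATE INDEX doc_to_cat__doc_cat ON doc_to_cat (doc_id, cat_id);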