threads |
---|
[
{
"msg_contents": "Hi,\n\nI know there exist Bitmap Index Scan and Bitmap Heap Scan in Postgres.\nWhat about implementing a bitmap index for explicit use (CREATE INDEX ...)?\nAny use cases?\nBitmap indexes work best on values with low cardinality (categorical\ndata), would be efficient in space and ready for logic operations.\n\nStefan\n\nP.S. Disclaimer (referring to my other thread about Hash): I'm not a\nbtree opposer :-> I'm just evaluating index alternatives.\n",
"msg_date": "Sun, 18 Sep 2011 21:45:50 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "What about implementing a bitmap index? Any use cases?"
},
{
"msg_contents": "On 18 September 2011 20:45, Stefan Keller <[email protected]> wrote:\n> Hi,\n>\n> I know there exist Bitmap Index Scan and Bitmap Heap Scan in Postgres.\n> What about implementing a bitmap index for explicit use (CREATE INDEX ...)?\n> Any use cases?\n> Bitmap indexes work best on values with low cardinality (categorical\n> data), would be efficient in space and ready for logic operations.\n>\n> Stefan\n>\n> P.S. Disclaimer (referring to my other thread about Hash): I'm not a\n> btree opposer :-> I'm just evaluating index alternatives.\n\nSearch the pgsql-hackers archives to read about an unsuccessful\nattempt to introduce on-disk bitmap indexes to Postgres.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Sun, 18 Sep 2011 21:06:32 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What about implementing a bitmap index? Any use cases?"
}
] |
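As background to the exchange above: PostgreSQL builds bitmap structures in memory from ordinary btree indexes at query time, which covers much of the low-cardinality use case without an on-disk bitmap access method. A minimal sketch, with invented table and index names, of what that looks like in a plan; the exact plan shape depends on selectivity and statistics.

```sql
-- Hypothetical schema: two low-cardinality columns, each with a plain btree index.
CREATE TABLE orders (id serial PRIMARY KEY, status text, region text);
CREATE INDEX orders_status_idx ON orders (status);
CREATE INDEX orders_region_idx ON orders (region);

-- At query time the planner may combine both indexes in memory:
EXPLAIN SELECT * FROM orders WHERE status = 'open' AND region = 'EU';
-- A typical plan (shape only, costs omitted):
--   Bitmap Heap Scan on orders
--     ->  BitmapAnd
--           ->  Bitmap Index Scan on orders_status_idx
--           ->  Bitmap Index Scan on orders_region_idx
```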
[
{
"msg_contents": "Hi,\n\nSorry if this is an odd question:\nI assume that Postgres indexes don't store records but only pointers\nto the data.\nThis means, that there is always an additional access needed (real table I/O).\nWould an index containing data records make sense?\n\nStefan\n",
"msg_date": "Sun, 18 Sep 2011 22:18:45 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index containing records instead of pointers to the data?"
},
{
"msg_contents": "On 18 September 2011 21:18, Stefan Keller <[email protected]> wrote:\n> Hi,\n>\n> Sorry if this is an odd question:\n> I assume that Postgres indexes don't store records but only pointers\n> to the data.\n> This means, that there is always an additional access needed (real table I/O).\n> Would an index containing data records make sense?\n\nYes, it's called a covering index, where the data required to produce\nresults for the query are entirely contained in the index. That\nshould be hopefully coming in 9.2.\n\nSee http://wiki.postgresql.org/wiki/Index-only_scans\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sun, 18 Sep 2011 21:21:52 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index containing records instead of pointers to the data?"
},
{
"msg_contents": "On 9/18/11 1:18 PM, Stefan Keller wrote:\n> Hi,\n>\n> Sorry if this is an odd question:\n> I assume that Postgres indexes don't store records but only pointers\n> to the data.\n> This means, that there is always an additional access needed (real table I/O).\n> Would an index containing data records make sense?\nSee my post entitled, \"How to make hash indexes fast\" for a solution to the additional-disk-access problem.\n\nCraig\n>\n> Stefan\n>\n\n",
"msg_date": "Sun, 18 Sep 2011 18:15:45 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index containing records instead of pointers to the\n data?"
}
] |
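A minimal sketch of the index-only ("covering") scan Thom describes, assuming PostgreSQL 9.2 or later and invented table/index names. Whether the heap is actually skipped also depends on the visibility map being reasonably current, which is why the VACUUM step is included.

```sql
-- Requires PostgreSQL 9.2 or later; all names here are hypothetical.
CREATE TABLE measurements (id bigserial PRIMARY KEY, taken_at timestamptz, value numeric);
INSERT INTO measurements (taken_at, value)
SELECT now() - (g || ' minutes')::interval, random()
FROM generate_series(1, 100000) g;

CREATE INDEX measurements_taken_at_idx ON measurements (taken_at);
VACUUM ANALYZE measurements;  -- keeps the visibility map current so heap fetches can be skipped

-- Selecting only indexed columns lets the planner answer from the index alone:
EXPLAIN SELECT taken_at FROM measurements WHERE taken_at >= now() - interval '1 day';
-- With an up-to-date visibility map the plan can be:
--   Index Only Scan using measurements_taken_at_idx on measurements
```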
[
{
"msg_contents": "Let's say that I want to do INSERT SELECT of 1,000 items into a table. The\ntable has some ints, varchars, TEXT and BLOB fields.\n\nWould the time that it takes, differ a great deal, depending on whether the\ntable has only 100,000 or 5,000,000 records?\n\nThanks\n\ni\n\nLet's say that I want to do INSERT SELECT of 1,000 items into a table. The table has some ints, varchars, TEXT and BLOB fields. Would the time that it takes, differ a great deal, depending on whether the table has only 100,000 or 5,000,000 records?\nThanksi",
"msg_date": "Mon, 19 Sep 2011 17:11:11 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres INSERT performance and scalability"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 4:11 PM, Igor Chudov <[email protected]> wrote:\n> Let's say that I want to do INSERT SELECT of 1,000 items into a table. The\n> table has some ints, varchars, TEXT and BLOB fields.\n> Would the time that it takes, differ a great deal, depending on whether the\n> table has only 100,000 or 5,000,000 records?\n\nDepends. Got any indexes? The more indexes you have to update the\nslower it's gonna be. You can test this, it's easy to create test\nrows like so:\n\ninsert into test select generate_series(1,10000);\n\netc. There's lots of examples floating around on how to do that. So,\nmake yourself a couple of tables and test it.\n",
"msg_date": "Mon, 19 Sep 2011 17:32:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres INSERT performance and scalability"
},
{
"msg_contents": "Igor,\n\n* Igor Chudov ([email protected]) wrote:\n> Would the time that it takes, differ a great deal, depending on whether the\n> table has only 100,000 or 5,000,000 records?\n\nYes, because PostgreSQL is going to copy the data. If you don't need or\nwant it to be copied, just use a view. I've never heard of any\nrelational database implementing 'copy on write' type semantics, if\nthat's what you're asking about. Databases, unlike applications with\ncode in memory that's constantly copied, are typically focused around\nminimizing duplication of data (since it all has to end up on disk at\nsome point). Not much point in having the overhead of COW for that kind\nof environment, I wouldn't think.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Mon, 19 Sep 2011 20:53:42 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres INSERT performance and scalability"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 6:32 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Sep 19, 2011 at 4:11 PM, Igor Chudov <[email protected]> wrote:\n> > Let's say that I want to do INSERT SELECT of 1,000 items into a table.\n> The\n> > table has some ints, varchars, TEXT and BLOB fields.\n> > Would the time that it takes, differ a great deal, depending on whether\n> the\n> > table has only 100,000 or 5,000,000 records?\n>\n> Depends. Got any indexes? The more indexes you have to update the\n> slower it's gonna be. You can test this, it's easy to create test\n> rows like so:\n>\n> insert into test select generate_series(1,10000);\n>\n> etc. There's lots of examples floating around on how to do that. So,\n> make yourself a couple of tables and test it.\n>\n\nWell, my question is, rather, whether the time to do a bulk INSERT of N\nrecords into a large table, would take substantially longer than a bulk\ninsert of N records into a small table. In other words, does the populating\ntime grow as the table gets more and more rows?\n\ni\n\nOn Mon, Sep 19, 2011 at 6:32 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Sep 19, 2011 at 4:11 PM, Igor Chudov <[email protected]> wrote:\n> Let's say that I want to do INSERT SELECT of 1,000 items into a table. The\n> table has some ints, varchars, TEXT and BLOB fields.\n> Would the time that it takes, differ a great deal, depending on whether the\n> table has only 100,000 or 5,000,000 records?\n\nDepends. Got any indexes? The more indexes you have to update the\nslower it's gonna be. You can test this, it's easy to create test\nrows like so:\n\ninsert into test select generate_series(1,10000);\n\netc. There's lots of examples floating around on how to do that. So,\nmake yourself a couple of tables and test it.\nWell, my question is, rather, whether the time to do a bulk INSERT of N records into a large table, would take substantially longer than a bulk insert of N records into a small table. In other words, does the populating time grow as the table gets more and more rows?\ni",
"msg_date": "Mon, 19 Sep 2011 20:11:44 -0500",
"msg_from": "Igor Chudov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres INSERT performance and scalability"
},
{
"msg_contents": "* Igor Chudov ([email protected]) wrote:\n> Well, my question is, rather, whether the time to do a bulk INSERT of N\n> records into a large table, would take substantially longer than a bulk\n> insert of N records into a small table. In other words, does the populating\n> time grow as the table gets more and more rows?\n\nOh, in that regard, the answer would generally be 'no'. PostgreSQL\nmaintains a table known as the 'free space map', where it keeps track of\nwhere there is 'free space' to insert data into a table. As someone\nelse mentioned, if there's a lot of indexes then it's possible that the\nincreased depth in the index due to the larger number of tuples might\nmean the larger table is slower, but I don't think it'd make a huge\ndifference, to be honest...\n\nAre you seeing that behavior? There's nothing like testing it to see\nexactly what happens, of course..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Mon, 19 Sep 2011 21:15:51 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres INSERT performance and scalability"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 7:53 PM, Stephen Frost <[email protected]> wrote:\n> Igor,\n>\n> * Igor Chudov ([email protected]) wrote:\n>> Would the time that it takes, differ a great deal, depending on whether the\n>> table has only 100,000 or 5,000,000 records?\n>\n> Yes, because PostgreSQL is going to copy the data. If you don't need or\n> want it to be copied, just use a view. I've never heard of any\n> relational database implementing 'copy on write' type semantics, if\n> that's what you're asking about. Databases, unlike applications with\n> code in memory that's constantly copied, are typically focused around\n> minimizing duplication of data (since it all has to end up on disk at\n> some point). Not much point in having the overhead of COW for that kind\n> of environment, I wouldn't think.\n\nIsn't the WAL basically COW?\n\n-- \nJon\n",
"msg_date": "Mon, 19 Sep 2011 20:21:38 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres INSERT performance and scalability"
},
{
"msg_contents": "* Jon Nelson ([email protected]) wrote:\n> Isn't the WAL basically COW?\n\neh..? No.. The WAL is used to record what changes are made to the\nvarious files in the database, it certainly isn't an kind of\n\"copy-on-write\" system, where we wait until a change is made to data\nbefore copying it..\n\nIf you INSERT .. SELECT, you're going to get the real data in the WAL,\nand also in the heap of the new table..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Mon, 19 Sep 2011 21:32:41 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres INSERT performance and scalability"
},
{
"msg_contents": "On 09/20/2011 09:21 AM, Jon Nelson wrote:\n> Isn't the WAL basically COW?\n\nNope, it's a lot more like a filesystem journal - though it includes all \ndata, not just metadata like filesystem journals usually do.\n\nNow, you could argue that PostgreSQL uses a copy-on-write like system to \nmaintain row versions so that already-running statements (or \nSERIALIZABLE transactions) don't see data from the future and to \nmaintain rollback data for uncommitted transactions. It's the *new* data \nthat gets written to the WAL, though, not the old data.\n\n(OK, if you have full_page_writes enabled you might get a mix of old and \nnew data written to WAL, but that's an implementation detail).\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 20 Sep 2011 13:04:56 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres INSERT performance and scalability"
}
] |
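One hedged way to test the question discussed above empirically, expanding on Scott's generate_series() suggestion. Table names, row counts, and the padding column are arbitrary; \timing is a psql meta-command, so run the script from psql and compare the reported times.

```sql
-- Run from psql with timing enabled; table names, row counts and padding are arbitrary.
\timing on

CREATE TABLE small_target (id int, payload text);
CREATE TABLE big_target   (id int, payload text);

-- Pre-populate the "large" table only.
INSERT INTO big_target SELECT g, repeat('x', 100) FROM generate_series(1, 5000000) g;

-- Same 1,000-row bulk insert against each table: compare the reported times.
INSERT INTO small_target SELECT g, repeat('x', 100) FROM generate_series(1, 1000) g;
INSERT INTO big_target   SELECT g, repeat('x', 100) FROM generate_series(1, 1000) g;

-- Repeat after adding an index, which is where table size starts to matter more.
CREATE INDEX big_target_id_idx ON big_target (id);
INSERT INTO big_target SELECT g, repeat('x', 100) FROM generate_series(1, 1000) g;
```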
[
{
"msg_contents": "I have a lot of wasted bytes in some tables.\n\nSomewhere I read that maybe auto-vacuum can't release space due to a low\nmax_fsm_pages setting.\n\n \n\nI want to increase it, but I don't found the param in the postgres.conf.\n\n \n\nThis param exists? If not? How can I deal with bloated tables?\n\n \n\nI have many tables with lot of insert/updates/delete (aprox. 5,000,000 per\nday)\n\n \n\nThanks!\n\n \n\n \n\n\nI have a lot of wasted bytes in some tables.Somewhere I read that maybe auto-vacuum can’t release space due to a low max_fsm_pages setting. I want to increase it, but I don’t found the param in the postgres.conf. This param exists? If not? How can I deal with bloated tables? I have many tables with lot of insert/updates/delete (aprox. 5,000,000 per day) Thanks!",
"msg_date": "Tue, 20 Sep 2011 00:28:43 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "where is max_fsm_pages in PG9.0?"
},
{
"msg_contents": "On Mon, Sep 19, 2011 at 10:28 PM, Anibal David Acosta <[email protected]> wrote:\n> I have a lot of wasted bytes in some tables.\n>\n> Somewhere I read that maybe auto-vacuum can’t release space due to a low\n> max_fsm_pages setting.\n\nIt's no longer there, as fsm was moved from memory (which was limited\nby max fsm pages) to disk, which is limited by the size of your disks.\n\nMost likely you aren't vacuuming aggresively enough. Lower\nautovacuum_vacuum_cost_delay and raise autovacuum_vacuum_cost_limit,\nand possibly raise max threads for vacuum works (forget the name)\n",
"msg_date": "Mon, 19 Sep 2011 22:49:50 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where is max_fsm_pages in PG9.0?"
}
] |
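A sketch of the tuning Scott outlines. The global knobs live in postgresql.conf; the worker setting he could not recall is presumably autovacuum_max_workers. Since 8.4, individual tables can also be made more aggressive through storage parameters, as below; the table name and the specific values are examples only.

```sql
-- Inspect the global settings (values in comments are common defaults, not prescriptions).
SHOW autovacuum_vacuum_cost_delay;   -- 20ms by default
SHOW autovacuum_vacuum_cost_limit;   -- -1 by default, i.e. falls back to vacuum_cost_limit (200)
SHOW autovacuum_max_workers;         -- 3 by default

-- Make autovacuum more aggressive for one busy table (hypothetical name and values):
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.05,  -- vacuum after roughly 5% of rows change
    autovacuum_vacuum_cost_delay   = 10     -- throttle less than the global default
);
```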
[
{
"msg_contents": "Hello Everyone,\n\nI had posted a query in \"GENERAL\" category, not sure if that was the\ncorrect category to post.\n\nPlease help me understand how to calculate free space in Tables and Indexes\neven after vacuuming and analyzing is performed.\n\nWhat i understand is that, even if we perform VACUUM ANALYZE regularly, the\nfree space generated is not filled up.\n\nI see lot of free spaces or free pages in Tables and Indexes. But, I need to\ngive an exact calculation on how much space will be reclaimed after VACUUM\nFULL and RE-INDEXING.\n\nIs there any standard procedure or process to calculate the same ?\n\nPlease help !\n\nThanks\nVenkat\n\nHello Everyone,I had posted a query in \"GENERAL\" category, not sure if that was the correct category to post.Please help me understand how to calculate free space in Tables and Indexes even after vacuuming and analyzing is performed.\nWhat i understand is that, even if we perform VACUUM ANALYZE regularly, the free space generated is not filled up.I see lot of free spaces or free pages in Tables and Indexes. But, I need to give an exact calculation on how much space will be reclaimed after VACUUM FULL and RE-INDEXING.\nIs there any standard procedure or process to calculate the same ?Please help !ThanksVenkat",
"msg_date": "Tue, 20 Sep 2011 21:52:42 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": Performance Improvement Strategy"
},
{
"msg_contents": "W dniu 2011-09-20 18:22, Venkat Balaji pisze:\n> Hello Everyone,\n> \n> I had posted a query in \"GENERAL\" category, not sure if that was the\n> correct category to post.\n> \n> Please help me understand how to calculate free space in Tables and\n> Indexes even after vacuuming and analyzing is performed.\n> \n> What i understand is that, even if we perform VACUUM ANALYZE regularly,\n> the free space generated is not filled up.\n> \n> I see lot of free spaces or free pages in Tables and Indexes. But, I\n> need to give an exact calculation on how much space will be reclaimed\n> after VACUUM FULL and RE-INDEXING.\n> \n> Is there any standard procedure or process to calculate the same ?\n\nHello!\nI hope this link will be usefull for you :\nhttp://wiki.postgresql.org/wiki/Show_database_bloat\nRegards\n",
"msg_date": "Tue, 20 Sep 2011 19:51:36 +0200",
"msg_from": "=?ISO-8859-2?Q?Marcin_Miros=B3aw?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Venkat,\n\n> I see lot of free spaces or free pages in Tables and Indexes. But, I need to\n> give an exact calculation on how much space will be reclaimed after VACUUM\n> FULL and RE-INDEXING.\n\nAt present, there is no way to calculate this precisely. You can only\nestimate, and estimates have significant error factors. The query which\nMarcin linked for you, for example, can be as much as 500% off (although\nusually only 50% off).\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 20 Sep 2011 11:09:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "On 21/09/11 06:09, Josh Berkus wrote:\n> Venkat,\n>\n>> I see lot of free spaces or free pages in Tables and Indexes. But, I need to\n>> give an exact calculation on how much space will be reclaimed after VACUUM\n>> FULL and RE-INDEXING.\n> At present, there is no way to calculate this precisely. You can only\n> estimate, and estimates have significant error factors. The query which\n> Marcin linked for you, for example, can be as much as 500% off (although\n> usually only 50% off).\n>\n\nIf you have autovacuum on (which should be typical thee days), then \nusing the freespacemap contrib module should give very accurate results:\n\nSELECT oid::regclass,\n pg_relation_size(oid)/(1024*1024) AS mb,\n sum(free)/(1024*1024) AS free_mb\n FROM\n (SELECT oid, (pg_freespace(oid)).avail AS free\n FROM pg_class) AS a\n GROUP BY a.oid ORDER BY free_mb DESC;\n\n\nregards\n\nMark\n",
"msg_date": "Wed, 21 Sep 2011 10:05:07 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "On 21/09/11 10:05, Mark Kirkwood wrote:\n>\n> ...then using the freespacemap contrib module should give very \n> accurate results:\n>\n\nSorry, should have said - for 8.4 and later!\n",
"msg_date": "Wed, 21 Sep 2011 10:10:49 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
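Before running Mark's query the module has to be installed. A sketch, assuming superuser access: on 9.1 and later CREATE EXTENSION is enough, while on 8.4/9.0 the contrib SQL script shipped with the server is run instead (the path below is only an example and varies by packaging).

```sql
-- PostgreSQL 9.1 and later:
CREATE EXTENSION pg_freespacemap;

-- PostgreSQL 8.4 / 9.0: run the contrib script instead (example path only):
--   psql -d mydb -f /usr/share/postgresql/9.0/contrib/pg_freespacemap.sql

-- pg_freespace() returns one row per page: blkno is the page number within the
-- relation and avail is the free space recorded for that page, in bytes.
SELECT count(*) AS pages, sum(avail) AS free_bytes
FROM pg_freespace('some_table');   -- relation name is a placeholder
```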
{
"msg_contents": "Thank Everyone for your inputs !\n\nMark,\n\nWe are using 9.0, so, i should be able to make use of this \"freespacemap\"\ncontrib module and would get back to you with the results.\n\nI was using below query (which i got it by googling)..\n\nBut, was not sure, if its picking up the correct information. I want to\navoid mis-prediction cost after whole production has been scheduled for\ndowntime for maintenance.\n\nSELECT\n current_database(), schemaname, tablename, /*reltuples::bigint,\nrelpages::bigint, otta,*/\n ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS\ntbloat,\n CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END\nAS wastedbytes,\n iname, /*ituples::bigint, ipages::bigint, iotta,*/\n ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric\nEND,1) AS ibloat,\n CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS\nwastedibytes\nFROM (\n SELECT\n schemaname, tablename, cc.reltuples, cc.relpages, bs,\n CEIL((cc.reltuples*((datahdr+ma-\n (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma\nEND))+nullhdr2+4))/(bs-20::float)) AS otta,\n COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples,\nCOALESCE(c2.relpages,0) AS ipages,\n COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta --\nvery rough approximation, assumes all cols\n FROM (\n SELECT\n ma,bs,schemaname,tablename,\n (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma\nEND)))::numeric AS datahdr,\n (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE\nnullhdr%ma END))) AS nullhdr2\n FROM (\n SELECT\n schemaname, tablename, hdr, ma, bs,\n SUM((1-null_frac)*avg_width) AS datawidth,\n MAX(null_frac) AS maxfracsum,\n hdr+(\n SELECT 1+count(*)/8\n FROM pg_stats s2\n WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND\ns2.tablename = s.tablename\n ) AS nullhdr\n FROM pg_stats s, (\n SELECT\n (SELECT current_setting('block_size')::numeric) AS bs,\n CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23\nEND AS hdr,\n CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n FROM (SELECT version() AS v) AS foo\n ) AS constants\n GROUP BY 1,2,3,4,5\n ) AS foo\n ) AS rs\n JOIN pg_class cc ON cc.relname = rs.tablename\n JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname =\nrs.schemaname AND nn.nspname <> 'information_schema'\n LEFT JOIN pg_index i ON indrelid = cc.oid\n LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n) AS sml\nORDER BY wastedbytes DESC\n\nThanks\nVenkat\n\nOn Wed, Sep 21, 2011 at 3:40 AM, Mark Kirkwood <\[email protected]> wrote:\n\n> On 21/09/11 10:05, Mark Kirkwood wrote:\n>\n>>\n>> ...then using the freespacemap contrib module should give very accurate\n>> results:\n>>\n>>\n> Sorry, should have said - for 8.4 and later!\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nThank Everyone for your inputs !Mark,We are using 9.0, so, i should be able to make use of this \"freespacemap\" contrib module and would get back to you with the results.\nI was using below query (which i got it by googling)..But, was not sure, if its picking up the correct information. 
I want to avoid mis-prediction cost after whole production has been scheduled for downtime for maintenance.\nSELECT current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/ \n ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat, CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes, \n iname, /*ituples::bigint, ipages::bigint, iotta,*/ ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat, \n CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes FROM ( \n SELECT schemaname, tablename, cc.reltuples, cc.relpages, bs, CEIL((cc.reltuples*((datahdr+ma- \n (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta, COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages, \n COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols FROM ( \n SELECT ma,bs,schemaname,tablename, (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr, \n (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2 FROM ( \n SELECT schemaname, tablename, hdr, ma, bs, SUM((1-null_frac)*avg_width) AS datawidth, \n MAX(null_frac) AS maxfracsum, hdr+( SELECT 1+count(*)/8 \n FROM pg_stats s2 WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename \n ) AS nullhdr FROM pg_stats s, ( SELECT \n (SELECT current_setting('block_size')::numeric) AS bs, CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr, \n CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma FROM (SELECT version() AS v) AS foo \n ) AS constants GROUP BY 1,2,3,4,5 ) AS foo \n ) AS rs JOIN pg_class cc ON cc.relname = rs.tablename JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema' \n LEFT JOIN pg_index i ON indrelid = cc.oid LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid \n) AS sml ORDER BY wastedbytes DESC ThanksVenkat\nOn Wed, Sep 21, 2011 at 3:40 AM, Mark Kirkwood <[email protected]> wrote:\nOn 21/09/11 10:05, Mark Kirkwood wrote:\n\n\n...then using the freespacemap contrib module should give very accurate results:\n\n\n\nSorry, should have said - for 8.4 and later!\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 21 Sep 2011 11:08:13 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Can you please help me understand what \"blkno\" column refers to ?\n\nThanks\nVenkat\n\nOn Wed, Sep 21, 2011 at 11:08 AM, Venkat Balaji <[email protected]>wrote:\n\n> Thank Everyone for your inputs !\n>\n> Mark,\n>\n> We are using 9.0, so, i should be able to make use of this \"freespacemap\"\n> contrib module and would get back to you with the results.\n>\n> I was using below query (which i got it by googling)..\n>\n> But, was not sure, if its picking up the correct information. I want to\n> avoid mis-prediction cost after whole production has been scheduled for\n> downtime for maintenance.\n>\n> SELECT\n> current_database(), schemaname, tablename, /*reltuples::bigint,\n> relpages::bigint, otta,*/\n> ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS\n> tbloat,\n> CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END\n> AS wastedbytes,\n> iname, /*ituples::bigint, ipages::bigint, iotta,*/\n> ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric\n> END,1) AS ibloat,\n> CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS\n> wastedibytes\n> FROM (\n> SELECT\n> schemaname, tablename, cc.reltuples, cc.relpages, bs,\n> CEIL((cc.reltuples*((datahdr+ma-\n> (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma\n> END))+nullhdr2+4))/(bs-20::float)) AS otta,\n> COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples,\n> COALESCE(c2.relpages,0) AS ipages,\n> COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta\n> -- very rough approximation, assumes all cols\n> FROM (\n> SELECT\n> ma,bs,schemaname,tablename,\n> (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma\n> END)))::numeric AS datahdr,\n> (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE\n> nullhdr%ma END))) AS nullhdr2\n> FROM (\n> SELECT\n> schemaname, tablename, hdr, ma, bs,\n> SUM((1-null_frac)*avg_width) AS datawidth,\n> MAX(null_frac) AS maxfracsum,\n> hdr+(\n> SELECT 1+count(*)/8\n> FROM pg_stats s2\n> WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND\n> s2.tablename = s.tablename\n> ) AS nullhdr\n> FROM pg_stats s, (\n> SELECT\n> (SELECT current_setting('block_size')::numeric) AS bs,\n> CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE\n> 23 END AS hdr,\n> CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n> FROM (SELECT version() AS v) AS foo\n> ) AS constants\n> GROUP BY 1,2,3,4,5\n> ) AS foo\n> ) AS rs\n> JOIN pg_class cc ON cc.relname = rs.tablename\n> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname =\n> rs.schemaname AND nn.nspname <> 'information_schema'\n> LEFT JOIN pg_index i ON indrelid = cc.oid\n> LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n> ) AS sml\n> ORDER BY wastedbytes DESC\n>\n> Thanks\n> Venkat\n>\n>\n> On Wed, Sep 21, 2011 at 3:40 AM, Mark Kirkwood <\n> [email protected]> wrote:\n>\n>> On 21/09/11 10:05, Mark Kirkwood wrote:\n>>\n>>>\n>>> ...then using the freespacemap contrib module should give very accurate\n>>> results:\n>>>\n>>>\n>> Sorry, should have said - for 8.4 and later!\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n>> org <[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>>\n>\n>\n\nCan you please help me understand what \"blkno\" column refers to ?ThanksVenkatOn Wed, Sep 21, 2011 at 11:08 AM, Venkat Balaji <[email protected]> wrote:\nThank Everyone for your inputs 
!Mark,We are using 9.0, so, i should be able to make use of this \"freespacemap\" contrib module and would get back to you with the results.\nI was using below query (which i got it by googling)..But, was not sure, if its picking up the correct information. I want to avoid mis-prediction cost after whole production has been scheduled for downtime for maintenance.\nSELECT current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/ \n ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat, CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes, \n iname, /*ituples::bigint, ipages::bigint, iotta,*/ ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat, \n CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes FROM ( \n SELECT schemaname, tablename, cc.reltuples, cc.relpages, bs, CEIL((cc.reltuples*((datahdr+ma- \n (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta, COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages, \n COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols FROM ( \n SELECT ma,bs,schemaname,tablename, (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr, \n (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2 FROM ( \n SELECT schemaname, tablename, hdr, ma, bs, SUM((1-null_frac)*avg_width) AS datawidth, \n MAX(null_frac) AS maxfracsum, hdr+( SELECT 1+count(*)/8 \n FROM pg_stats s2 WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename \n ) AS nullhdr FROM pg_stats s, ( SELECT \n (SELECT current_setting('block_size')::numeric) AS bs, CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr, \n CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma FROM (SELECT version() AS v) AS foo \n ) AS constants GROUP BY 1,2,3,4,5 ) AS foo \n ) AS rs JOIN pg_class cc ON cc.relname = rs.tablename JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema' \n LEFT JOIN pg_index i ON indrelid = cc.oid LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid \n) AS sml ORDER BY wastedbytes DESC ThanksVenkat\nOn Wed, Sep 21, 2011 at 3:40 AM, Mark Kirkwood <[email protected]> wrote:\n\nOn 21/09/11 10:05, Mark Kirkwood wrote:\n\n\n...then using the freespacemap contrib module should give very accurate results:\n\n\n\nSorry, should have said - for 8.4 and later!\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 21 Sep 2011 11:29:23 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "On 09/20/2011 11:22 AM, Venkat Balaji wrote:\n\n> Please help me understand how to calculate free space in Tables and\n> Indexes even after vacuuming and analyzing is performed.\n\nBesides the query Mark gave you using freespacemap, there's also the \npgstattuple contrib module. You'd use it like this:\n\nSELECT pg_size_pretty(free_space) AS mb_free\n FROM pgstattuple('some_table');\n\nQuery must be run as a super-user, and I wouldn't recommend running it \non huge tables, since it scans the actual data files to get its \ninformation. There's a lot of other useful information in that function, \nsuch as the number of dead rows.\n\n> What i understand is that, even if we perform VACUUM ANALYZE\n> regularly, the free space generated is not filled up.\n\nVACUUM does not actually generate free space. It locates and marks \nreusable tuples. Any future updates or inserts on that table will be put \nin those newly reclaimed spots, instead of being bolted onto the end of \nthe table.\n\n> I see lot of free spaces or free pages in Tables and Indexes. But, I\n> need to give an exact calculation on how much space will be reclaimed\n> after VACUUM FULL and RE-INDEXING.\n\nWhy? If your database is so desperate for space, VACUUM and REINDEX \nwon't really help you. A properly maintained database will still have a \ncertain amount of \"bloat\" equal to the number of rows that change \nbetween maintenance intervals. One way or another, that space is going \nto be used by *something*.\n\nIt sounds more like you need to tweak your autovacuum settings to be \nmore aggressive if you're seeing significant enough turnover that your \ntables are bloating significantly. One of our tables, for instance, gets \nvacuumed more than once per hour because it experiences 1,000% turnover \ndaily.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 21 Sep 2011 08:03:13 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
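For reference, a sketch of the fuller pgstattuple output Shaun mentions; the relation names are placeholders, and since the functions scan the underlying files they are best kept away from very large tables during peak hours.

```sql
-- Whole-table statistics (superuser; physically scans the relation):
SELECT * FROM pgstattuple('some_table');
-- Key columns include:
--   table_len            total size of the relation in bytes
--   tuple_percent        share of the table occupied by live rows
--   dead_tuple_count     dead rows not yet reclaimed by vacuum
--   dead_tuple_percent   share of the table occupied by dead rows
--   free_space           reusable free bytes
--   free_percent         free space as a percentage of table_len

-- The companion function for btree indexes:
SELECT * FROM pgstatindex('some_table_pkey');
```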
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n> Venkat Balaji wrote:\n \n>> I see lot of free spaces or free pages in Tables and Indexes. \n>> But, I need to give an exact calculation on how much space will\n>> be reclaimed after VACUUM FULL and RE-INDEXING.\n> \n> Why?\n \nI've been wondering that, too. And talking about the space being\n\"reclaimed\" seems to be at odds with your subject line. The space\nis given up by the database engine to the file system free space,\nwhere reuse by the database will be much more expensive. For good\nperformance you want some free space in the tables and indexes,\nwhere it can be allocated to new tuples without going out through OS\ncalls to the file system.\n \nClearly, if free space gets higher than necessary to support\ncreation of new tuples, it can start to harm performance, and you\nmay need to take aggressive action (such as CLUSTER) to reclaim it;\nbut any time you find it necessary to do *that* you should be\ninvestigating what went wrong to put you in such a spot. Either\nyour autovacuum is (as Shaun suggested) not aggressive enough, or\nyou have some long running transaction (possibly \"idle in\ntransaction\") which is preventing vacuums from doing their work\neffectively. Investigating that is going to help more than\ncalculating just how much space the database is going to give up to\nfile system free space.\n \n-Kevin\n",
"msg_date": "Wed, 21 Sep 2011 08:57:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Thank you very much for your detailed explanation !\n\nI will be working on our existing \"auto-vacuuming\" strategy to see\nif that's optimal. But, we do have VACUUM VERBOSE ANALYZE running at the\ncluster level every day and auto-vacuum is aggressive for highly active\ntables.\n\nToday, we have vacuumed a 10GB table and the table size decreased to 5 GB.\n\nI understand that, it would very expensive for the table to reclaim the\nspace back from the filesystem. We have decided to do the maintenance after\na thorough analysis and our databases were not subjected to any kind of\nmaintenance activity since 2 yrs (with downtime).\n\nI as a DBA, suggested to perform VACUUM FULL and RE-INDEXING + ANALYZE to\nensure that IO performance and Indexing performance would be good and the PG\noptimizer would pick up the optimal plan. As said earlier, our databases\nhave never been part of any re-organization since 2 years and are highly\ntransactional databases. I believe that, performing VACUUM FULL and\nRE-INDEXING would have tightly packed rows (in every page) would ensure good\nIOs.\n\nI might have not put across the explanation in an understandable manner.\n\nPlease help me know the following -\n\n1. When would pg_stat_user_tables will be updated and what would the\ninformation show ?\n2. Will the information about dead-rows and live-rows vanish after VACUUM or\nANALYZE or VACUUM FULL ?\n\nI am just preparing a monitoring system which would help us know the rate of\nbloats and data generation on daily basis.\n\nSorry for the long email !\n\nLooking forward for your help !\n\nThanks\nVenkat\n\n\n\n\nOn Wed, Sep 21, 2011 at 7:27 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Shaun Thomas <[email protected]> wrote:\n> > Venkat Balaji wrote:\n>\n> >> I see lot of free spaces or free pages in Tables and Indexes.\n> >> But, I need to give an exact calculation on how much space will\n> >> be reclaimed after VACUUM FULL and RE-INDEXING.\n> >\n> > Why?\n>\n> I've been wondering that, too. And talking about the space being\n> \"reclaimed\" seems to be at odds with your subject line. The space\n> is given up by the database engine to the file system free space,\n> where reuse by the database will be much more expensive. For good\n> performance you want some free space in the tables and indexes,\n> where it can be allocated to new tuples without going out through OS\n> calls to the file system.\n>\n> Clearly, if free space gets higher than necessary to support\n> creation of new tuples, it can start to harm performance, and you\n> may need to take aggressive action (such as CLUSTER) to reclaim it;\n> but any time you find it necessary to do *that* you should be\n> investigating what went wrong to put you in such a spot. Either\n> your autovacuum is (as Shaun suggested) not aggressive enough, or\n> you have some long running transaction (possibly \"idle in\n> transaction\") which is preventing vacuums from doing their work\n> effectively. Investigating that is going to help more than\n> calculating just how much space the database is going to give up to\n> file system free space.\n>\n> -Kevin\n>\n\nThank you very much for your detailed explanation !I will be working on our existing \"auto-vacuuming\" strategy to see if that's optimal. 
But, we do have VACUUM VERBOSE ANALYZE running at the cluster level every day and auto-vacuum is aggressive for highly active tables.\nToday, we have vacuumed a 10GB table and the table size decreased to 5 GB.I understand that, it would very expensive for the table to reclaim the space back from the filesystem. We have decided to do the maintenance after a thorough analysis and our databases were not subjected to any kind of maintenance activity since 2 yrs (with downtime).\nI as a DBA, suggested to perform VACUUM FULL and RE-INDEXING + ANALYZE to ensure that IO performance and Indexing performance would be good and the PG optimizer would pick up the optimal plan. As said earlier, our databases have never been part of any re-organization since 2 years and are highly transactional databases. I believe that, performing VACUUM FULL and RE-INDEXING would have tightly packed rows (in every page) would ensure good IOs.\nI might have not put across the explanation in an understandable manner.Please help me know the following -1. When would pg_stat_user_tables will be updated and what would the information show ?\n2. Will the information about dead-rows and live-rows vanish after VACUUM or ANALYZE or VACUUM FULL ?I am just preparing a monitoring system which would help us know the rate of bloats and data generation on daily basis.\nSorry for the long email !Looking forward for your help !ThanksVenkat\n\nOn Wed, Sep 21, 2011 at 7:27 PM, Kevin Grittner <[email protected]> wrote:\n\nShaun Thomas <[email protected]> wrote:\n> Venkat Balaji wrote:\n\n>> I see lot of free spaces or free pages in Tables and Indexes.\n>> But, I need to give an exact calculation on how much space will\n>> be reclaimed after VACUUM FULL and RE-INDEXING.\n>\n> Why?\n\nI've been wondering that, too. And talking about the space being\n\"reclaimed\" seems to be at odds with your subject line. The space\nis given up by the database engine to the file system free space,\nwhere reuse by the database will be much more expensive. For good\nperformance you want some free space in the tables and indexes,\nwhere it can be allocated to new tuples without going out through OS\ncalls to the file system.\n\nClearly, if free space gets higher than necessary to support\ncreation of new tuples, it can start to harm performance, and you\nmay need to take aggressive action (such as CLUSTER) to reclaim it;\nbut any time you find it necessary to do *that* you should be\ninvestigating what went wrong to put you in such a spot. Either\nyour autovacuum is (as Shaun suggested) not aggressive enough, or\nyou have some long running transaction (possibly \"idle in\ntransaction\") which is preventing vacuums from doing their work\neffectively. Investigating that is going to help more than\ncalculating just how much space the database is going to give up to\nfile system free space.\n\n-Kevin",
"msg_date": "Wed, 21 Sep 2011 21:43:45 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
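On Venkat's first question, pg_stat_user_tables is updated continuously by the statistics collector, and its dead-row counter falls back once the table is vacuumed. A sketch of a simple daily monitoring query along those lines; the ordering and LIMIT are arbitrary.

```sql
-- Daily turnover / bloat monitoring from the statistics collector
-- (ordering and LIMIT are arbitrary; all columns exist in 9.0):
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```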
{
"msg_contents": "On 09/21/2011 12:13 PM, Venkat Balaji wrote:\n> I as a DBA, suggested to perform VACUUM FULL and RE-INDEXING + ANALYZE \n> to ensure that IO performance and Indexing performance would be good\n\n\nRead http://wiki.postgresql.org/wiki/VACUUM_FULL before you run VACUUM \nFULL. You probably don't want to do that. A multi-gigabyte table can \neasily be unavailable for several hours if you execute VACUUM FULL \nagainst it. CLUSTER is almost always faster.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 21 Sep 2011 13:57:51 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Thanks Greg !\n\nIf i got it correct, CLUSTER would do the same what VACUUM FULL does (except\nbeing fast).\n\nCLUSTER is recommended only because it is faster ? As per the link, the\ntable would be unavailable (for shorter period compared to VACUUM FULL) when\nCLUSTER is executed as well. Hope i got it correct !\n\nThanks\nVenkat\n\nOn Wed, Sep 21, 2011 at 11:27 PM, Greg Smith <[email protected]> wrote:\n\n> On 09/21/2011 12:13 PM, Venkat Balaji wrote:\n>\n>> I as a DBA, suggested to perform VACUUM FULL and RE-INDEXING + ANALYZE to\n>> ensure that IO performance and Indexing performance would be good\n>>\n>\n>\n> Read http://wiki.postgresql.org/**wiki/VACUUM_FULL<http://wiki.postgresql.org/wiki/VACUUM_FULL>before you run VACUUM FULL. You probably don't want to do that. A\n> multi-gigabyte table can easily be unavailable for several hours if you\n> execute VACUUM FULL against it. CLUSTER is almost always faster.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nThanks Greg !If i got it correct, CLUSTER would do the same what VACUUM FULL does (except being fast).CLUSTER is recommended only because it is faster ? As per the link, the table would be unavailable (for shorter period compared to VACUUM FULL) when CLUSTER is executed as well. Hope i got it correct !\nThanksVenkatOn Wed, Sep 21, 2011 at 11:27 PM, Greg Smith <[email protected]> wrote:\nOn 09/21/2011 12:13 PM, Venkat Balaji wrote:\n\nI as a DBA, suggested to perform VACUUM FULL and RE-INDEXING + ANALYZE to ensure that IO performance and Indexing performance would be good\n\n\n\nRead http://wiki.postgresql.org/wiki/VACUUM_FULL before you run VACUUM FULL. You probably don't want to do that. A multi-gigabyte table can easily be unavailable for several hours if you execute VACUUM FULL against it. CLUSTER is almost always faster.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 21 Sep 2011 23:48:39 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Venkat Balaji <[email protected]> wrote:\n \n> If i got it correct, CLUSTER would do the same what VACUUM FULL\n> does (except being fast)\n \nCLUSTER copies the table (in the sequence of the specified index) to\na new set of files, builds fresh indexes, and then replaces the\noriginal set of files with the new ones. So you do need room on\ndisk for a second copy of the table, but it tends to be much faster\nthen VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\nthe table data rather than using an index.) REINDEX is not needed\nwhen using CLUSTER or 9.x VACUUM FULL. Older versions of VACUUM\nFULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\ngenerally a good idea.\n \nWhen choosing an index for CLUSTER, pick one on which you often\nsearch for a *range* of rows, if possible. Like a name column if\nyou do a lot of name searches.\n \n-Kevin\n",
"msg_date": "Wed, 21 Sep 2011 13:41:17 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
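A short sketch of the commands Kevin describes, with placeholder table and index names. CLUSTER takes an exclusive lock for the duration and needs roughly enough free disk for a second copy of the table, and refreshing statistics afterwards is generally worthwhile.

```sql
-- One-off rewrite of the table in index order (exclusive lock for the duration,
-- and roughly a second copy's worth of free disk is needed):
CLUSTER orders USING orders_created_at_idx;
ANALYZE orders;          -- refresh planner statistics after the rewrite

-- Later maintenance runs can simply repeat:
CLUSTER orders;          -- reuses the index recorded by the earlier CLUSTER
```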
{
"msg_contents": "We had performed VACUUM FULL on our production and performance has improved\na lot !\n\nI started using pg_stattuple and pg_freespacemap for tracking freespace in\nthe tables and Indexes and is helping us a lot.\n\nThanks for all your inputs and help !\n\nRegards,\nVB\n\nOn Thu, Sep 22, 2011 at 12:11 AM, Kevin Grittner <\[email protected]> wrote:\n\n> Venkat Balaji <[email protected]> wrote:\n>\n> > If i got it correct, CLUSTER would do the same what VACUUM FULL\n> > does (except being fast)\n>\n> CLUSTER copies the table (in the sequence of the specified index) to\n> a new set of files, builds fresh indexes, and then replaces the\n> original set of files with the new ones. So you do need room on\n> disk for a second copy of the table, but it tends to be much faster\n> then VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n> 9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\n> the table data rather than using an index.) REINDEX is not needed\n> when using CLUSTER or 9.x VACUUM FULL. Older versions of VACUUM\n> FULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\n> generally a good idea.\n>\n> When choosing an index for CLUSTER, pick one on which you often\n> search for a *range* of rows, if possible. Like a name column if\n> you do a lot of name searches.\n>\n> -Kevin\n>\n\nWe had performed VACUUM FULL on our production and performance has improved a lot !I started using pg_stattuple and pg_freespacemap for tracking freespace in the tables and Indexes and is helping us a lot.\nThanks for all your inputs and help !Regards,VBOn Thu, Sep 22, 2011 at 12:11 AM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> If i got it correct, CLUSTER would do the same what VACUUM FULL\n> does (except being fast)\n\nCLUSTER copies the table (in the sequence of the specified index) to\na new set of files, builds fresh indexes, and then replaces the\noriginal set of files with the new ones. So you do need room on\ndisk for a second copy of the table, but it tends to be much faster\nthen VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\nthe table data rather than using an index.) REINDEX is not needed\nwhen using CLUSTER or 9.x VACUUM FULL. Older versions of VACUUM\nFULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\ngenerally a good idea.\n\nWhen choosing an index for CLUSTER, pick one on which you often\nsearch for a *range* of rows, if possible. Like a name column if\nyou do a lot of name searches.\n\n-Kevin",
"msg_date": "Tue, 27 Sep 2011 17:59:06 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Forgot to mention -\n\nKevin,\n\nCLUSTER seems to be an very interesting concept to me.\n\nI am thinking to test the CLUSTER TABLE on our production according to the\nIndex usage on the table.\n\nWill let you know once i get the results.\n\nRegards,\nVB\n\nOn Tue, Sep 27, 2011 at 5:59 PM, Venkat Balaji <[email protected]>wrote:\n\n> We had performed VACUUM FULL on our production and performance has improved\n> a lot !\n>\n> I started using pg_stattuple and pg_freespacemap for tracking freespace in\n> the tables and Indexes and is helping us a lot.\n>\n> Thanks for all your inputs and help !\n>\n> Regards,\n> VB\n>\n>\n> On Thu, Sep 22, 2011 at 12:11 AM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> Venkat Balaji <[email protected]> wrote:\n>>\n>> > If i got it correct, CLUSTER would do the same what VACUUM FULL\n>> > does (except being fast)\n>>\n>> CLUSTER copies the table (in the sequence of the specified index) to\n>> a new set of files, builds fresh indexes, and then replaces the\n>> original set of files with the new ones. So you do need room on\n>> disk for a second copy of the table, but it tends to be much faster\n>> then VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n>> 9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\n>> the table data rather than using an index.) REINDEX is not needed\n>> when using CLUSTER or 9.x VACUUM FULL. Older versions of VACUUM\n>> FULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\n>> generally a good idea.\n>>\n>> When choosing an index for CLUSTER, pick one on which you often\n>> search for a *range* of rows, if possible. Like a name column if\n>> you do a lot of name searches.\n>>\n>> -Kevin\n>>\n>\n>\n\nForgot to mention -Kevin,CLUSTER seems to be an very interesting concept to me.I am thinking to test the CLUSTER TABLE on our production according to the Index usage on the table.\nWill let you know once i get the results.Regards,VBOn Tue, Sep 27, 2011 at 5:59 PM, Venkat Balaji <[email protected]> wrote:\nWe had performed VACUUM FULL on our production and performance has improved a lot !I started using pg_stattuple and pg_freespacemap for tracking freespace in the tables and Indexes and is helping us a lot.\nThanks for all your inputs and help !Regards,VBOn Thu, Sep 22, 2011 at 12:11 AM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> If i got it correct, CLUSTER would do the same what VACUUM FULL\n> does (except being fast)\n\nCLUSTER copies the table (in the sequence of the specified index) to\na new set of files, builds fresh indexes, and then replaces the\noriginal set of files with the new ones. So you do need room on\ndisk for a second copy of the table, but it tends to be much faster\nthen VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\nthe table data rather than using an index.) REINDEX is not needed\nwhen using CLUSTER or 9.x VACUUM FULL. Older versions of VACUUM\nFULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\ngenerally a good idea.\n\nWhen choosing an index for CLUSTER, pick one on which you often\nsearch for a *range* of rows, if possible. Like a name column if\nyou do a lot of name searches.\n\n-Kevin",
"msg_date": "Tue, 27 Sep 2011 18:01:01 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Hello,\n\nThanks for your suggestions !\n\nWe CLUSTERED a table using mostly used Index. Application is performing\nbetter now.\n\nThanks\nVB\n\nOn Tue, Sep 27, 2011 at 6:01 PM, Venkat Balaji <[email protected]>wrote:\n\n> Forgot to mention -\n>\n> Kevin,\n>\n> CLUSTER seems to be an very interesting concept to me.\n>\n> I am thinking to test the CLUSTER TABLE on our production according to the\n> Index usage on the table.\n>\n> Will let you know once i get the results.\n>\n> Regards,\n> VB\n>\n> On Tue, Sep 27, 2011 at 5:59 PM, Venkat Balaji <[email protected]>wrote:\n>\n>> We had performed VACUUM FULL on our production and performance has\n>> improved a lot !\n>>\n>> I started using pg_stattuple and pg_freespacemap for tracking freespace in\n>> the tables and Indexes and is helping us a lot.\n>>\n>> Thanks for all your inputs and help !\n>>\n>> Regards,\n>> VB\n>>\n>>\n>> On Thu, Sep 22, 2011 at 12:11 AM, Kevin Grittner <\n>> [email protected]> wrote:\n>>\n>>> Venkat Balaji <[email protected]> wrote:\n>>>\n>>> > If i got it correct, CLUSTER would do the same what VACUUM FULL\n>>> > does (except being fast)\n>>>\n>>> CLUSTER copies the table (in the sequence of the specified index) to\n>>> a new set of files, builds fresh indexes, and then replaces the\n>>> original set of files with the new ones. So you do need room on\n>>> disk for a second copy of the table, but it tends to be much faster\n>>> then VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n>>> 9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\n>>> the table data rather than using an index.) REINDEX is not needed\n>>> when using CLUSTER or 9.x VACUUM FULL. Older versions of VACUUM\n>>> FULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\n>>> generally a good idea.\n>>>\n>>> When choosing an index for CLUSTER, pick one on which you often\n>>> search for a *range* of rows, if possible. Like a name column if\n>>> you do a lot of name searches.\n>>>\n>>> -Kevin\n>>>\n>>\n>>\n>\n\nHello,Thanks for your suggestions !We CLUSTERED a table using mostly used Index. Application is performing better now.ThanksVB\nOn Tue, Sep 27, 2011 at 6:01 PM, Venkat Balaji <[email protected]> wrote:\nForgot to mention -Kevin,CLUSTER seems to be an very interesting concept to me.I am thinking to test the CLUSTER TABLE on our production according to the Index usage on the table.\nWill let you know once i get the results.Regards,VBOn Tue, Sep 27, 2011 at 5:59 PM, Venkat Balaji <[email protected]> wrote:\nWe had performed VACUUM FULL on our production and performance has improved a lot !I started using pg_stattuple and pg_freespacemap for tracking freespace in the tables and Indexes and is helping us a lot.\nThanks for all your inputs and help !Regards,VBOn Thu, Sep 22, 2011 at 12:11 AM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> If i got it correct, CLUSTER would do the same what VACUUM FULL\n> does (except being fast)\n\nCLUSTER copies the table (in the sequence of the specified index) to\na new set of files, builds fresh indexes, and then replaces the\noriginal set of files with the new ones. So you do need room on\ndisk for a second copy of the table, but it tends to be much faster\nthen VACUUM FULL in PostgreSQL versions before 9.0. (Starting in\n9.0, VACUUM FULL does the same thing as CLUSTER except that it scans\nthe table data rather than using an index.) REINDEX is not needed\nwhen using CLUSTER or 9.x VACUUM FULL. 
Older versions of VACUUM\nFULL would tend to bloat indexes, so a REINDEX after VACUUM FULL was\ngenerally a good idea.\n\nWhen choosing an index for CLUSTER, pick one on which you often\nsearch for a *range* of rows, if possible. Like a name column if\nyou do a lot of name searches.\n\n-Kevin",
"msg_date": "Mon, 3 Oct 2011 16:29:42 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Venkat Balaji <[email protected]> wrote:\n \n> We CLUSTERED a table using mostly used Index. Application is\n> performing better now.\n \nCLUSTER can help in at least four ways:\n \n(1) It eliminates bloat in the table heap.\n \n(2) It eliminates bloat in the indexes.\n \n(3) It can correct fragmentation in the underlying disk files.\n \n(4) It can put tuples which are accessed by the same query into\nadjacent locations on disk, reducing physical disk access.\n \nAn aggressive autovacuum configuration can generally prevent the\nfirst two from coming back to haunt you, and the third may not be a\nbig problem (depending on your OS and file system), but that last\none is a benefit which will degrade over time in most use cases --\nthe order in the heap is set by the cluster, but not maintained\nafter that. If this ordering is a significant part of the\nperformance improvement you're seeing, you may want to schedule some\nregular CLUSTER run. It's hard to say what frequency would make\nsense, but if performance gradually deteriorates and a CLUSTER fixes\nit, you'll get a sense of how often it pays to do it.\n \n-Kevin\n",
"msg_date": "Mon, 03 Oct 2011 11:15:17 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
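A minimal sketch of the maintenance Kevin describes, assuming placeholder table and index names (CLUSTER ... USING is the 8.3+ syntax; the first run records the index so later scheduled runs can just name the table):

-- first run: pick the index that matches the common range scans
CLUSTER some_table USING some_table_name_idx;
ANALYZE some_table;            -- refresh planner statistics after the rewrite
-- later, scheduled runs re-cluster on the remembered index
CLUSTER some_table;
-- rough check of how far the heap has drifted from that index order
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'some_table' AND attname = 'name';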
{
"msg_contents": "Thanks a lot Kevin !\n\nThis email has deepened my understanding on the clustering concept.\n\nKeeping this in mind, I have recommended a new disk layout at the OS level\nfor our production servers so that IOs will be balanced on the disks as\nwell.\n\nCurrently, we do not have mount points divided according to the type of IOs.\n\nI will share my recommended plan in an different email thread.\n\nThanks again for this detailed explanation.\n\nRegards,\nVB\n\nOn Mon, Oct 3, 2011 at 9:45 PM, Kevin Grittner\n<[email protected]>wrote:\n\n> Venkat Balaji <[email protected]> wrote:\n>\n> > We CLUSTERED a table using mostly used Index. Application is\n> > performing better now.\n>\n> CLUSTER can help in at least four ways:\n>\n> (1) It eliminates bloat in the table heap.\n>\n> (2) It eliminates bloat in the indexes.\n>\n> (3) It can correct fragmentation in the underlying disk files.\n>\n> (4) It can put tuples which are accessed by the same query into\n> adjacent locations on disk, reducing physical disk access.\n>\n> An aggressive autovacuum configuration can generally prevent the\n> first two from coming back to haunt you, and the third may not be a\n> big problem (depending on your OS and file system), but that last\n> one is a benefit which will degrade over time in most use cases --\n> the order in the heap is set by the cluster, but not maintained\n> after that. If this ordering is a significant part of the\n> performance improvement you're seeing, you may want to schedule some\n> regular CLUSTER run. It's hard to say what frequency would make\n> sense, but if performance gradually deteriorates and a CLUSTER fixes\n> it, you'll get a sense of how often it pays to do it.\n>\n> -Kevin\n>\n\nThanks a lot Kevin !This email has deepened my understanding on the clustering concept.Keeping this in mind, I have recommended a new disk layout at the OS level for our production servers so that IOs will be balanced on the disks as well.\nCurrently, we do not have mount points divided according to the type of IOs.I will share my recommended plan in an different email thread.Thanks again for this detailed explanation.\nRegards,VBOn Mon, Oct 3, 2011 at 9:45 PM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> We CLUSTERED a table using mostly used Index. Application is\n> performing better now.\n\nCLUSTER can help in at least four ways:\n\n(1) It eliminates bloat in the table heap.\n\n(2) It eliminates bloat in the indexes.\n\n(3) It can correct fragmentation in the underlying disk files.\n\n(4) It can put tuples which are accessed by the same query into\nadjacent locations on disk, reducing physical disk access.\n\nAn aggressive autovacuum configuration can generally prevent the\nfirst two from coming back to haunt you, and the third may not be a\nbig problem (depending on your OS and file system), but that last\none is a benefit which will degrade over time in most use cases --\nthe order in the heap is set by the cluster, but not maintained\nafter that. If this ordering is a significant part of the\nperformance improvement you're seeing, you may want to schedule some\nregular CLUSTER run. It's hard to say what frequency would make\nsense, but if performance gradually deteriorates and a CLUSTER fixes\nit, you'll get a sense of how often it pays to do it.\n\n-Kevin",
"msg_date": "Mon, 3 Oct 2011 22:24:19 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Hello,\n\nI was attempting to calculate the actual occupied space by a Table.\n\nBelow is what i did -\n\nI summed up the avg_width of each column of a table from pg_stats, which\ngives me the average size of a row (277 bytes).\n\nselect* sum(avg_width) as average_row_size from pg_stats *where\ntablename='tablename'\n\n average_row_size\n---------------------------\n 277\n\n(1 row)\n\nCalculated the actual occupied space by rows in the table as below -\n\n*Took the average_row_size * number_of_rows from pg_class*\n\nselect 277*reltuples/1024 as occupied_space from pg_class where\nrelname='tablename'; == 552 KB\n\n occupied_space\n-------------------------\n 552.6474609375\n\nCalculated the actual Table size (600 kb)\n\nselect pg_size_pretty(pg_relation_size('tablename'));\n\n\npg_size_pretty\n----------------\n 600 KB\n\n(1 row)\n\nCalculated the free space with in the table (by scanning the pages - as\nsuggested by Shaun Thomas) -- 14 KB\n\nSELECT pg_size_pretty(free_space) AS mb_free FROM pgstattuple('tablename');\n\n mb_free\n---------\n 14 KB\n\n(1 row)\n\n600 KB is the size of the table (taken through pg_size_pretty)\n14 KB is the free space (taken through contrib modules)\n600+14 = 586 KB -- is the occupied space by normal calculation through\ncontrib modules. This is based on number of pages allocated to the table.\n552 KB is the actual occupied size by the rows (taken by calculating avg row\nsize ). This is based on number of rows with in the pages.\n586-552 = 34 KB -- is still free some where with in the occupied pages (\ncalculated through pg_stats and pg_class )\n34 KB is still free within the pages ( each 8K ) which is basically taken as\noccupied space.\n\nThis is similar concept which i successfully applied in an other RDBMS\nTechnology to calculate space usage metrics on production.\nThis is all calculated after considering Vacuum and Analyze jobs are\nexecuted.\n\nPlease comment !\n\nSorry if this is too confusing and too long.\n\nThanks\nVB\n\nOn Wed, Sep 21, 2011 at 6:33 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 09/20/2011 11:22 AM, Venkat Balaji wrote:\n>\n> Please help me understand how to calculate free space in Tables and\n>> Indexes even after vacuuming and analyzing is performed.\n>>\n>\n> Besides the query Mark gave you using freespacemap, there's also the\n> pgstattuple contrib module. You'd use it like this:\n>\n> SELECT pg_size_pretty(free_space) AS mb_free\n> FROM pgstattuple('some_table');\n>\n> Query must be run as a super-user, and I wouldn't recommend running it on\n> huge tables, since it scans the actual data files to get its information.\n> There's a lot of other useful information in that function, such as the\n> number of dead rows.\n>\n>\n> What i understand is that, even if we perform VACUUM ANALYZE\n>> regularly, the free space generated is not filled up.\n>>\n>\n> VACUUM does not actually generate free space. It locates and marks reusable\n> tuples. Any future updates or inserts on that table will be put in those\n> newly reclaimed spots, instead of being bolted onto the end of the table.\n>\n>\n> I see lot of free spaces or free pages in Tables and Indexes. But, I\n>> need to give an exact calculation on how much space will be reclaimed\n>> after VACUUM FULL and RE-INDEXING.\n>>\n>\n> Why? If your database is so desperate for space, VACUUM and REINDEX won't\n> really help you. 
A properly maintained database will still have a certain\n> amount of \"bloat\" equal to the number of rows that change between\n> maintenance intervals. One way or another, that space is going to be used by\n> *something*.\n>\n> It sounds more like you need to tweak your autovacuum settings to be more\n> aggressive if you're seeing significant enough turnover that your tables are\n> bloating significantly. One of our tables, for instance, gets vacuumed more\n> than once per hour because it experiences 1,000% turnover daily.\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email-**disclaimer/<http://www.peak6.com/email-disclaimer/>for terms and conditions related to this email\n>\n\nHello,I was attempting to calculate the actual occupied space by a Table.Below is what i did -I summed up the avg_width of each column of a table from pg_stats, which gives me the average size of a row (277 bytes).\nselect sum(avg_width) as average_row_size from pg_stats where tablename='tablename' average_row_size--------------------------- 277\n(1 row)Calculated the actual occupied space by rows in the table as below -Took the average_row_size * number_of_rows from pg_class\nselect 277*reltuples/1024 as occupied_space from pg_class where relname='tablename'; == 552 KB occupied_space------------------------- 552.6474609375\nCalculated the actual Table size (600 kb)select pg_size_pretty(pg_relation_size('tablename')); \npg_size_pretty---------------- 600 KB(1 row)Calculated the free space with in the table (by scanning the pages - as suggested by Shaun Thomas) -- 14 KB\nSELECT pg_size_pretty(free_space) AS mb_free FROM pgstattuple('tablename'); mb_free--------- 14 KB(1 row)\n600 KB is the size of the table (taken through pg_size_pretty)14 KB is the free space (taken through contrib modules)600+14 = 586 KB -- is the occupied space by normal calculation through contrib modules. This is based on number of pages allocated to the table.\n552 KB is the actual occupied size by the rows (taken by calculating avg row size ). This is based on number of rows with in the pages.586-552 = 34 KB -- is still free some where with in the occupied pages ( calculated through pg_stats and pg_class )\n34 KB is still free within the pages ( each 8K ) which is basically taken as occupied space.This is similar concept which i successfully applied in an other RDBMS Technology to calculate space usage metrics on production.\nThis is all calculated after considering Vacuum and Analyze jobs are executed.Please comment !Sorry if this is too confusing and too long.\nThanksVBOn Wed, Sep 21, 2011 at 6:33 PM, Shaun Thomas <[email protected]> wrote:\nOn 09/20/2011 11:22 AM, Venkat Balaji wrote:\n\n\nPlease help me understand how to calculate free space in Tables and\nIndexes even after vacuuming and analyzing is performed.\n\n\nBesides the query Mark gave you using freespacemap, there's also the pgstattuple contrib module. You'd use it like this:\n\nSELECT pg_size_pretty(free_space) AS mb_free\n FROM pgstattuple('some_table');\n\nQuery must be run as a super-user, and I wouldn't recommend running it on huge tables, since it scans the actual data files to get its information. 
There's a lot of other useful information in that function, such as the number of dead rows.\n\n\n\nWhat i understand is that, even if we perform VACUUM ANALYZE\nregularly, the free space generated is not filled up.\n\n\nVACUUM does not actually generate free space. It locates and marks reusable tuples. Any future updates or inserts on that table will be put in those newly reclaimed spots, instead of being bolted onto the end of the table.\n\n\n\nI see lot of free spaces or free pages in Tables and Indexes. But, I\nneed to give an exact calculation on how much space will be reclaimed\nafter VACUUM FULL and RE-INDEXING.\n\n\nWhy? If your database is so desperate for space, VACUUM and REINDEX won't really help you. A properly maintained database will still have a certain amount of \"bloat\" equal to the number of rows that change between maintenance intervals. One way or another, that space is going to be used by *something*.\n\nIt sounds more like you need to tweak your autovacuum settings to be more aggressive if you're seeing significant enough turnover that your tables are bloating significantly. One of our tables, for instance, gets vacuumed more than once per hour because it experiences 1,000% turnover daily.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email",
"msg_date": "Wed, 5 Oct 2011 14:38:43 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Performance Improvement Strategy"
},
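The same arithmetic can be pulled into a single query; a sketch, keeping the placeholder 'tablename' used in the thread (pgstattuple must be installed and run as a superuser):

SELECT pg_size_pretty(pg_relation_size('tablename'))                       AS table_size,
       (SELECT sum(avg_width) FROM pg_stats WHERE tablename = 'tablename') AS avg_row_bytes,
       (SELECT reltuples FROM pg_class WHERE relname = 'tablename')        AS est_rows,
       pg_size_pretty(free_space)                                          AS free_space
FROM pgstattuple('tablename');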
{
"msg_contents": "On Wed, Oct 5, 2011 at 2:38 PM, Venkat Balaji <[email protected]>wrote:\n\n> Hello,\n>\n> I was attempting to calculate the actual occupied space by a Table.\n>\n> Below is what i did -\n>\n> I summed up the avg_width of each column of a table from pg_stats, which\n> gives me the average size of a row (277 bytes).\n>\n> select* sum(avg_width) as average_row_size from pg_stats *where\n> tablename='tablename'\n>\n> average_row_size\n> ---------------------------\n> 277\n>\n> (1 row)\n>\n> Calculated the actual occupied space by rows in the table as below -\n>\n> *Took the average_row_size * number_of_rows from pg_class*\n>\n> select 277*reltuples/1024 as occupied_space from pg_class where\n> relname='tablename'; == 552 KB\n>\n> occupied_space\n> -------------------------\n> 552.6474609375\n>\n> Calculated the actual Table size (600 kb)\n>\n> select pg_size_pretty(pg_relation_size('tablename'));\n>\n>\n> pg_size_pretty\n> ----------------\n> 600 KB\n>\n> (1 row)\n>\n> Calculated the free space with in the table (by scanning the pages - as\n> suggested by Shaun Thomas) -- 14 KB\n>\n> SELECT pg_size_pretty(free_space) AS mb_free FROM pgstattuple('tablename');\n>\n> mb_free\n> ---------\n> 14 KB\n>\n> (1 row)\n>\n> 600 KB is the size of the table (taken through pg_size_pretty)\n> 14 KB is the free space (taken through contrib modules)\n> 600+14 = 586 KB -- is the occupied space by normal calculation through\n> contrib modules. This is based on number of pages allocated to the table.\n>\n\nIts typo 600 - 14 = 586 KB\n\n552 KB is the actual occupied size by the rows (taken by calculating avg row\n> size ). This is based on number of rows with in the pages.\n> 586-552 = 34 KB -- is still free some where with in the occupied pages (\n> calculated through pg_stats and pg_class )\n> 34 KB is still free within the pages ( each 8K ) which is basically taken\n> as occupied space.\n>\n>\nOne more point to add to this good discussion, each row header will occupy\n24 bytes + 4 bytes pointer on page to tuple.\n\n---\nRegards,\nRaghavendra\nEnterpriseDB Corporation\nBlog: http://raghavt.blogspot.com/\n\nOn Wed, Oct 5, 2011 at 2:38 PM, Venkat Balaji <[email protected]> wrote:\nHello,I was attempting to calculate the actual occupied space by a Table.\n\n\nBelow is what i did -I summed up the avg_width of each column of a table from pg_stats, which gives me the average size of a row (277 bytes).\nselect sum(avg_width) as average_row_size from pg_stats where tablename='tablename' average_row_size--------------------------- 277\n(1 row)Calculated the actual occupied space by rows in the table as below -Took the average_row_size * number_of_rows from pg_class\nselect 277*reltuples/1024 as occupied_space from pg_class where relname='tablename'; == 552 KB occupied_space------------------------- 552.6474609375\nCalculated the actual Table size (600 kb)select pg_size_pretty(pg_relation_size('tablename')); \npg_size_pretty---------------- 600 KB(1 row)Calculated the free space with in the table (by scanning the pages - as suggested by Shaun Thomas) -- 14 KB\nSELECT pg_size_pretty(free_space) AS mb_free FROM pgstattuple('tablename'); mb_free--------- 14 KB(1 row)\n600 KB is the size of the table (taken through pg_size_pretty)14 KB is the free space (taken through contrib modules)600+14 = 586 KB -- is the occupied space by normal calculation through contrib modules. 
This is based on number of pages allocated to the table.\nIts typo 600 - 14 = 586 KB 552 KB is the actual occupied size by the rows (taken by calculating avg row size ). This is based on number of rows with in the pages.\n586-552 = 34 KB -- is still free some where with in the occupied pages ( calculated through pg_stats and pg_class )\n34 KB is still free within the pages ( each 8K ) which is basically taken as occupied space.One more point to add to this good discussion, each row header will occupy 24 bytes + 4 bytes pointer on page to tuple.\n---Regards,RaghavendraEnterpriseDB CorporationBlog: http://raghavt.blogspot.com/",
"msg_date": "Wed, 5 Oct 2011 15:00:59 +0530",
"msg_from": "Raghavendra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
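Folding that per-row overhead into the earlier estimate, as a sketch (the 277-byte average row and the placeholder 'tablename' come from the message above; 24 bytes of header plus a 4-byte line pointer are the figures Raghavendra gives, and alignment padding is still ignored):

SELECT round((277 + 24 + 4) * reltuples / 1024) AS estimated_kb
FROM pg_class
WHERE relname = 'tablename';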
{
"msg_contents": "Venkat Balaji <venkat.balaji 'at' verse.in> writes:\n\n> Hello,\n>\n> I was attempting to calculate the actual occupied space by a Table.\n\nSELECT relname, reltuples, pg_size_pretty(relpages*8*1024) as size FROM pg_class, pg_namespace WHERE pg_namespace.oid = pg_class.relnamespace AND relkind = 'r' AND nspname = 'public' ORDER BY relpages DESC;\n\nrelkind = 'i' for indexes.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "Wed, 05 Oct 2011 11:39:58 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": "Raghavendra <[email protected]> wrote:\n> Venkat Balaji <[email protected]>wrote:\n \n>> [attempt to calculate file space from row layout and number of\n>> rows]\n \n> One more point to add to this good discussion, each row header\n> will occupy 24 bytes + 4 bytes pointer on page to tuple.\n \nNot to mention:\n \nhttp://www.postgresql.org/docs/9.1/interactive/storage-toast.html\n \n-Kevin\n",
"msg_date": "Wed, 05 Oct 2011 11:13:59 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
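A sketch of how to see whether TOAST accounts for bytes the per-row arithmetic misses (pg_table_size and pg_indexes_size exist from 9.0 on; 'tablename' is a placeholder):

SELECT pg_size_pretty(pg_relation_size('tablename'))       AS heap_only,
       pg_size_pretty(pg_table_size('tablename'))          AS heap_plus_toast_and_maps,
       pg_size_pretty(pg_total_relation_size('tablename')) AS including_indexes;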
{
"msg_contents": "On Wed, Sep 21, 2011 at 11:57 AM, Greg Smith <[email protected]> wrote:\n> On 09/21/2011 12:13 PM, Venkat Balaji wrote:\n>>\n>> I as a DBA, suggested to perform VACUUM FULL and RE-INDEXING + ANALYZE to\n>> ensure that IO performance and Indexing performance would be good\n>\n>\n> Read http://wiki.postgresql.org/wiki/VACUUM_FULL before you run VACUUM FULL.\n> You probably don't want to do that. A multi-gigabyte table can easily be\n> unavailable for several hours if you execute VACUUM FULL against it.\n> CLUSTER is almost always faster.\n\nIt used to be that cluster on a very randomly ordered table was much\nslower than doing something like select * into newtable from oldtable\norder by col1, col2; Is that still the case in 9.0/9.1?\n",
"msg_date": "Wed, 5 Oct 2011 11:09:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
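The manual rewrite Scott mentions looks roughly like this (a sketch with placeholder names; indexes, constraints, permissions and dependent views have to be recreated by hand, and the table is unavailable while it runs, just as with CLUSTER):

BEGIN;
LOCK TABLE oldtable IN ACCESS EXCLUSIVE MODE;
CREATE TABLE newtable AS SELECT * FROM oldtable ORDER BY col1, col2;
DROP TABLE oldtable;
ALTER TABLE newtable RENAME TO oldtable;
COMMIT;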
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> It used to be that cluster on a very randomly ordered table was much\n> slower than doing something like select * into newtable from oldtable\n> order by col1, col2; Is that still the case in 9.0/9.1?\n\nFixed in 9.1, per release notes:\n\n\t* Allow CLUSTER to sort the table rather than scanning the index when it seems likely to be cheaper (Leonardo Francalanci)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 05 Oct 2011 14:15:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy "
},
{
"msg_contents": "Thanks Kevin.\n\nAm in 9.1 and tested same scenario, how exactly storage metrics are\ncalculated. Please comment.\n\n*Table Structure:*\npostgres=# \\d test\n Table \"public.test\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n name | text |\n\n*No. of rows:*\n\npostgres=# select relname,reltuples from pg_class where relname='test';\n relname | reltuples\n---------+-----------\n test | 1001\n(1 row)\n\n*Average Row size:*\n\npostgres=# select sum(avg_width) as average_row_size from pg_stats where\ntablename='test';\n average_row_size\n------------------\n 17\n(1 row)\n\n*Occupied Space:*\n\npostgres=# select 17*reltuples/1024 as \"No.of.Row_size * No.of.Rows =\nOccupied_Space\" from pg_class where relname='test';\n No.of.Row_size * No.of.Rows = Occupied_Space\n----------------------------------------------\n 16.6181640625\n\n*Actual Table Size:*\n\npostgres=# select pg_size_pretty(pg_relation_size('test'));\n pg_size_pretty\n----------------\n 48 kB\n(1 row)\n\nor\n\npostgres=# SELECT relname, reltuples, pg_size_pretty(relpages*8*1024) as\nsize FROM pg_class, pg_namespace WHERE pg_namespace.oid =\npg_class.relnamespace AND relkind = 'r' AND nspname = 'public' ORDER BY\nrelpages DESC;\n relname | reltuples | size\n---------+-----------+-------\n test | 1001 | 48 kB\n(1 row)\n\nIts different here:\n\npostgres=# \\dt+ test\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+------+-------+----------+-------+-------------\n public | test | table | postgres | 88 kB |\n(1 row)\n\npostgres=# select pg_size_pretty(pg_total_relation_size('test'));\n pg_size_pretty\n----------------\n 88 kB\n(1 row)\n\n*Free Space:*\n\npostgres=# SELECT pg_size_pretty(free_space) AS mb_free FROM\npgstattuple('test');\n mb_free\n-----------\n 936 bytes\n(1 row)\n\nor\n\npostgres=# select * from pgstattuple('test');\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |\ndead_tuple_len | dead_tuple_percent | free_space | free_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 49152 | 1001 | 41041 | 83.5 | 0 |\n 0 | 0 | 936 | 1.9\n(1 row)\n\n*OS Level Storage:*\n\nbash-4.1$ ll -h 16447*\n-rw------- 1 postgres postgres 48K Oct 2 17:40 16447\n-rw------- 1 postgres postgres 24K Oct 2 17:40 16447_fsm\n-rw------- 1 postgres postgres 8.0K Oct 2 17:40 16447_vm\n\nWhat has occupied in extra 8KB ?\n\npostgres=# select pg_size_pretty(pg_total_relation_size('test'));\n pg_size_pretty\n----------------\n 88 kB\n(1 row)\n\nThanks in advance.\n\n---\nRegards,\nRaghavendra\nEnterpriseDB Corporation\nBlog: http://raghavt.blogspot.com/\n\nThanks Kevin. Am in 9.1 and tested same scenario, how exactly storage metrics are calculated. Please comment.Table Structure:\npostgres=# \\d test Table \"public.test\" Column | Type | Modifiers--------+---------+----------- id | integer | name | text |\n\nNo. 
of rows:postgres=# select relname,reltuples from pg_class where relname='test'; relname | reltuples---------+----------- test | 1001\n(1 row)Average Row size:postgres=# select sum(avg_width) as average_row_size from pg_stats where tablename='test'; average_row_size\n------------------ 17(1 row)Occupied Space:postgres=# select 17*reltuples/1024 as \"No.of.Row_size * No.of.Rows = Occupied_Space\" from pg_class where relname='test';\n No.of.Row_size * No.of.Rows = Occupied_Space---------------------------------------------- 16.6181640625Actual Table Size:\npostgres=# select pg_size_pretty(pg_relation_size('test')); pg_size_pretty---------------- 48 kB(1 row)or \npostgres=# SELECT relname, reltuples, pg_size_pretty(relpages*8*1024) as size FROM pg_class, pg_namespace WHERE pg_namespace.oid = pg_class.relnamespace AND relkind = 'r' AND nspname = 'public' ORDER BY relpages DESC;\n relname | reltuples | size---------+-----------+------- test | 1001 | 48 kB(1 row)Its different here:\npostgres=# \\dt+ test List of relations\n Schema | Name | Type | Owner | Size | Description--------+------+-------+----------+-------+-------------\n public | test | table | postgres | 88 kB |(1 row)\npostgres=# select pg_size_pretty(pg_total_relation_size('test')); pg_size_pretty\n---------------- 88 kB(1 row)\nFree Space:postgres=# SELECT pg_size_pretty(free_space) AS mb_free FROM pgstattuple('test'); mb_free----------- 936 bytes\n(1 row)orpostgres=# select * from pgstattuple('test'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 49152 | 1001 | 41041 | 83.5 | 0 | 0 | 0 | 936 | 1.9\n(1 row)OS Level Storage:bash-4.1$ ll -h 16447*-rw------- 1 postgres postgres 48K Oct 2 17:40 16447\n-rw------- 1 postgres postgres 24K Oct 2 17:40 16447_fsm-rw------- 1 postgres postgres 8.0K Oct 2 17:40 16447_vmWhat has occupied in extra 8KB ?\npostgres=# select pg_size_pretty(pg_total_relation_size('test')); pg_size_pretty\n---------------- 88 kB(1 row)Thanks in advance.\n\n---Regards,RaghavendraEnterpriseDB CorporationBlog: http://raghavt.blogspot.com/",
"msg_date": "Thu, 6 Oct 2011 15:01:23 +0530",
"msg_from": "Raghavendra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
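One way to break that 88 kB down is per fork plus TOAST and indexes; a sketch using the 9.0+ size functions against the 'test' table from the example:

SELECT pg_size_pretty(pg_relation_size('test', 'main')) AS main_fork,
       pg_size_pretty(pg_relation_size('test', 'fsm'))  AS free_space_map,
       pg_size_pretty(pg_relation_size('test', 'vm'))   AS visibility_map,
       pg_size_pretty(pg_table_size('test')
                      - pg_relation_size('test', 'main')
                      - pg_relation_size('test', 'fsm')
                      - pg_relation_size('test', 'vm')) AS toast_part,
       pg_size_pretty(pg_indexes_size('test'))          AS indexes,
       pg_size_pretty(pg_total_relation_size('test'))   AS total;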
{
"msg_contents": "On Wed, Oct 5, 2011 at 12:15 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> It used to be that cluster on a very randomly ordered table was much\n>> slower than doing something like select * into newtable from oldtable\n>> order by col1, col2; Is that still the case in 9.0/9.1?\n>\n> Fixed in 9.1, per release notes:\n>\n> * Allow CLUSTER to sort the table rather than scanning the index when it seems likely to be cheaper (Leonardo Francalanci)\n\nLooks like I owe Leonardo Francalanci a pizza.\n",
"msg_date": "Thu, 6 Oct 2011 13:20:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
},
{
"msg_contents": ">> * Allow CLUSTER to sort the table rather than scanning the index \n\n> when it seems likely to be cheaper (Leonardo Francalanci)\n> \n> Looks like I owe Leonardo Francalanci a pizza.\n\n\n\nWell, the patch started from a work by Gregory Stark, and Tom fixed\na nasty bug; but I'll take a slice ;)\n\n\nLeonardo\n\n",
"msg_date": "Mon, 10 Oct 2011 09:04:37 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Performance Improvement Strategy"
}
] |
[
{
"msg_contents": "[please CC, I'm not on the list]\n\nHi all,\n\nwe have one table that basically uses Postgres as a key-value store.\n\n Table \"public.termindex\"\nColumn | Type | Modifiers\n-------------+---------+-----------\n subject_id | integer |\n indextype | integer |\n cid | integer |\n\nThis is with Postgres 9.0.\n\nThe table has 96 million rows and an index on each column. It contains\nno NULLs and has no triggers.\n\nsubject_id has about 2m distinct values, cid about 200k, and indextype only six.\n\nThe table is *read-only* after the initial load.\n\nThe query we want to do is (with example values):\n\nselect t.cid, count(distinct t1.subject_id)\nfrom termindex t1, termindex t2\nwhere t1.cid=20642 and t1.indextype=636 and t2.indextype=636 and\nt2.subject_id=t1.subject_id\ngroup by t2.cid;\n\nThe goal is to select all subjects matching certain criteria, and for\nall cids (concept ids) that they have, determine the distribution over\nthe whole population, for a given indextype.\n\nFor instance, if the criteria are \"cid=1 and indextype=636\", let's say\nsubjects 1,2,3 match. The indextype we want the distribution for is\n999. So we get all cids where subject_id in (1,2,3) and indextype=999,\ngroup these by cid with count(subject_id) per cid. The result would\nlook like\n\ncid | count\n-----+-------\n 12 | 1\n 13 | 28\n...\n\nAnother way of asking this query is with an inner select:\n\nselect cid, count(subject_id) from termindex where subject_id in\n(select subject_id from termindex where cid=18869 and indextype=636)\nand indextype=636 group by cid;\n\nOn this 96m rows table, the join query takes about 50 seconds. EXPLAIN\nANALYZE output below. The inner select query takes much longer.\n\nPasting the explain analyze output into\nhttp://explain.depesz.com/s/Yr4 we see that Postgres is doing an\nexternal sort using about 150MB of data.\n\nNow, we're not Postgres experts, or even great at relational design.\nAre there better ways of doing that query, or designing the table? 
For\nthe latter we do have a number of constraints, though, that I don't\nwant to outline now because this mail is already long enough.\n\nThanks in advance!\nThomas\n\n\nQUERY PLAN\n---------------------------------------------------------------------\n GroupAggregate (cost=446092576.21..459395622.23 rows=200 width=8)\n(actual time=18927.047..19001.072 rows=2562 loops=1)\n -> Sort (cost=446092576.21..450526924.05 rows=1773739136 width=8)\n(actual time=18927.025..18952.726 rows=119429 loops=1)\n Sort Key: t.cid\n Sort Method: external merge Disk: 2088kB\n -> Merge Join (cost=1480064.68..28107420.08 rows=1773739136\nwidth=8) (actual time=14300.547..18836.386 rows=119429 loops=1)\n Merge Cond: (t1.subject_id = t.subject_id)\n -> Sort (cost=44173.64..44278.93 rows=42116 width=4)\n(actual time=30.148..33.965 rows=14466 loops=1)\n Sort Key: t1.subject_id\n Sort Method: external merge Disk: 200kB\n -> Bitmap Heap Scan on mcindex t1\n(cost=791.57..40361.19 rows=42116 width=4) (actual time=3.901..18.655\nrows=14466 loops=1)\n Recheck Cond: (cid = 20642)\n -> Bitmap Index Scan on mc2\n(cost=0.00..781.04 rows=42116 width=0) (actual time=3.319..3.319\nrows=14466 loops=1)\n Index Cond: (cid = 20642)\n -> Materialize (cost=1435891.04..1478006.60\nrows=8423113 width=8) (actual time=14270.211..17554.299 rows=8423170\nloops=1)\n -> Sort (cost=1435891.04..1456948.82\nrows=8423113 width=8) (actual time=14270.202..16346.835 rows=8423094\nloops=1)\n Sort Key: t.subject_id\n Sort Method: external merge Disk: 148232kB\n -> Seq Scan on mcindex t\n(cost=0.00..121502.13 rows=8423113 width=8) (actual\ntime=0.012..1381.282 rows=8423113 loops=1)\n Total runtime: 22095.280 ms\n(19 rows)\n",
"msg_date": "Tue, 20 Sep 2011 19:43:27 +0200",
"msg_from": "Thomas Kappler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query with self-join, group by, 100m rows"
},
{
"msg_contents": "On Tue, Sep 20, 2011 at 7:43 PM, Thomas Kappler <[email protected]> wrote:\n> [please CC, I'm not on the list]\n>\n> Hi all,\n>\n> we have one table that basically uses Postgres as a key-value store.\n>\n> Table \"public.termindex\"\n> Column | Type | Modifiers\n> -------------+---------+-----------\n> subject_id | integer |\n> indextype | integer |\n> cid | integer |\n>\n> This is with Postgres 9.0.\n>\n> The table has 96 million rows and an index on each column. It contains\n> no NULLs and has no triggers.\n>\n> subject_id has about 2m distinct values, cid about 200k, and indextype only six.\n>\n> The table is *read-only* after the initial load.\n>\n> The query we want to do is (with example values):\n>\n> select t.cid, count(distinct t1.subject_id)\n> from termindex t1, termindex t2\n> where t1.cid=20642 and t1.indextype=636 and t2.indextype=636 and\n> t2.subject_id=t1.subject_id\n> group by t2.cid;\n\nDo you have any multi column indexes? From the text of your query it\nseems it could benefit from these two indexes:\n\n(cid, indextype)\n(subject_id, indextype)\n\nI do not know whether PostgreSQL can avoid the table if you make the first index\n(cid, indextype, subject_id)\nin other words: append all the columns needed for the join. In theory\nthe query could then be satisfied from the indexes.\n\n> Pasting the explain analyze output into\n> http://explain.depesz.com/s/Yr4 we see that Postgres is doing an\n> external sort using about 150MB of data.\n>\n> Now, we're not Postgres experts, or even great at relational design.\n> Are there better ways of doing that query, or designing the table? For\n> the latter we do have a number of constraints, though, that I don't\n> want to outline now because this mail is already long enough.\n\nThose are probably important to know.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 21 Sep 2011 16:51:26 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with self-join, group by, 100m rows"
},
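Written out as DDL, the indexes Robert suggests would be something like this (the index names are made up here; note that true index-only scans only arrive in 9.2, so on 9.0 the wider variant mainly gives the planner a narrower index to drive the join):

CREATE INDEX termindex_cid_type_idx     ON termindex (cid, indextype);
CREATE INDEX termindex_subject_type_idx ON termindex (subject_id, indextype);
-- the wider variant Robert wonders about:
-- CREATE INDEX termindex_cid_type_subject_idx ON termindex (cid, indextype, subject_id);
ANALYZE termindex;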
{
"msg_contents": "Thomas Kappler <[email protected]> writes:\n> The query we want to do is (with example values):\n\n> select t.cid, count(distinct t1.subject_id)\n> from termindex t1, termindex t2\n> where t1.cid=20642 and t1.indextype=636 and t2.indextype=636 and\n> t2.subject_id=t1.subject_id\n> group by t2.cid;\n\nThe EXPLAIN output you provided doesn't appear to match this query (in\nparticular, I don't see the indextype restrictions being checked\nanyplace in the plan).\n\nOne quick-and-dirty thing that might help is to raise work_mem enough\nso that (1) you get a hash aggregation not a sort/group one, and (2)\nif there are still any sorts being done, they don't spill to disk.\nThat will probably be a higher number than would be prudent to install\nas a global setting, but you could SET it locally in the current\nsession before issuing the expensive query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Sep 2011 11:02:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with self-join, group by, 100m rows "
}
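A sketch of Tom's suggestion: raise work_mem only around the expensive query, inside a transaction, so the global setting stays prudent (the 256MB figure is a placeholder, not a recommendation):

BEGIN;
SET LOCAL work_mem = '256MB';   -- reverts automatically at COMMIT or ROLLBACK
-- ... run the aggregate query from the first message here ...
COMMIT;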
] |
[
{
"msg_contents": "Hi all,\n\nIt looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.\n\nI've been following the mailing list for a few years and I've seen this topic come up a bit. I've just done a quick google and I'm not quite sure how to fix this short of manually substituting my query parameters in to a query string -- avoiding prepared statements… An alternative might be to re-write the query and hope that the planner's general plan is a bit closer to optimal… but are these my only options? \n\nI notice that the non-prepared-statement (both below my sig) plan estimates 5500 rows output. I think that's out by a factor of up to 100, suggesting that I might want to increase my statistics and re-analyse… but as I understand the prepared-statement problem, this probably won't help here. Correct?\n\nWe've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? Upgrading to 9.0 might be viable for us.\n\nAny tips would be appreciated,\n\n--Royce\n\n\ntest=# PREPARE test (integer) as \n select \n sid, \n role, \n starttime::date, \n nasid, importer, \n max(eventbinding.biid) as biid, \n sum(bytesin) as bytesin, \n sum(bytesout) as bytesout, \n sum(seconds) as seconds, \n sum(coalesce(pages, 0)) as pages, \n sum(coalesce(count, 0)) as count, \n sum(coalesce(rate, 0.0)) as rate, \n sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, \n sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, \n count(*) as entries \n from billingItem, eventBinding , fqun \n where eventBinding.biid > $1 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid \n group by sid, starttime::date, nasid, importer, role;\nPREPARE\ntest=# explain EXECUTE test(57205899);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=12338998.78..13770049.38 rows=18465169 width=148)\n -> Sort (cost=12338998.78..12385161.70 rows=18465169 width=148)\n Sort Key: fqun.sid, ((billingitem.starttime)::date), billingitem.nasid, billingitem.importer, eventbinding.role\n -> Hash Join (cost=1498473.48..7333418.55 rows=18465169 width=148)\n Hash Cond: (eventbinding.uid = fqun.uid)\n -> Hash Join (cost=1496916.06..6916394.83 rows=18465169 width=148)\n Hash Cond: (billingitem.biid = eventbinding.biid)\n -> Seq Scan on billingitem (cost=0.00..1433087.88 rows=56222688 width=142)\n -> Hash (cost=1175939.45..1175939.45 rows=18465169 width=10)\n -> Bitmap Heap Scan on eventbinding (cost=427409.84..1175939.45 rows=18465169 width=10)\n Recheck Cond: (biid > $1)\n -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..422793.55 rows=18465169 width=0)\n Index Cond: (biid > $1)\n -> Hash (cost=943.85..943.85 rows=49085 width=8)\n -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)\n(15 rows)\n\n\n\n\nAs a query on the psql command line:\n\ntest=# explain \n select \n sid, \n role, \n starttime::date, \n nasid, \n importer, \n max(eventbinding.biid) as biid, \n sum(bytesin) as bytesin, \n sum(bytesout) as bytesout, \n sum(seconds) as seconds, \n sum(coalesce(pages, 0)) as pages, \n sum(coalesce(count, 0)) as count, \n sum(coalesce(rate, 0.0)) as rate, \n sum(coalesce(bytesSentRate, 
0.0)) as bytesSentRate, \n sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, \n count(*) as entries \n from billingItem, eventBinding , fqun \n where eventBinding.biid > 57205899 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid \n group by sid, starttime::date, nasid, importer, role;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=102496.80..102704.55 rows=5540 width=148)\n -> Hash Join (cost=1697.13..102289.05 rows=5540 width=148)\n Hash Cond: (eventbinding.uid = fqun.uid)\n -> Nested Loop (cost=139.71..100606.99 rows=5540 width=148)\n -> Bitmap Heap Scan on eventbinding (cost=139.71..20547.20 rows=5540 width=10)\n Recheck Cond: (biid > 57205899)\n -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..138.33 rows=5540 width=0)\n Index Cond: (biid > 57205899)\n -> Index Scan using billingitem_db52003_pkey on billingitem (cost=0.00..14.44 rows=1 width=142)\n Index Cond: (billingitem.biid = eventbinding.biid)\n -> Hash (cost=943.85..943.85 rows=49085 width=8)\n -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)\n(12 rows)\n\n\nHi all,It looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.I've been following the mailing list for a few years and I've seen this topic come up a bit. I've just done a quick google and I'm not quite sure how to fix this short of manually substituting my query parameters in to a query string -- avoiding prepared statements… An alternative might be to re-write the query and hope that the planner's general plan is a bit closer to optimal… but are these my only options? I notice that the non-prepared-statement (both below my sig) plan estimates 5500 rows output. I think that's out by a factor of up to 100, suggesting that I might want to increase my statistics and re-analyse… but as I understand the prepared-statement problem, this probably won't help here. Correct?We've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? 
Upgrading to 9.0 might be viable for us.Any tips would be appreciated,--Royce\ntest=# PREPARE test (integer) as select sid, role, starttime::date, nasid, importer, max(eventbinding.biid) as biid, sum(bytesin) as bytesin, sum(bytesout) as bytesout, sum(seconds) as seconds, sum(coalesce(pages, 0)) as pages, sum(coalesce(count, 0)) as count, sum(coalesce(rate, 0.0)) as rate, sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, count(*) as entries from billingItem, eventBinding , fqun where eventBinding.biid > $1 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid group by sid, starttime::date, nasid, importer, role;PREPAREtest=# explain EXECUTE test(57205899); QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate (cost=12338998.78..13770049.38 rows=18465169 width=148) -> Sort (cost=12338998.78..12385161.70 rows=18465169 width=148) Sort Key: fqun.sid, ((billingitem.starttime)::date), billingitem.nasid, billingitem.importer, eventbinding.role -> Hash Join (cost=1498473.48..7333418.55 rows=18465169 width=148) Hash Cond: (eventbinding.uid = fqun.uid) -> Hash Join (cost=1496916.06..6916394.83 rows=18465169 width=148) Hash Cond: (billingitem.biid = eventbinding.biid) -> Seq Scan on billingitem (cost=0.00..1433087.88 rows=56222688 width=142) -> Hash (cost=1175939.45..1175939.45 rows=18465169 width=10) -> Bitmap Heap Scan on eventbinding (cost=427409.84..1175939.45 rows=18465169 width=10) Recheck Cond: (biid > $1) -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..422793.55 rows=18465169 width=0) Index Cond: (biid > $1) -> Hash (cost=943.85..943.85 rows=49085 width=8) -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)(15 rows)As a query on the psql command line:test=# explain select sid, role, starttime::date, nasid, importer, max(eventbinding.biid) as biid, sum(bytesin) as bytesin, sum(bytesout) as bytesout, sum(seconds) as seconds, sum(coalesce(pages, 0)) as pages, sum(coalesce(count, 0)) as count, sum(coalesce(rate, 0.0)) as rate, sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, count(*) as entries from billingItem, eventBinding , fqun where eventBinding.biid > 57205899 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid group by sid, starttime::date, nasid, importer, role; QUERY PLAN -------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=102496.80..102704.55 rows=5540 width=148) -> Hash Join (cost=1697.13..102289.05 rows=5540 width=148) Hash Cond: (eventbinding.uid = fqun.uid) -> Nested Loop (cost=139.71..100606.99 rows=5540 width=148) -> Bitmap Heap Scan on eventbinding (cost=139.71..20547.20 rows=5540 width=10) Recheck Cond: (biid > 57205899) -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..138.33 rows=5540 width=0) Index Cond: (biid > 57205899) -> Index Scan using billingitem_db52003_pkey on billingitem (cost=0.00..14.44 rows=1 width=142) Index Cond: (billingitem.biid = eventbinding.biid) -> Hash (cost=943.85..943.85 rows=49085 width=8) -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)(12 rows)",
"msg_date": "Wed, 21 Sep 2011 08:44:59 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prepared statements and suboptimal plans"
},
{
"msg_contents": "one thing, in SUM() , you don't have to coalesce. Consider following example:\n\nfoo=# create table bar(id serial primary key, a float);\nNOTICE: CREATE TABLE will create implicit sequence \"bar_id_seq\" for\nserial column \"bar.id\"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"bar_pkey\" for table \"bar\"\nCREATE TABLE\nTime: 666.094 ms\nfoo=# insert into bar(a) select random()*random()*random() from\ngenerate_series(1, 1000) x;\nINSERT 0 1000\nTime: 496.451 ms\nfoo=# update bar set a = NULL where random() < 0.1;\nUPDATE 97\nTime: 150.599 ms\nfoo=# select sum(a) from bar;\n sum\n------------------\n 108.757220804033\n(1 row)\n\nTime: 277.227 ms\nfoo=# select sum(coalesce(a, 0.0)) from bar;\n sum\n------------------\n 108.757220804033\n(1 row)\n\nTime: 0.709 ms\n\n\nBut that obviously isn't going to improve it a lot.\n",
"msg_date": "Wed, 21 Sep 2011 15:49:54 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans"
},
{
"msg_contents": "On Tue, Sep 20, 2011 at 5:44 PM, Royce Ausburn <[email protected]> wrote:\n> Hi all,\n> It looks like I've been hit with this well known issue. I have\n> a complicated query that is intended to run every few minutes, I'm using\n> JDBC's Connection.prepareStatement() mostly for nice parameterisation, but\n> postgres produces a suboptimal plan due to its lack of information when the\n> statement is prepared.\n\nPostgres has gotten incrementally smarter about this, but at the end\nof the day it's just working under what the jdbc driver is telling it\nwhat to do. One thing you can do is disable sever-side prepared\nstatements with the prepareThreshold=0 decoration to the jdbc url give\nthat a whirl and see how it turns out.\n\nmerlin\n",
"msg_date": "Wed, 21 Sep 2011 11:39:35 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans"
},
{
"msg_contents": "Sorry all - this was a duplicate from another of my addresses =( Thanks to all that have helped out on both threads.\n\n\n\n\n\nOn 21/09/2011, at 8:44 AM, Royce Ausburn wrote:\n\n> Hi all,\n> \n> It looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.\n> \n> I've been following the mailing list for a few years and I've seen this topic come up a bit. I've just done a quick google and I'm not quite sure how to fix this short of manually substituting my query parameters in to a query string -- avoiding prepared statements… An alternative might be to re-write the query and hope that the planner's general plan is a bit closer to optimal… but are these my only options? \n> \n> I notice that the non-prepared-statement (both below my sig) plan estimates 5500 rows output. I think that's out by a factor of up to 100, suggesting that I might want to increase my statistics and re-analyse… but as I understand the prepared-statement problem, this probably won't help here. Correct?\n> \n> We've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? Upgrading to 9.0 might be viable for us.\n> \n> Any tips would be appreciated,\n> \n> --Royce\n> \n> \n> test=# PREPARE test (integer) as \n> select \n> sid, \n> role, \n> starttime::date, \n> nasid, importer, \n> max(eventbinding.biid) as biid, \n> sum(bytesin) as bytesin, \n> sum(bytesout) as bytesout, \n> sum(seconds) as seconds, \n> sum(coalesce(pages, 0)) as pages, \n> sum(coalesce(count, 0)) as count, \n> sum(coalesce(rate, 0.0)) as rate, \n> sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, \n> sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, \n> count(*) as entries \n> from billingItem, eventBinding , fqun \n> where eventBinding.biid > $1 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid \n> group by sid, starttime::date, nasid, importer, role;\n> PREPARE\n> test=# explain EXECUTE test(57205899);\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=12338998.78..13770049.38 rows=18465169 width=148)\n> -> Sort (cost=12338998.78..12385161.70 rows=18465169 width=148)\n> Sort Key: fqun.sid, ((billingitem.starttime)::date), billingitem.nasid, billingitem.importer, eventbinding.role\n> -> Hash Join (cost=1498473.48..7333418.55 rows=18465169 width=148)\n> Hash Cond: (eventbinding.uid = fqun.uid)\n> -> Hash Join (cost=1496916.06..6916394.83 rows=18465169 width=148)\n> Hash Cond: (billingitem.biid = eventbinding.biid)\n> -> Seq Scan on billingitem (cost=0.00..1433087.88 rows=56222688 width=142)\n> -> Hash (cost=1175939.45..1175939.45 rows=18465169 width=10)\n> -> Bitmap Heap Scan on eventbinding (cost=427409.84..1175939.45 rows=18465169 width=10)\n> Recheck Cond: (biid > $1)\n> -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..422793.55 rows=18465169 width=0)\n> Index Cond: (biid > $1)\n> -> Hash (cost=943.85..943.85 rows=49085 width=8)\n> -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)\n> (15 rows)\n> \n> \n> \n> \n> As a query on the psql command line:\n> \n> test=# explain \n> select \n> sid, \n> role, \n> starttime::date, \n> nasid, \n> 
importer, \n> max(eventbinding.biid) as biid, \n> sum(bytesin) as bytesin, \n> sum(bytesout) as bytesout, \n> sum(seconds) as seconds, \n> sum(coalesce(pages, 0)) as pages, \n> sum(coalesce(count, 0)) as count, \n> sum(coalesce(rate, 0.0)) as rate, \n> sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, \n> sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, \n> count(*) as entries \n> from billingItem, eventBinding , fqun \n> where eventBinding.biid > 57205899 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid \n> group by sid, starttime::date, nasid, importer, role;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=102496.80..102704.55 rows=5540 width=148)\n> -> Hash Join (cost=1697.13..102289.05 rows=5540 width=148)\n> Hash Cond: (eventbinding.uid = fqun.uid)\n> -> Nested Loop (cost=139.71..100606.99 rows=5540 width=148)\n> -> Bitmap Heap Scan on eventbinding (cost=139.71..20547.20 rows=5540 width=10)\n> Recheck Cond: (biid > 57205899)\n> -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..138.33 rows=5540 width=0)\n> Index Cond: (biid > 57205899)\n> -> Index Scan using billingitem_db52003_pkey on billingitem (cost=0.00..14.44 rows=1 width=142)\n> Index Cond: (billingitem.biid = eventbinding.biid)\n> -> Hash (cost=943.85..943.85 rows=49085 width=8)\n> -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)\n> (12 rows)\n> \n\n\n\nSorry all - this was a duplicate from another of my addresses =( Thanks to all that have helped out on both threads.On 21/09/2011, at 8:44 AM, Royce Ausburn wrote:Hi all,It looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.I've been following the mailing list for a few years and I've seen this topic come up a bit. I've just done a quick google and I'm not quite sure how to fix this short of manually substituting my query parameters in to a query string -- avoiding prepared statements… An alternative might be to re-write the query and hope that the planner's general plan is a bit closer to optimal… but are these my only options? I notice that the non-prepared-statement (both below my sig) plan estimates 5500 rows output. I think that's out by a factor of up to 100, suggesting that I might want to increase my statistics and re-analyse… but as I understand the prepared-statement problem, this probably won't help here. Correct?We've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? 
Upgrading to 9.0 might be viable for us.Any tips would be appreciated,--Royce\ntest=# PREPARE test (integer) as select sid, role, starttime::date, nasid, importer, max(eventbinding.biid) as biid, sum(bytesin) as bytesin, sum(bytesout) as bytesout, sum(seconds) as seconds, sum(coalesce(pages, 0)) as pages, sum(coalesce(count, 0)) as count, sum(coalesce(rate, 0.0)) as rate, sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, count(*) as entries from billingItem, eventBinding , fqun where eventBinding.biid > $1 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid group by sid, starttime::date, nasid, importer, role;PREPAREtest=# explain EXECUTE test(57205899); QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate (cost=12338998.78..13770049.38 rows=18465169 width=148) -> Sort (cost=12338998.78..12385161.70 rows=18465169 width=148) Sort Key: fqun.sid, ((billingitem.starttime)::date), billingitem.nasid, billingitem.importer, eventbinding.role -> Hash Join (cost=1498473.48..7333418.55 rows=18465169 width=148) Hash Cond: (eventbinding.uid = fqun.uid) -> Hash Join (cost=1496916.06..6916394.83 rows=18465169 width=148) Hash Cond: (billingitem.biid = eventbinding.biid) -> Seq Scan on billingitem (cost=0.00..1433087.88 rows=56222688 width=142) -> Hash (cost=1175939.45..1175939.45 rows=18465169 width=10) -> Bitmap Heap Scan on eventbinding (cost=427409.84..1175939.45 rows=18465169 width=10) Recheck Cond: (biid > $1) -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..422793.55 rows=18465169 width=0) Index Cond: (biid > $1) -> Hash (cost=943.85..943.85 rows=49085 width=8) -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)(15 rows)As a query on the psql command line:test=# explain select sid, role, starttime::date, nasid, importer, max(eventbinding.biid) as biid, sum(bytesin) as bytesin, sum(bytesout) as bytesout, sum(seconds) as seconds, sum(coalesce(pages, 0)) as pages, sum(coalesce(count, 0)) as count, sum(coalesce(rate, 0.0)) as rate, sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, count(*) as entries from billingItem, eventBinding , fqun where eventBinding.biid > 57205899 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid group by sid, starttime::date, nasid, importer, role; QUERY PLAN -------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=102496.80..102704.55 rows=5540 width=148) -> Hash Join (cost=1697.13..102289.05 rows=5540 width=148) Hash Cond: (eventbinding.uid = fqun.uid) -> Nested Loop (cost=139.71..100606.99 rows=5540 width=148) -> Bitmap Heap Scan on eventbinding (cost=139.71..20547.20 rows=5540 width=10) Recheck Cond: (biid > 57205899) -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..138.33 rows=5540 width=0) Index Cond: (biid > 57205899) -> Index Scan using billingitem_db52003_pkey on billingitem (cost=0.00..14.44 rows=1 width=142) Index Cond: (billingitem.biid = eventbinding.biid) -> Hash (cost=943.85..943.85 rows=49085 width=8) -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)(12 rows)",
"msg_date": "Thu, 22 Sep 2011 03:08:04 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prepared statements and suboptimal plans"
}
] |
[
{
"msg_contents": "Hi all,\n\nIt looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.\n\nI've been following the mailing list for a few years and I've seen this topic come up a bit. I've just done a quick google and I'm not quite sure how to fix this short of manually substituting my query parameters in to a query string -- avoiding prepared statements… An alternative might be to re-write the query and hope that the planner's general plan is a bit closer to optimal… but are these my only options? \n\nI notice that the non-prepared-statement (both below my sig) plan estimates 5500 rows output. I think that's out by a factor of up to 100, suggesting that I might want to increase my statistics and re-analyse… but as I understand the prepared-statement problem, this probably won't help here. Correct?\n\nWe've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? Upgrading to 9.0 might be viable for us.\n\nAny tips would be appreciated,\n\n--Royce\n\n\ntest=# PREPARE test (integer) as \n select \n sid, \n role, \n starttime::date, \n nasid, importer, \n max(eventbinding.biid) as biid, \n sum(bytesin) as bytesin, \n sum(bytesout) as bytesout, \n sum(seconds) as seconds, \n sum(coalesce(pages, 0)) as pages, \n sum(coalesce(count, 0)) as count, \n sum(coalesce(rate, 0.0)) as rate, \n sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, \n sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, \n count(*) as entries \n from billingItem, eventBinding , fqun \n where eventBinding.biid > $1 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid \n group by sid, starttime::date, nasid, importer, role;\nPREPARE\ntest=# explain EXECUTE test(57205899);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=12338998.78..13770049.38 rows=18465169 width=148)\n -> Sort (cost=12338998.78..12385161.70 rows=18465169 width=148)\n Sort Key: fqun.sid, ((billingitem.starttime)::date), billingitem.nasid, billingitem.importer, eventbinding.role\n -> Hash Join (cost=1498473.48..7333418.55 rows=18465169 width=148)\n Hash Cond: (eventbinding.uid = fqun.uid)\n -> Hash Join (cost=1496916.06..6916394.83 rows=18465169 width=148)\n Hash Cond: (billingitem.biid = eventbinding.biid)\n -> Seq Scan on billingitem (cost=0.00..1433087.88 rows=56222688 width=142)\n -> Hash (cost=1175939.45..1175939.45 rows=18465169 width=10)\n -> Bitmap Heap Scan on eventbinding (cost=427409.84..1175939.45 rows=18465169 width=10)\n Recheck Cond: (biid > $1)\n -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..422793.55 rows=18465169 width=0)\n Index Cond: (biid > $1)\n -> Hash (cost=943.85..943.85 rows=49085 width=8)\n -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)\n(15 rows)\n\n\n\n\nAs a query on the psql command line:\n\ntest=# explain \n select \n sid, \n role, \n starttime::date, \n nasid, \n importer, \n max(eventbinding.biid) as biid, \n sum(bytesin) as bytesin, \n sum(bytesout) as bytesout, \n sum(seconds) as seconds, \n sum(coalesce(pages, 0)) as pages, \n sum(coalesce(count, 0)) as count, \n sum(coalesce(rate, 0.0)) as rate, \n sum(coalesce(bytesSentRate, 
0.0)) as bytesSentRate, \n sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, \n count(*) as entries \n from billingItem, eventBinding , fqun \n where eventBinding.biid > 57205899 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid \n group by sid, starttime::date, nasid, importer, role;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=102496.80..102704.55 rows=5540 width=148)\n -> Hash Join (cost=1697.13..102289.05 rows=5540 width=148)\n Hash Cond: (eventbinding.uid = fqun.uid)\n -> Nested Loop (cost=139.71..100606.99 rows=5540 width=148)\n -> Bitmap Heap Scan on eventbinding (cost=139.71..20547.20 rows=5540 width=10)\n Recheck Cond: (biid > 57205899)\n -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..138.33 rows=5540 width=0)\n Index Cond: (biid > 57205899)\n -> Index Scan using billingitem_db52003_pkey on billingitem (cost=0.00..14.44 rows=1 width=142)\n Index Cond: (billingitem.biid = eventbinding.biid)\n -> Hash (cost=943.85..943.85 rows=49085 width=8)\n -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)\n(12 rows)\n\n\nHi all,It looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.I've been following the mailing list for a few years and I've seen this topic come up a bit. I've just done a quick google and I'm not quite sure how to fix this short of manually substituting my query parameters in to a query string -- avoiding prepared statements… An alternative might be to re-write the query and hope that the planner's general plan is a bit closer to optimal… but are these my only options? I notice that the non-prepared-statement (both below my sig) plan estimates 5500 rows output. I think that's out by a factor of up to 100, suggesting that I might want to increase my statistics and re-analyse… but as I understand the prepared-statement problem, this probably won't help here. Correct?We've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? 
Upgrading to 9.0 might be viable for us.Any tips would be appreciated,--Royce\ntest=# PREPARE test (integer) as select sid, role, starttime::date, nasid, importer, max(eventbinding.biid) as biid, sum(bytesin) as bytesin, sum(bytesout) as bytesout, sum(seconds) as seconds, sum(coalesce(pages, 0)) as pages, sum(coalesce(count, 0)) as count, sum(coalesce(rate, 0.0)) as rate, sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, count(*) as entries from billingItem, eventBinding , fqun where eventBinding.biid > $1 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid group by sid, starttime::date, nasid, importer, role;PREPAREtest=# explain EXECUTE test(57205899); QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate (cost=12338998.78..13770049.38 rows=18465169 width=148) -> Sort (cost=12338998.78..12385161.70 rows=18465169 width=148) Sort Key: fqun.sid, ((billingitem.starttime)::date), billingitem.nasid, billingitem.importer, eventbinding.role -> Hash Join (cost=1498473.48..7333418.55 rows=18465169 width=148) Hash Cond: (eventbinding.uid = fqun.uid) -> Hash Join (cost=1496916.06..6916394.83 rows=18465169 width=148) Hash Cond: (billingitem.biid = eventbinding.biid) -> Seq Scan on billingitem (cost=0.00..1433087.88 rows=56222688 width=142) -> Hash (cost=1175939.45..1175939.45 rows=18465169 width=10) -> Bitmap Heap Scan on eventbinding (cost=427409.84..1175939.45 rows=18465169 width=10) Recheck Cond: (biid > $1) -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..422793.55 rows=18465169 width=0) Index Cond: (biid > $1) -> Hash (cost=943.85..943.85 rows=49085 width=8) -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)(15 rows)As a query on the psql command line:test=# explain select sid, role, starttime::date, nasid, importer, max(eventbinding.biid) as biid, sum(bytesin) as bytesin, sum(bytesout) as bytesout, sum(seconds) as seconds, sum(coalesce(pages, 0)) as pages, sum(coalesce(count, 0)) as count, sum(coalesce(rate, 0.0)) as rate, sum(coalesce(bytesSentRate, 0.0)) as bytesSentRate, sum(coalesce(bytesReceivedRate, 0.0)) as bytesReceivedRate, count(*) as entries from billingItem, eventBinding , fqun where eventBinding.biid > 57205899 and eventBinding.biid = billingItem.biid and fqun.uid = eventBinding.uid group by sid, starttime::date, nasid, importer, role; QUERY PLAN -------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=102496.80..102704.55 rows=5540 width=148) -> Hash Join (cost=1697.13..102289.05 rows=5540 width=148) Hash Cond: (eventbinding.uid = fqun.uid) -> Nested Loop (cost=139.71..100606.99 rows=5540 width=148) -> Bitmap Heap Scan on eventbinding (cost=139.71..20547.20 rows=5540 width=10) Recheck Cond: (biid > 57205899) -> Bitmap Index Scan on eventbinding_biid_uid_role_idx (cost=0.00..138.33 rows=5540 width=0) Index Cond: (biid > 57205899) -> Index Scan using billingitem_db52003_pkey on billingitem (cost=0.00..14.44 rows=1 width=142) Index Cond: (billingitem.biid = eventbinding.biid) -> Hash (cost=943.85..943.85 rows=49085 width=8) -> Seq Scan on fqun (cost=0.00..943.85 rows=49085 width=8)(12 rows)",
"msg_date": "Wed, 21 Sep 2011 09:27:10 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prepared statements and suboptimal plans"
},
{
"msg_contents": "On 21/09/2011 7:27 AM, Royce Ausburn wrote:\n> Hi all,\n>\n> It looks like I've been hit with this well known issue. I have \n> a complicated query that is intended to run every few minutes, I'm \n> using JDBC's Connection.prepareStatement() mostly for nice \n> parameterisation, but postgres produces a suboptimal plan due to its \n> lack of information when the statement is prepared.\n>\n> [snip]\n>\n> We've been worst hit by this query on an 8.3 site. Another site is \n> running 8.4. Have there been improvements in this area recently? \n> Upgrading to 9.0 might be viable for us.\n\nTom just mentioned that 9.1 will be able to re-plan parameterized \nprepared statements, so this issue will go away. In the mean time you \ncan only really use the standard workaround of setting the prepare \ntheshold to 0 to disable server-side prepare, so you can continue to use \nJDBC prepared statements and have the driver do the parameter \nsubstitution for you.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 21 Sep 2011 07:39:03 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans"
},
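To make the workaround described above concrete, here is a minimal sketch assuming the stock PostgreSQL JDBC driver (pgJDBC); the connection URL, credentials and query are illustrative, and only the prepareThreshold handling is the point:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class NoServerPrepareExample {
        public static void main(String[] args) throws Exception {
            // Older drivers may additionally need Class.forName("org.postgresql.Driver").
            // prepareThreshold=0 tells pgJDBC not to switch the statement over to a
            // reusable server-side prepared statement, so each execution can be
            // planned with the actual parameter values rather than a generic plan.
            String url = "jdbc:postgresql://localhost/test?prepareThreshold=0";
            Connection conn = DriverManager.getConnection(url, "postgres", "");
            PreparedStatement ps = conn.prepareStatement(
                "select count(*) from eventbinding where biid > ?");
            // Alternative, per statement instead of per connection:
            // ((org.postgresql.PGStatement) ps).setPrepareThreshold(0);
            ps.setLong(1, 57205899L);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }

The application keeps the parameterisation convenience of PreparedStatement; only the server-side plan caching that causes the generic plan is avoided.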
{
"msg_contents": "\nOn 21/09/2011, at 9:39 AM, Craig Ringer wrote:\n\n> On 21/09/2011 7:27 AM, Royce Ausburn wrote:\n>> Hi all,\n>> \n>> It looks like I've been hit with this well known issue. I have a complicated query that is intended to run every few minutes, I'm using JDBC's Connection.prepareStatement() mostly for nice parameterisation, but postgres produces a suboptimal plan due to its lack of information when the statement is prepared.\n>> \n>> [snip]\n>> \n>> We've been worst hit by this query on an 8.3 site. Another site is running 8.4. Have there been improvements in this area recently? Upgrading to 9.0 might be viable for us.\n> \n> Tom just mentioned that 9.1 will be able to re-plan parameterized prepared statements, so this issue will go away. In the mean time you can only really use the standard workaround of setting the prepare theshold to 0 to disable server-side prepare, so you can continue to use JDBC prepared statements and have the driver do the parameter substitution for you.\n\nThanks Craig -- that trick helps a lot. \n\n--Royce",
"msg_date": "Wed, 21 Sep 2011 10:17:41 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prepared statements and suboptimal plans"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> On 21/09/2011 7:27 AM, Royce Ausburn wrote:\n>> We've been worst hit by this query on an 8.3 site. Another site is \n>> running 8.4. Have there been improvements in this area recently? \n>> Upgrading to 9.0 might be viable for us.\n\n> Tom just mentioned that 9.1 will be able to re-plan parameterized \n> prepared statements, so this issue will go away.\n\n9.2, sorry, not 9.1. We could use some motivated people testing that\naspect of GIT HEAD, though, since I doubt the policy for when to re-plan\nis quite ideal yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Sep 2011 20:36:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans "
},
{
"msg_contents": "On Sep 20, 2011, at 7:36 PM, Tom Lane wrote:\n\n> 9.2, sorry, not 9.1. We could use some motivated people testing that\n> aspect of GIT HEAD, though, since I doubt the policy for when to re-plan\n> is quite ideal yet.\n\n\nIs motivation and a box enough? I have motivation, but not knowledge of internals enough to drive testing myself.\n\nxoa\n\n--\nAndy Lester => [email protected] => www.petdance.com => AIM:petdance\n\n\nOn Sep 20, 2011, at 7:36 PM, Tom Lane wrote:9.2, sorry, not 9.1. We could use some motivated people testing thataspect of GIT HEAD, though, since I doubt the policy for when to re-planis quite ideal yet.Is motivation and a box enough? I have motivation, but not knowledge of internals enough to drive testing myself.xoa\n--Andy Lester => [email protected] => www.petdance.com => AIM:petdance",
"msg_date": "Tue, 20 Sep 2011 19:42:41 -0500",
"msg_from": "Andy Lester <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans "
},
{
"msg_contents": "Andy Lester <[email protected]> writes:\n> On Sep 20, 2011, at 7:36 PM, Tom Lane wrote:\n>> 9.2, sorry, not 9.1. We could use some motivated people testing that\n>> aspect of GIT HEAD, though, since I doubt the policy for when to re-plan\n>> is quite ideal yet.\n\n> Is motivation and a box enough? I have motivation, but not knowledge of internals enough to drive testing myself.\n\nWell, you probably don't need much internals knowledge as long as you\ncan read a little C. The crux of what I'm worried about is\nchoose_custom_plan() in src/backend/utils/cache/plancache.c, which is\nreally pretty simple: decide whether to use a custom (parameter-aware)\nplan or a generic (not-parameter-aware) plan. You might want to shove\nsome elog() calls in there for tracing purposes so that you can log what\nit did, but then it's just a matter of throwing real-world workloads at\nit and seeing if it makes good decisions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Sep 2011 21:00:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans "
},
{
"msg_contents": "* Royce Ausburn ([email protected]) wrote:\n> > Tom just mentioned that 9.1 will be able to re-plan parameterized prepared statements, so this issue will go away. In the mean time you can only really use the standard workaround of setting the prepare theshold to 0 to disable server-side prepare, so you can continue to use JDBC prepared statements and have the driver do the parameter substitution for you.\n> \n> Thanks Craig -- that trick helps a lot. \n\nYou might also be able to bump up work_mem by a fair bit to get PG to\nuse a hashagg instead of groupagg/sort, even though its estimate is way\noff. That's what I've done in the past for similar situations and it's\nworked well. I'd recommend increasing it for just this query and then\nresetting it (assuming you don't just drop the connection, in which case\nyou don't need to reset it since a new connection will get the default).\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 20 Sep 2011 22:36:15 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans"
},
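A minimal sketch of the per-query work_mem bump suggested above, using SET LOCAL so the setting only lasts for the transaction; the 256MB figure is purely illustrative, and the query is a trimmed-down version of the one from the first message:

    BEGIN;
    SET LOCAL work_mem = '256MB';  -- illustrative value; reverts at COMMIT/ROLLBACK
    SELECT sid, role, starttime::date, nasid, importer, count(*) AS entries
      FROM billingItem
      JOIN eventBinding ON eventBinding.biid = billingItem.biid
      JOIN fqun ON fqun.uid = eventBinding.uid
     WHERE eventBinding.biid > 57205899
     GROUP BY sid, starttime::date, nasid, importer, role;
    COMMIT;

With enough work_mem the planner is more willing to pick a HashAggregate over the sort/GroupAggregate path even when the row estimate for the generic plan is badly off.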
{
"msg_contents": "Tom,\n\n* Tom Lane ([email protected]) wrote:\n> really pretty simple: decide whether to use a custom (parameter-aware)\n> plan or a generic (not-parameter-aware) plan. \n\nBefore I go digging into this, I was wondering, is this going to address\nour current problem of not being able to use prepared queries and\nconstraint exclusion..?\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 20 Sep 2011 22:38:30 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> * Tom Lane ([email protected]) wrote:\n>> really pretty simple: decide whether to use a custom (parameter-aware)\n>> plan or a generic (not-parameter-aware) plan. \n\n> Before I go digging into this, I was wondering, is this going to address\n> our current problem of not being able to use prepared queries and\n> constraint exclusion..?\n\nPossibly ... you didn't say exactly what your problem consists of.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Sep 2011 23:43:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements and suboptimal plans "
}
] |
[
{
"msg_contents": "Hi all,\n\nI am trying to update / refresh one table (history) only from prod. database\nto my test environment database\nmy query as follows:\n\npg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h\nhostname2 -U postgres -t history DATABASENAME > db.sql\n\nbut I am getting the following error\n\npsql: FATAL: database \"history\" does not exist\n\n\nCan you help please?\n\nwhat would be the script if I want more than one table (3 tables to refresh)\n\nKind regards\n\nHi all,I am trying to update / refresh one table (history) only from prod. database to my test environment database\nmy query as follows:pg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h hostname2 -U postgres -t history DATABASENAME > db.sqlbut I am getting the following error\npsql: FATAL: database \"history\" does not existCan you help please?what would be the script if I want more than one table (3 tables to refresh)\nKind regards",
"msg_date": "Wed, 21 Sep 2011 15:57:44 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG 9 adminstrations"
},
{
"msg_contents": "You don't need \"-t history\" on the psql part. It doesn't do what you think\nit does, and it's reading the next part (\"history\") as the database name.\n\ntry:\n\npg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h\nhostname2 -U postgres DATABASENAME > db.sql\n\nDerrick\n\nOn Tue, Sep 20, 2011 at 11:57 PM, Hany ABOU-GHOURY <[email protected]>wrote:\n\n> Hi all,\n>\n> I am trying to update / refresh one table (history) only from prod.\n> database to my test environment database\n> my query as follows:\n>\n> pg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h\n> hostname2 -U postgres -t history DATABASENAME > db.sql\n>\n> but I am getting the following error\n>\n> psql: FATAL: database \"history\" does not exist\n>\n>\n> Can you help please?\n>\n> what would be the script if I want more than one table (3 tables to\n> refresh)\n>\n> Kind regards\n>\n\nYou don't need \"-t history\" on the psql part. It doesn't do what you think it does, and it's reading the next part (\"history\") as the database name.try:pg_dump\n -h <hostname1> -U postgres -t history DATABASENAME | psql -h \nhostname2 -U postgres DATABASENAME > db.sqlDerrickOn Tue, Sep 20, 2011 at 11:57 PM, Hany ABOU-GHOURY <[email protected]> wrote:\nHi all,\nI am trying to update / refresh one table (history) only from prod. database to my test environment database\nmy query as follows:pg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h hostname2 -U postgres -t history DATABASENAME > db.sqlbut I am getting the following error\npsql: FATAL: database \"history\" does not existCan you help please?what would be the script if I want more than one table (3 tables to refresh)\nKind regards",
"msg_date": "Wed, 21 Sep 2011 00:16:36 -0400",
"msg_from": "Derrick Rice <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9 adminstrations"
},
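On the unanswered part of the original question (refreshing three tables at once): pg_dump accepts -t more than once, so a sketch along the same lines might look like the following, with the host, database and extra table names as placeholders. The trailing "> db.sql" from the original command is dropped here because, when piping straight into psql, it only captures psql's messages rather than a dump.

    # hypothetical table names; repeat -t once per table to refresh
    pg_dump -h hostname1 -U postgres -t history -t table2 -t table3 DATABASENAME \
      | psql -h hostname2 -U postgres DATABASENAME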
{
"msg_contents": "Thanks but...did not work different error though\n\nERROR: relation \"history\" already exists\nERROR: relation \"idx_history_pagegroupid\" already exists\nERROR: constraint \"cdocumentid\" for relation \"history\" already exists\n\n\n\nOn Wed, Sep 21, 2011 at 4:16 PM, Derrick Rice <[email protected]>wrote:\n\n> You don't need \"-t history\" on the psql part. It doesn't do what you think\n> it does, and it's reading the next part (\"history\") as the database name.\n>\n> try:\n>\n> pg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h\n> hostname2 -U postgres DATABASENAME > db.sql\n>\n> Derrick\n>\n>\n> On Tue, Sep 20, 2011 at 11:57 PM, Hany ABOU-GHOURY <[email protected]>wrote:\n>\n>> Hi all,\n>>\n>> I am trying to update / refresh one table (history) only from prod.\n>> database to my test environment database\n>> my query as follows:\n>>\n>> pg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h\n>> hostname2 -U postgres -t history DATABASENAME > db.sql\n>>\n>> but I am getting the following error\n>>\n>> psql: FATAL: database \"history\" does not exist\n>>\n>>\n>> Can you help please?\n>>\n>> what would be the script if I want more than one table (3 tables to\n>> refresh)\n>>\n>> Kind regards\n>>\n>\n>\n\nThanks but...did not work different error thoughERROR: relation \"history\" already existsERROR: relation \"idx_history_pagegroupid\" already existsERROR: constraint \"cdocumentid\" for relation \"history\" already exists\nOn Wed, Sep 21, 2011 at 4:16 PM, Derrick Rice <[email protected]> wrote:\nYou don't need \"-t history\" on the psql part. It doesn't do what you think it does, and it's reading the next part (\"history\") as the database name.\ntry:pg_dump\n -h <hostname1> -U postgres -t history DATABASENAME | psql -h \nhostname2 -U postgres DATABASENAME > db.sqlDerrickOn Tue, Sep 20, 2011 at 11:57 PM, Hany ABOU-GHOURY <[email protected]> wrote:\nHi all,\nI am trying to update / refresh one table (history) only from prod. database to my test environment database\nmy query as follows:pg_dump -h <hostname1> -U postgres -t history DATABASENAME | psql -h hostname2 -U postgres -t history DATABASENAME > db.sqlbut I am getting the following error\npsql: FATAL: database \"history\" does not existCan you help please?what would be the script if I want more than one table (3 tables to refresh)\nKind regards",
"msg_date": "Wed, 21 Sep 2011 16:29:39 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9 adminstrations"
},
{
"msg_contents": "On 09/20/2011 11:29 PM, Hany ABOU-GHOURY wrote:\n\n> Thanks but...did not work different error though\n>\n> ERROR: relation \"history\" already exists\n> ERROR: relation \"idx_history_pagegroupid\" already exists\n> ERROR: constraint \"cdocumentid\" for relation \"history\" already exists\n\nClearly the history table already exists on your second host. This may \nor may not matter, if the table definitions are the same. If the table \non host2 was empty, it'll be filled with the data from host1. If not, \nyou'll need to truncate the table on host2 first. If you don't want the \ncomplaints about tables or indexes or constraints that already exist, \nuse the -a option for data-only dumps.\n\nAlso, might I suggest using the pgsql-novice list? They're more likely \nto help with general issues like this.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 21 Sep 2011 07:48:13 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9 adminstrations"
},
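A sketch of the refresh sequence described above, assuming the table definitions already match on both hosts; the TRUNCATE is destructive on the target database, so treat the exact commands as illustrative and test them first:

    # empty the copy on the target, then reload only the data
    psql -h hostname2 -U postgres -c 'TRUNCATE TABLE history' DATABASENAME
    pg_dump -h hostname1 -U postgres -a -t history DATABASENAME \
      | psql -h hostname2 -U postgres DATABASENAME

The -a (data-only) dump avoids the "already exists" complaints because no CREATE TABLE, CREATE INDEX or constraint statements are replayed on the target.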
{
"msg_contents": "Please help with below issue I am new to postgresql\n\nMany thanks for taking the time to read my email\n\nCheers\nHany\n\n---------- Forwarded message ----------\nFrom: Shaun Thomas <[email protected]>\nDate: Thu, Sep 22, 2011 at 12:48 AM\nSubject: Re: [PERFORM] PG 9 adminstrations\nTo: Hany ABOU-GHOURY <[email protected]>\nCc: Derrick Rice <[email protected]>, [email protected]\n\n\nOn 09/20/2011 11:29 PM, Hany ABOU-GHOURY wrote:\n\n Thanks but...did not work different error though\n>\n> ERROR: relation \"history\" already exists\n> ERROR: relation \"idx_history_pagegroupid\" already exists\n> ERROR: constraint \"cdocumentid\" for relation \"history\" already exists\n>\n\nClearly the history table already exists on your second host. This may or\nmay not matter, if the table definitions are the same. If the table on host2\nwas empty, it'll be filled with the data from host1. If not, you'll need to\ntruncate the table on host2 first. If you don't want the complaints about\ntables or indexes or constraints that already exist, use the -a option for\ndata-only dumps.\n\nAlso, might I suggest using the pgsql-novice list? They're more likely to\nhelp with general issues like this.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________**________________\n\nSee http://www.peak6.com/email-**disclaimer/<http://www.peak6.com/email-disclaimer/>for\nterms and conditions related to this email\n\nPlease help with below issue I am new to postgresqlMany thanks for taking the time to read my emailCheersHany---------- Forwarded message ----------\nFrom: Shaun Thomas <[email protected]>Date: Thu, Sep 22, 2011 at 12:48 AMSubject: Re: [PERFORM] PG 9 adminstrations\nTo: Hany ABOU-GHOURY <[email protected]>Cc: Derrick Rice <[email protected]>, [email protected]\nOn 09/20/2011 11:29 PM, Hany ABOU-GHOURY wrote:\n\n\nThanks but...did not work different error though\n\nERROR: relation \"history\" already exists\nERROR: relation \"idx_history_pagegroupid\" already exists\nERROR: constraint \"cdocumentid\" for relation \"history\" already exists\n\n\nClearly the history table already exists on your second host. This may or may not matter, if the table definitions are the same. If the table on host2 was empty, it'll be filled with the data from host1. If not, you'll need to truncate the table on host2 first. If you don't want the complaints about tables or indexes or constraints that already exist, use the -a option for data-only dumps.\n\nAlso, might I suggest using the pgsql-novice list? They're more likely to help with general issues like this.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email",
"msg_date": "Fri, 23 Sep 2011 13:17:37 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: [PERFORM] PG 9 adminstrations"
},
{
"msg_contents": "--As of September 23, 2011 1:17:37 PM +1200, Hany ABOU-GHOURY is alleged to \nhave said:\n\n> Please help with below issue I am new to postgresql\n>\n>\n> Many thanks for taking the time to read my email\n\n--As for the rest, it is mine.\n\nI'm sure we'd be glad to help, but I think we're going to need you to start \nat the beginning. What's the problem you are having? What are you trying \nto do?\n\nYou sent an output of an error, saying that it was a 'different error'. \nWhat did you try? What were you trying originally? This is a different \ngroup than you were sending to before, so we don't have the history.\n\nWe'll help as soon as we can. But we have to know what the problem is. ;)\n\nDaniel T. Staal\n\n---------------------------------------------------------------\nThis email copyright the author. Unless otherwise noted, you\nare expressly allowed to retransmit, quote, or otherwise use\nthe contents for non-commercial purposes. This copyright will\nexpire 5 years after the author's death, or in 30 years,\nwhichever is longer, unless such a period is in excess of\nlocal copyright law.\n---------------------------------------------------------------\n",
"msg_date": "Thu, 22 Sep 2011 21:36:59 -0400",
"msg_from": "Daniel Staal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: [PERFORM] PG 9 adminstrations"
},
{
"msg_contents": "I am trying to update / refresh one table (history) only from prod. database\nto test environment database.\nscript as follows:\n\npg_dump -h host1 -U postgres -t history DB1 | psql -h host2 -U postgres -t\nhistory DB2 > db.sql\n\nbut I am getting the following error\n\npsql: FATAL: database \"history\" does not exist\n\n\nCan you help please?\n\nwhat would be the script if I want more than one table (3 tables to refresh)\n\nKind regards\n\nOn Fri, Sep 23, 2011 at 1:36 PM, Daniel Staal <[email protected]> wrote:\n\n> --As of September 23, 2011 1:17:37 PM +1200, Hany ABOU-GHOURY is alleged to\n> have said:\n>\n> Please help with below issue I am new to postgresql\n>>\n>>\n>> Many thanks for taking the time to read my email\n>>\n>\n> --As for the rest, it is mine.\n>\n> I'm sure we'd be glad to help, but I think we're going to need you to start\n> at the beginning. What's the problem you are having? What are you trying\n> to do?\n>\n> You sent an output of an error, saying that it was a 'different error'.\n> What did you try? What were you trying originally? This is a different\n> group than you were sending to before, so we don't have the history.\n>\n> We'll help as soon as we can. But we have to know what the problem is. ;)\n>\n> Daniel T. Staal\n>\n> ------------------------------**------------------------------**---\n> This email copyright the author. Unless otherwise noted, you\n> are expressly allowed to retransmit, quote, or otherwise use\n> the contents for non-commercial purposes. This copyright will\n> expire 5 years after the author's death, or in 30 years,\n> whichever is longer, unless such a period is in excess of\n> local copyright law.\n> ------------------------------**------------------------------**---\n>\n\nI am trying to update / refresh one table (history) only from prod. database to test environment database.\nscript as follows:\npg_dump -h host1 -U postgres -t history DB1 | psql -h host2 -U postgres -t history DB2 > db.sql\nbut I am getting the following errorpsql: FATAL: database \"history\" does not existCan you help please?\nwhat would be the script if I want more than one table (3 tables to refresh)Kind regardsOn Fri, Sep 23, 2011 at 1:36 PM, Daniel Staal <[email protected]> wrote:\n--As of September 23, 2011 1:17:37 PM +1200, Hany ABOU-GHOURY is alleged to have said:\n\n\nPlease help with below issue I am new to postgresql\n\n\nMany thanks for taking the time to read my email\n\n\n--As for the rest, it is mine.\n\nI'm sure we'd be glad to help, but I think we're going to need you to start at the beginning. What's the problem you are having? What are you trying to do?\n\nYou sent an output of an error, saying that it was a 'different error'. What did you try? What were you trying originally? This is a different group than you were sending to before, so we don't have the history.\n\nWe'll help as soon as we can. But we have to know what the problem is. ;)\n\nDaniel T. Staal\n\n---------------------------------------------------------------\nThis email copyright the author. Unless otherwise noted, you\nare expressly allowed to retransmit, quote, or otherwise use\nthe contents for non-commercial purposes. This copyright will\nexpire 5 years after the author's death, or in 30 years,\nwhichever is longer, unless such a period is in excess of\nlocal copyright law.\n---------------------------------------------------------------",
"msg_date": "Fri, 23 Sep 2011 15:43:04 +1200",
"msg_from": "Hany ABOU-GHOURY <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: [PERFORM] PG 9 adminstrations"
}
] |
[
{
"msg_contents": "I am using Postgresql 9.0.1.\n\nUsing the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got\nthe following result for a table:\n\n-[ RECORD 1 ]----+-----------------------------------------------\ncurrent_database | crm\nschemaname | public\ntablename | _attachments\ntbloat | 0.9\nwastedbytes | 0\niname | attachments_description_type_attachmentsid_idx\nibloat | 2.3\nwastedibytes | 5439488\n-[ RECORD 2 ]----+-----------------------------------------------\ncurrent_database | crm\nschemaname | public\ntablename | _attachments\ntbloat | 0.9\nwastedbytes | 0\niname | attachments_attachmentsid_idx\nibloat | 0.2\nwastedibytes | 0\n-[ RECORD 3 ]----+-----------------------------------------------\ncurrent_database | crm\nschemaname | public\ntablename | _attachments\ntbloat | 0.9\nwastedbytes | 0\niname | _attachments_pkey\nibloat | 0.2\nwastedibytes | 0\n\nI REINDEXED both the indexes and table, but I did not find any change in\nwastedspace or wastedispace.\nCould you please tell me why?\n\nI am using Postgresql 9.0.1.Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got the following result for a table:-[ RECORD 1 ]----+-----------------------------------------------\ncurrent_database | crmschemaname | publictablename | _attachmentstbloat | 0.9wastedbytes | 0iname | attachments_description_type_attachmentsid_idxibloat | 2.3\nwastedibytes | 5439488-[ RECORD 2 ]----+-----------------------------------------------current_database | crmschemaname | publictablename | _attachmentstbloat | 0.9wastedbytes | 0\niname | attachments_attachmentsid_idxibloat | 0.2wastedibytes | 0-[ RECORD 3 ]----+-----------------------------------------------current_database | crmschemaname | public\ntablename | _attachmentstbloat | 0.9wastedbytes | 0iname | _attachments_pkeyibloat | 0.2wastedibytes | 0I REINDEXED both the indexes and table, but I did not find any change in wastedspace or wastedispace.\nCould you please tell me why?",
"msg_date": "Wed, 21 Sep 2011 13:01:01 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "REINDEX not working for wastedspace"
},
{
"msg_contents": "On Wed, 2011-09-21 at 13:01 +0600, AI Rumman wrote:\n> I am using Postgresql 9.0.1.\n> \n> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got\n> the following result for a table:\n> \n> -[ RECORD 1 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | attachments_description_type_attachmentsid_idx\n> ibloat | 2.3\n> wastedibytes | 5439488\n> -[ RECORD 2 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | attachments_attachmentsid_idx\n> ibloat | 0.2\n> wastedibytes | 0\n> -[ RECORD 3 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | _attachments_pkey\n> ibloat | 0.2\n> wastedibytes | 0\n> \n> I REINDEXED both the indexes and table, but I did not find any change in\n> wastedspace or wastedispace.\n> Could you please tell me why?\n\nREINDEX only rebuilds indexes. And you'll obviously have a bit of \"lost\nspace\" because of the FILLFACTOR value (90% on indexes IIRC).\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Wed, 21 Sep 2011 09:06:59 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
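To see how much space the rebuilt index actually occupies (rather than what the estimate reports), and where the FILLFACTOR mentioned above comes in, a sketch against the objects from the first message might be:

    -- actual on-disk sizes, independent of the bloat estimate
    SELECT pg_size_pretty(pg_relation_size('_attachments')) AS table_size,
           pg_size_pretty(pg_relation_size('attachments_description_type_attachmentsid_idx')) AS index_size;

    -- b-tree leaf pages are packed to 90% by default, so some "free" space per
    -- page after REINDEX is expected and is not bloat; it can be tuned per index
    -- (only worthwhile for data that is rarely updated):
    ALTER INDEX attachments_description_type_attachmentsid_idx SET (fillfactor = 100);
    REINDEX INDEX attachments_description_type_attachmentsid_idx;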
{
"msg_contents": "Could you please let us know if you have analyzed after the re-indexing is\ndone ?\n\nThis must show differences for only Indexes not the Tables.\n\nFor Tables, you need to do VACUUM FULL to show the difference.\n\nThanks\nVenkat\n\nOn Wed, Sep 21, 2011 at 12:31 PM, AI Rumman <[email protected]> wrote:\n\n> I am using Postgresql 9.0.1.\n>\n> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got\n> the following result for a table:\n>\n> -[ RECORD 1 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | attachments_description_type_attachmentsid_idx\n> ibloat | 2.3\n> wastedibytes | 5439488\n> -[ RECORD 2 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | attachments_attachmentsid_idx\n> ibloat | 0.2\n> wastedibytes | 0\n> -[ RECORD 3 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | _attachments_pkey\n> ibloat | 0.2\n> wastedibytes | 0\n>\n> I REINDEXED both the indexes and table, but I did not find any change in\n> wastedspace or wastedispace.\n> Could you please tell me why?\n>\n\nCould you please let us know if you have analyzed after the re-indexing is done ?This must show differences for only Indexes not the Tables.For Tables, you need to do VACUUM FULL to show the difference.\nThanksVenkatOn Wed, Sep 21, 2011 at 12:31 PM, AI Rumman <[email protected]> wrote:\nI am using Postgresql 9.0.1.Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got the following result for a table:\n-[ RECORD 1 ]----+-----------------------------------------------\ncurrent_database | crmschemaname | publictablename | _attachmentstbloat | 0.9wastedbytes | 0iname | attachments_description_type_attachmentsid_idxibloat | 2.3\n\nwastedibytes | 5439488-[ RECORD 2 ]----+-----------------------------------------------current_database | crmschemaname | publictablename | _attachmentstbloat | 0.9wastedbytes | 0\n\niname | attachments_attachmentsid_idxibloat | 0.2wastedibytes | 0-[ RECORD 3 ]----+-----------------------------------------------current_database | crmschemaname | public\n\ntablename | _attachmentstbloat | 0.9wastedbytes | 0iname | _attachments_pkeyibloat | 0.2wastedibytes | 0I REINDEXED both the indexes and table, but I did not find any change in wastedspace or wastedispace.\n\nCould you please tell me why?",
"msg_date": "Wed, 21 Sep 2011 12:37:46 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
{
"msg_contents": "Yes I ANALYZE the table, but no change for wastedispace.\n\nOn Wed, Sep 21, 2011 at 1:06 PM, Guillaume Lelarge\n<[email protected]>wrote:\n\n> On Wed, 2011-09-21 at 13:01 +0600, AI Rumman wrote:\n> > I am using Postgresql 9.0.1.\n> >\n> > Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I\n> got\n> > the following result for a table:\n> >\n> > -[ RECORD 1 ]----+-----------------------------------------------\n> > current_database | crm\n> > schemaname | public\n> > tablename | _attachments\n> > tbloat | 0.9\n> > wastedbytes | 0\n> > iname | attachments_description_type_attachmentsid_idx\n> > ibloat | 2.3\n> > wastedibytes | 5439488\n> > -[ RECORD 2 ]----+-----------------------------------------------\n> > current_database | crm\n> > schemaname | public\n> > tablename | _attachments\n> > tbloat | 0.9\n> > wastedbytes | 0\n> > iname | attachments_attachmentsid_idx\n> > ibloat | 0.2\n> > wastedibytes | 0\n> > -[ RECORD 3 ]----+-----------------------------------------------\n> > current_database | crm\n> > schemaname | public\n> > tablename | _attachments\n> > tbloat | 0.9\n> > wastedbytes | 0\n> > iname | _attachments_pkey\n> > ibloat | 0.2\n> > wastedibytes | 0\n> >\n> > I REINDEXED both the indexes and table, but I did not find any change in\n> > wastedspace or wastedispace.\n> > Could you please tell me why?\n>\n> REINDEX only rebuilds indexes. And you'll obviously have a bit of \"lost\n> space\" because of the FILLFACTOR value (90% on indexes IIRC).\n>\n>\n> --\n> Guillaume\n> http://blog.guillaume.lelarge.info\n> http://www.dalibo.com\n>\n>\n\nYes I ANALYZE the table, but no change for wastedispace.On Wed, Sep 21, 2011 at 1:06 PM, Guillaume Lelarge <[email protected]> wrote:\nOn Wed, 2011-09-21 at 13:01 +0600, AI Rumman wrote:\n> I am using Postgresql 9.0.1.\n>\n> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got\n> the following result for a table:\n>\n> -[ RECORD 1 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | attachments_description_type_attachmentsid_idx\n> ibloat | 2.3\n> wastedibytes | 5439488\n> -[ RECORD 2 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | attachments_attachmentsid_idx\n> ibloat | 0.2\n> wastedibytes | 0\n> -[ RECORD 3 ]----+-----------------------------------------------\n> current_database | crm\n> schemaname | public\n> tablename | _attachments\n> tbloat | 0.9\n> wastedbytes | 0\n> iname | _attachments_pkey\n> ibloat | 0.2\n> wastedibytes | 0\n>\n> I REINDEXED both the indexes and table, but I did not find any change in\n> wastedspace or wastedispace.\n> Could you please tell me why?\n\nREINDEX only rebuilds indexes. And you'll obviously have a bit of \"lost\nspace\" because of the FILLFACTOR value (90% on indexes IIRC).\n\n\n--\nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com",
"msg_date": "Wed, 21 Sep 2011 13:10:30 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
{
"msg_contents": "On Wed, 2011-09-21 at 13:01 +0600, AI Rumman wrote:\n> I am using Postgresql 9.0.1.\n> \n\n> I REINDEXED both the indexes and table, but I did not find any change\n> in wastedspace or wastedispace.\n> Could you please tell me why?\n\nyou need to \n\n1) either vacuum full or cluster the table\n2) analyze the table\n3) check bloat again\n\n",
"msg_date": "Wed, 21 Sep 2011 08:21:20 -0400",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
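Spelled out for the _attachments table from the first message, as a sketch; note that on 9.0 VACUUM FULL rewrites the whole table and holds an exclusive lock while it runs:

    VACUUM FULL _attachments;   -- or: CLUSTER _attachments USING _attachments_pkey;
    ANALYZE _attachments;       -- refresh the statistics the bloat query reads
    -- then re-run the Show_database_bloat query and compare wastedbytes/wastedibytes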
{
"msg_contents": "AI Rumman <rummandba 'at' gmail.com> writes:\n\n> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got the\n\nIs this stuff to show database bloat reliable? After a VACUUM\nFULL of the table reported as the top responsible of bloat,\nperforming the same request again still gives the same result\n(still that table is the top responsible of bloat):\n\n current_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes \n------------------+------------+-----------------------------------------+--------+-------------+------------------------------------------------------+--------+--------------\n test | public | requests | 1.1 | 14565376 | requests_pkey | 0.4 | 0\n test | public | requests | 1.1 | 14565376 | idx_whatever | 0.8 | 0\n test | public | requests | 1.1 | 14565376 | idx_whatever2 | 0.6 | 0\n...\n\nA few investigations show that when tbloat is close to 1.0 then\nit seems not reliable, otherwise it seems useful.\n\npg 8.4.7\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "Wed, 21 Sep 2011 14:43:07 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Show_database_bloat reliability? [was: Re: REINDEX not working for\n\twastedspace]"
},
{
"msg_contents": "On 09/21/2011 02:01 AM, AI Rumman wrote:\n\n> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat,\n> I got the following result for a table:\n\nGod. I wish they would erase that Wiki page, or at least add a \ndisclaimer. That query is no better than a loose estimate. Never, ever, \never depend on it for critical information about your tables. Ever.\n\nEver.\n\nWith that said, there are a lot of ways which can get the information \nyou want. One is the pgstattuple contrib, the other is the \npg_freespacemap contrib (though that didn't get really useful until 8.4 \nand above).\n\nCheck out those documentation pages for usage info. More importantly, \nignore the results of that query. It's wrong. If you've just reindexed, \nthose indexes are about as small as they're ever going to be.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 21 Sep 2011 08:30:42 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
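A sketch of the contrib-based measurements referred to above, assuming the pgstattuple module has been installed (on 9.0 that means running the contrib SQL script, e.g. psql -d crm -f .../contrib/pgstattuple.sql where the path is your installation's share directory, rather than CREATE EXTENSION); object names are the ones from the first message:

    -- exact tuple-level numbers for the table (dead tuples, free space, ...)
    SELECT * FROM pgstattuple('_attachments');

    -- and for a b-tree index, including average leaf density and fragmentation
    SELECT * FROM pgstatindex('attachments_description_type_attachmentsid_idx');

Unlike the wiki query, these functions scan the objects and report measured values, so they are suitable for deciding whether a REINDEX or VACUUM FULL actually helped.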
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 09/21/2011 02:01 AM, AI Rumman wrote:\n>> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat,\n>> I got the following result for a table:\n\n> God. I wish they would erase that Wiki page, or at least add a \n> disclaimer. That query is no better than a loose estimate. Never, ever, \n> ever depend on it for critical information about your tables. Ever.\n\nThe PG wiki is editable by anyone who signs up for an account. Feel\nfree to put in an appropriate disclaimer, or improve the sample query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Sep 2011 10:12:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace "
},
{
"msg_contents": "On Sep 21, 2011, at 8:30 AM, Shaun Thomas wrote:\n\n> I wish they would erase that Wiki page, or at least add a disclaimer.\n\n\nThe \"they\" that you refer to includes you. It's a wiki. You can write your own disclaimer.\n\nxoa\n\n--\nAndy Lester => [email protected] => www.petdance.com => AIM:petdance\n\n\nOn Sep 21, 2011, at 8:30 AM, Shaun Thomas wrote:I wish they would erase that Wiki page, or at least add a disclaimer.The \"they\" that you refer to includes you. It's a wiki. You can write your own disclaimer.xoa\n--Andy Lester => [email protected] => www.petdance.com => AIM:petdance",
"msg_date": "Wed, 21 Sep 2011 09:14:46 -0500",
"msg_from": "Andy Lester <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
{
"msg_contents": "On 09/21/2011 09:12 AM, Tom Lane wrote:\n\n> The PG wiki is editable by anyone who signs up for an account. Feel\n> free to put in an appropriate disclaimer, or improve the sample\n> query.\n\nAh, well then. I do have an account, but thought there were more \ngranular page restrictions than that. I may have to start wading into \nthem when I see stuff like this. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 21 Sep 2011 09:55:38 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
{
"msg_contents": "On 09/21/2011 08:43 AM, Guillaume Cottenceau wrote:\n> AI Rumman<rummandba 'at' gmail.com> writes:\n>\n> \n>> Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got the\n>> \n> Is this stuff to show database bloat reliable?\n\nOnly in that increase and decreases of the number reported can be useful \nfor determining if bloat is likely increasing or decreasing--which is \nthe purpose of that query. The value returned is a rough estimate, and \nshould not be considered useful as any sort of absolute measurement.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 21 Sep 2011 12:12:50 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show_database_bloat reliability? [was: Re: REINDEX\n\tnot working for wastedspace]"
},
{
"msg_contents": "It is very important to remove it from the WIKI page.\n\nI ran it on production PG9.0 and it does not error out and displays numbered\noutput.\n\nI noticed that, this works till PG-8.2 (as per the message).\n\nVenkat\n\nOn Wed, Sep 21, 2011 at 8:25 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 09/21/2011 09:12 AM, Tom Lane wrote:\n>\n> The PG wiki is editable by anyone who signs up for an account. Feel\n>> free to put in an appropriate disclaimer, or improve the sample\n>> query.\n>>\n>\n> Ah, well then. I do have an account, but thought there were more granular\n> page restrictions than that. I may have to start wading into them when I see\n> stuff like this. :)\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email-**disclaimer/<http://www.peak6.com/email-disclaimer/>for terms and conditions related to this email\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nIt is very important to remove it from the WIKI page.I ran it on production PG9.0 and it does not error out and displays numbered output.I noticed that, this works till PG-8.2 (as per the message).\nVenkatOn Wed, Sep 21, 2011 at 8:25 PM, Shaun Thomas <[email protected]> wrote:\nOn 09/21/2011 09:12 AM, Tom Lane wrote:\n\n\nThe PG wiki is editable by anyone who signs up for an account. Feel\nfree to put in an appropriate disclaimer, or improve the sample\nquery.\n\n\nAh, well then. I do have an account, but thought there were more granular page restrictions than that. I may have to start wading into them when I see stuff like this. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 21 Sep 2011 21:50:08 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
},
{
"msg_contents": "On 09/21/2011 11:20 AM, Venkat Balaji wrote:\n\n> It is very important to remove it from the WIKI page.\n\nRemoving it is a little premature. :) Definitely going to add a warning \nabout relying on its output, though. The query itself was created and \nintegrated into the check_postgres.pl nagios plugin as a very, very \ngross estimate of bloated tables.\n\nIt wasn't the most accurate thing in the world, but considering what it \nhad to work with, it did a pretty good job. Generally CLUSTER or VACUUM \nFULL would remove a table from the query output, but not always. It's \nthose edge cases that cause problems.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 21 Sep 2011 11:30:07 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX not working for wastedspace"
}
] |
[
{
"msg_contents": "buen dia\n\na partir de los siguientes artículos publicados en la primera edicion de\nPostgreSQL Magazine (http://pgmag.org/00/read) (Performance Tunning\nPostgreSQL y Tuning linux for PostgreSQL) me dispuse a implementar los mismo\nen mi servidor de Postgres pero cuento con la siguiente arquitectura\n\nSAN\nVMWare\nCentOS 6-64bits\nPostgreSQL 9-64bits\n\nTengo virtualizada la maquina para atender otros servicios, entonces\nmi pregunta es.. los parámetros y consideraciones que establecen en el\narticulo son aplicables al esquema visualizado especialmente en cuanto a\nla configuración del Linux? les agradezco de antemano su opinión\n\n-- \nCordialmente,\n\nIng. Hellmuth I. Vargas S.\nBogotá, Colombia\n\nbuen diaa partir de los siguientes artículos publicados en la primera edicion de PostgreSQL Magazine (http://pgmag.org/00/read) (Performance Tunning PostgreSQL y Tuning linux for PostgreSQL) me dispuse a implementar los mismo en mi servidor de Postgres pero cuento con la siguiente arquitectura \nSANVMWareCentOS 6-64bitsPostgreSQL 9-64bitsTengo virtualizada la maquina para atender otros servicios, entonces mi pregunta es.. los parámetros y consideraciones que establecen en el articulo son aplicables al esquema visualizado especialmente en cuanto a la configuración del Linux? les agradezco de antemano su opinión\n-- Cordialmente, Ing. Hellmuth I. Vargas S. Bogotá, Colombia",
"msg_date": "Wed, 21 Sep 2011 08:32:34 -0500",
"msg_from": "Hellmuth Vargas <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?ISO-8859-1?Q?par=E1metros_de_postgres_y_linux_en_maquinas_virtuale?=\n\t=?ISO-8859-1?Q?s?="
},
{
"msg_contents": "Para ello te recomiendo la charla de Jignesh Shah, quien es Staff Engineer\nen Vmware y ha impartido varias charlas sobre el tema. Aqui te dejo el link\nde su blog:\nhttp://jkshah.blogspot.com/\n\nUsing vPostgres - A DB User\nperspective<http://pgwest2011.sched.org/event/6ba83683a7284e72d8f6c7ae70bc24f2>\nvFabric Postgres Database\nInternals<http://pgwest2011.sched.org/event/2c071cc144a3224465d968ff5efc4e2e>\nRunning PostgreSQL on Virtual Environments - #pgopen\n2011<http://jkshah.blogspot.com/2011/09/running-postgresql-on-virtual.html>\n Saludos\n\nEl 21 de septiembre de 2011 09:32, Hellmuth Vargas <[email protected]>escribió:\n\n>\n> buen dia\n>\n> a partir de los siguientes artículos publicados en la primera edicion de\n> PostgreSQL Magazine (http://pgmag.org/00/read) (Performance Tunning\n> PostgreSQL y Tuning linux for PostgreSQL) me dispuse a implementar los mismo\n> en mi servidor de Postgres pero cuento con la siguiente arquitectura\n>\n> SAN\n> VMWare\n> CentOS 6-64bits\n> PostgreSQL 9-64bits\n>\n> Tengo virtualizada la maquina para atender otros servicios, entonces\n> mi pregunta es.. los parámetros y consideraciones que establecen en el\n> articulo son aplicables al esquema visualizado especialmente en cuanto a\n> la configuración del Linux? les agradezco de antemano su opinión\n>\n> --\n> Cordialmente,\n>\n> Ing. Hellmuth I. Vargas S.\n> Bogotá, Colombia\n\n\n\n\n-- \n-- \nMarcos Luis Ortíz Valmaseda\n Software Engineer (UCI)\n Linux User # 418229\n http://marcosluis2186.posterous.com\n http://www.linkedin.com/in/marcosluis2186\n https://fedoraproject.org/wiki/User:Marcosluis\n\nPara ello te recomiendo la charla de Jignesh Shah, quien es Staff Engineer en Vmware y ha impartido varias charlas sobre el tema. Aqui te dejo el link de su blog:http://jkshah.blogspot.com/\nUsing vPostgres - A DB User perspective\nvFabric Postgres Database Internals\n\nRunning PostgreSQL on Virtual Environments - #pgopen 2011\n SaludosEl 21 de septiembre de 2011 09:32, Hellmuth Vargas <[email protected]> escribió:\nbuen diaa partir de los siguientes artículos publicados en la primera edicion de PostgreSQL Magazine (http://pgmag.org/00/read) (Performance Tunning PostgreSQL y Tuning linux for PostgreSQL) me dispuse a implementar los mismo en mi servidor de Postgres pero cuento con la siguiente arquitectura \nSANVMWareCentOS 6-64bitsPostgreSQL 9-64bitsTengo virtualizada la maquina para atender otros servicios, entonces mi pregunta es.. los parámetros y consideraciones que establecen en el articulo son aplicables al esquema visualizado especialmente en cuanto a la configuración del Linux? les agradezco de antemano su opinión\n-- Cordialmente, Ing. Hellmuth I. Vargas S. Bogotá, Colombia\n-- -- Marcos Luis Ortíz Valmaseda Software Engineer (UCI) Linux User # 418229 http://marcosluis2186.posterous.com\n http://www.linkedin.com/in/marcosluis2186 https://fedoraproject.org/wiki/User:Marcosluis",
"msg_date": "Wed, 21 Sep 2011 09:52:29 -0400",
"msg_from": "Marcos Luis Ortiz Valmaseda <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?ISO-8859-1?Q?Re=3A_=5Bpgsql=2Des=2Dayuda=5D_par=E1metros_de_postgres_y_linux?=\n\t=?ISO-8859-1?Q?_en_maquinas_virtuales?="
},
{
"msg_contents": "2011/9/21 Hellmuth Vargas <[email protected]>:\n> SAN\n> VMWare\n> CentOS 6-64bits\n> PostgreSQL 9-64bits\n> Tengo virtualizada la maquina para atender otros servicios, entonces\n> mi pregunta es.. los parámetros y consideraciones que establecen en el\n> articulo son aplicables al esquema visualizado especialmente en cuanto a\n> la configuración del Linux? les agradezco de antemano su opinión\n\nNo es suficiente detalle para saber, pero seguramente SAN cambia las cosas.\nEn lo que se refiere a memoria, nada debería cambiar aunque sea una\nmáquina virtual. Pero todo lo que es disco sí.\n\nPor cierto, esta lista es en inglés.\n",
"msg_date": "Wed, 21 Sep 2011 16:19:27 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?ISO-8859-1?Q?Re=3A_=5BPERFORM=5D_par=E1metros_de_postgres_y_linux_en_maq?=\n\t=?ISO-8859-1?Q?uinas_virtuales?="
}
] |
[
{
"msg_contents": "First of all, thank you for taking the time to review my question. After\nattending the PostgresOpen conference in Chicago last week, I've been\npouring over explain logs for hours on end and although my system is MUCH\nbetter, I still can't resolve a few issues. Luckily my data is pretty well\nstructured so solving one issue will likely solve many more so I'll start\nwith this one.\n\nVersion: PostgreSQL 9.1rc1, compiled by Visual C++ build 1500, 64-bit\nOS: Windows 7 64-bit\nORM: SQLAlchemy\nPostgres table structure: I have daily partitioned tables for each of 4\n\"core tables\" (the tables with the majority of my application's data). Each\ndaily table inherits from its parent. I do not explicitly define a\nREFERENCE between these tables because I cannot guarantee the order in which\nthe events are inserted into the database, but where there are references,\nthe referenced row should exist in the other's daily table. The reason I\npartitioned the data in this manner is to increase query speed and make it\neasy to archive old data. (I'm new to high-end Postgres performance so\nthere's likely several fundamental flaws in my assumptions. I won't turn\ndown any recommendation.)\n\nAn example of a daily partitioned table follows:\n\ncb=# \\d osmoduleloads_2011_09_14;\n Table \"public.osmoduleloads_2011_09_14\"\n Column | Type |\n Modifiers\n-----------------------+-----------------------------+------------------------------------------------------------\n guid | numeric(20,0) | not null\n osprocess_guid | numeric(20,0) | not null\n filepath_guid | numeric(20,0) | not null\n firstloadtime | numeric(20,0) | not null\n md5hash | bytea | not null\n host_guid | numeric(20,0) | default NULL::numeric\n process_create_time | numeric(20,0) | default NULL::numeric\n process_filepath_guid | numeric(20,0) | default NULL::numeric\n event_time | timestamp without time zone | default '2011-09-14\n00:00:00'::timestamp without time zone\nIndexes:\n \"osmoduleloads_2011_09_14_pkey\" PRIMARY KEY, btree (guid)\n \"idx_osmoduleloads_2011_09_14_filepath_guid\" btree (filepath_guid)\n \"idx_osmoduleloads_2011_09_14_firstload_time\" btree (firstloadtime)\n \"idx_osmoduleloads_2011_09_14_host_guid\" btree (host_guid)\n \"idx_osmoduleloads_2011_09_14_md5hash\" btree (md5hash)\n \"idx_osmoduleloads_2011_09_14_osprocess_guid\" btree (osprocess_guid)\nCheck constraints:\n \"osmoduleloads_2011_09_14_event_time_check\" CHECK (event_time =\n'2011-09-14 00:00:00'::timestamp without time zone)\n \"osmoduleloads_2011_09_14_firstloadtime_check\" CHECK (firstloadtime >=\n129604464000000000::bigint::numeric AND firstloadtime <\n129605328000000000::bigint::numeric)\nInherits: osmoduleloads\n\nObjective: The firstloadtime check constraint ensures that the record is\napplicable to that daily table. (In case you were wondering, the large\nnumerics correspond to the Windows 100-nanosecond since the Epoch.) I'm\ninserting millions of records into each daily table so \"query slowness\" is\nquite easy to spot. Given that there is so much data per daily table, I was\nhoping to use the order by and limit clauses to \"stop out\" a query once it\nsufficed the limit clause and not be forced to visit each daily table.\n However, I'm spending way too much time in the older tables than I'd like -\nwhich leads me to believe that I;m doing something wrong. 
For ease of\nviewing, my explain analyze can be found at http://explain.depesz.com/s/tot\n\nI'm still very new to this so I'm not sure if explain.depesz.com saves the\noriginal query. It wasn't readily apparent that it did so here is the\noriginal query:\n\nSELECT osm_1.*, storefiles_1.*, filepaths_1.*, filepaths_2.* FROM (SELECT *\nFROM osmoduleloads JOIN hosts ON hosts.guid = osmoduleloads.host_guid WHERE\nhosts.guid = '2007075705813916178' AND osmoduleloads.firstloadtime >=\n129604320000000000 AND osmoduleloads.firstloadtime < 129610367990000000 AND\nhosts.enabled = true AND hosts.user_id = 111 ORDER BY\nosmoduleloads.firstloadtime DESC LIMIT 251) AS osm_1 LEFT OUTER JOIN\nstorefiles AS storefiles_1 ON osm_1.md5hash = storefiles_1.md5hash LEFT\nOUTER JOIN filepaths AS filepaths_1 ON osm_1.process_filepath_guid =\nfilepaths_1.guid AND osm_1.event_time = filepaths_1.event_time LEFT OUTER\nJOIN filepaths AS filepaths_2 ON osm_1.filepath_guid = filepaths_2.guid AND\nosm_1.event_time= filepaths_2.event_time ORDER BY osm_1.firstloadtime DESC;\n\nHopefully my assumptions about order by and limit are correct and this query\ncan be optimized.\n\nAgain, appreciate any help you can lend. Thanks in advance.\n\nMike",
"msg_date": "Wed, 21 Sep 2011 19:14:09 -0400",
"msg_from": "Michael Viscuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query optimization using order by and limit"
},
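A minimal sketch (not from the thread) of the daily-partition layout described in the post above, using 9.1-era inheritance partitioning; the 2011-09-15 firstloadtime bounds are inferred from the posted 2011-09-14 CHECK constraint and are illustrative only:

-- one child table per day, inheriting from the parent
CREATE TABLE osmoduleloads_2011_09_15 (
    CHECK (firstloadtime >= 129605328000000000::numeric
       AND firstloadtime <  129606192000000000::numeric)
) INHERITS (osmoduleloads);

CREATE INDEX idx_osmoduleloads_2011_09_15_firstload_time
    ON osmoduleloads_2011_09_15 (firstloadtime);
CREATE INDEX idx_osmoduleloads_2011_09_15_host_guid
    ON osmoduleloads_2011_09_15 (host_guid);

-- constraint exclusion must be active for the planner to skip non-matching days
SET constraint_exclusion = partition;   -- already the default in 9.1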
{
"msg_contents": "On 09/21/2011 07:14 PM, Michael Viscuso wrote:\n> Check constraints:\n> \"osmoduleloads_2011_09_14_event_time_check\" CHECK (event_time = \n> '2011-09-14 00:00:00'::timestamp without time zone)\n> \"osmoduleloads_2011_09_14_firstloadtime_check\" CHECK \n> (firstloadtime >= 129604464000000000::bigint::numeric AND \n> firstloadtime < 129605328000000000::bigint::numeric)\n> Inherits: osmoduleloads\n\nThat weird casting can't be helping. I'm not sure if it's your problem \nhere, but the constraint exclusion code is pretty picky about matching \nthe thing you're looking for against the CHECK constraint, and this is a \nmessy one. The bigint conversion in the middle there isn't doing \nanything useful for you anyway; you really should simplify this to just \nlook like this:\n\nfirstloadtime >= 129604464000000000::numeric\n\n> SELECT osm_1.*, storefiles_1.*, filepaths_1.*, filepaths_2.* FROM \n> (SELECT * FROM osmoduleloads JOIN hosts ON hosts.guid = \n> osmoduleloads.host_guid WHERE hosts.guid = '2007075705813916178' AND \n> osmoduleloads.firstloadtime >= 129604320000000000 AND \n> osmoduleloads.firstloadtime < 129610367990000000 AND hosts.enabled = \n> true AND hosts.user_id = 111 ORDER BY osmoduleloads.firstloadtime DESC \n> LIMIT 251) AS osm_1 LEFT OUTER JOIN storefiles AS storefiles_1 ON \n> osm_1.md5hash = storefiles_1.md5hash LEFT OUTER JOIN filepaths AS \n> filepaths_1 ON osm_1.process_filepath_guid = filepaths_1.guid AND \n> osm_1.event_time = filepaths_1.event_time LEFT OUTER JOIN filepaths AS \n> filepaths_2 ON osm_1.filepath_guid = filepaths_2.guid AND \n> osm_1.event_time= filepaths_2.event_time ORDER BY osm_1.firstloadtime \n> DESC;\n>\n\nWhat you should start with here is confirming whether or not a simpler \nquery touches all of the partitions or just the ones you expect it to. \nA simpler one like this:\n\nSELECT * FROM osmoduleloads WHERE osmoduleloads.firstloadtime >= \n129604320000000000 AND osmoduleloads.firstloadtime < 129610367990000000;\n\nWould be the place to begin. Once you've got that working, then you can \nbuild up more pieces, and see if one of them results in the query not \nexcluding partitions anymore or not. I can't figure out if you're \nrunning into a basic error here, where constraint exclusion just isn't \nworking at all, or if you are only having this problem because the query \nis too complicated. Figuring that out will narrow the potential solutions.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 09/21/2011 07:14 PM, Michael Viscuso wrote:\n\n\n\n\nCheck constraints:\n \"osmoduleloads_2011_09_14_event_time_check\" CHECK\n(event_time = '2011-09-14 00:00:00'::timestamp without time zone)\n \"osmoduleloads_2011_09_14_firstloadtime_check\" CHECK\n(firstloadtime >= 129604464000000000::bigint::numeric AND\nfirstloadtime < 129605328000000000::bigint::numeric)\nInherits: osmoduleloads\n\n\n\n\n\nThat weird casting can't be helping. I'm not sure if it's your problem\nhere, but the constraint exclusion code is pretty picky about matching\nthe thing you're looking for against the CHECK constraint, and this is\na messy one. 
The bigint conversion in the middle there isn't doing\nanything useful for you anyway; you really should simplify this to just\nlook like this:\n\nfirstloadtime\n>= 129604464000000000::numeric\n\n\n\nSELECT osm_1.*, storefiles_1.*, filepaths_1.*, filepaths_2.*\nFROM (SELECT * FROM osmoduleloads JOIN hosts ON hosts.guid =\nosmoduleloads.host_guid WHERE hosts.guid = '2007075705813916178' AND\nosmoduleloads.firstloadtime >= 129604320000000000 AND\nosmoduleloads.firstloadtime < 129610367990000000 AND hosts.enabled =\ntrue AND hosts.user_id = 111 ORDER BY osmoduleloads.firstloadtime DESC\nLIMIT 251) AS osm_1 LEFT OUTER JOIN storefiles AS storefiles_1 ON\nosm_1.md5hash = storefiles_1.md5hash LEFT OUTER JOIN filepaths AS\nfilepaths_1 ON osm_1.process_filepath_guid = filepaths_1.guid AND\nosm_1.event_time = filepaths_1.event_time LEFT OUTER JOIN filepaths AS\nfilepaths_2 ON osm_1.filepath_guid = filepaths_2.guid AND\nosm_1.event_time= filepaths_2.event_time ORDER BY osm_1.firstloadtime\nDESC;\n\n\n\n\nWhat you should start with here is confirming whether or not a simpler\nquery touches all of the partitions or just the ones you expect it to. \nA simpler one like this:\n\nSELECT\n* FROM\nosmoduleloads WHERE\nosmoduleloads.firstloadtime\n>= 129604320000000000 AND osmoduleloads.firstloadtime <\n129610367990000000;\n\nWould be the place to begin. Once you've got that working, then you can\nbuild up more pieces, and see if one of them results in the query not\nexcluding partitions anymore or not. I can't figure out if you're\nrunning into a basic error here, where constraint exclusion just isn't\nworking at all, or if you are only having this problem because the\nquery is too complicated. Figuring that out will narrow the potential\nsolutions.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Wed, 21 Sep 2011 21:48:41 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit"
},
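A sketch of the CHECK-constraint rewrite Greg suggests, spelled out for one child table; the table and constraint names are taken from the thread, and re-adding the constraint rescans the table to validate it:

ALTER TABLE osmoduleloads_2011_09_14
    DROP CONSTRAINT osmoduleloads_2011_09_14_firstloadtime_check;
ALTER TABLE osmoduleloads_2011_09_14
    ADD CONSTRAINT osmoduleloads_2011_09_14_firstloadtime_check
    CHECK (firstloadtime >= 129604464000000000::numeric
       AND firstloadtime <  129605328000000000::numeric);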
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> That weird casting can't be helping. I'm not sure if it's your problem \n> here, but the constraint exclusion code is pretty picky about matching \n> the thing you're looking for against the CHECK constraint, and this is a \n> messy one. The bigint conversion in the middle there isn't doing \n> anything useful for you anyway; you really should simplify this to just \n> look like this:\n> firstloadtime >= 129604464000000000::numeric\n\nI have a more aggressive suggestion: change all the numeric(20,0) fields\nto bigint. Unless the OP actually needs values wider than 64 bits,\nthe choice to use numeric is a significant performance penalty for\nnothing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Sep 2011 22:09:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit "
},
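A sketch of the column conversion Tom recommends, assuming the stored values all fit in a signed 64-bit range; shown for one column on the parent (ALTER TABLE recurses to the child tables by default), and note that it rewrites each table and its indexes:

ALTER TABLE osmoduleloads
    ALTER COLUMN firstloadtime TYPE bigint
    USING firstloadtime::bigint;
-- repeat for guid, osprocess_guid, filepath_guid, host_guid, and the other numeric(20,0) columns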
{
"msg_contents": "Thanks guys,\n\nFirst of all, I should have included my postgres.conf file with the\noriginal submission. Sorry about that. It is now attached.\n\nBased on a recommendation, I also should have shown the parent child\nrelationship between osmoduleloads and its daily partitioned tables. to\nreduce clutter, It is at the end of this message.\n\nTaking this one step at a time and taking Greg's second suggestion\nfirst, issuing\n\nselect * from osmoduleloads WHERE osmoduleloads.firstloadtime >=\n129604320000000000 AND osmoduleloads.firstloadtime < 129610367990000000;\n\nappears to only query the appropriate daily tables (2011_09_13 through\n2011_09_20 - http://explain.depesz.com/s/QCG). So it appears that\nconstraint_exclusion is working properly. Putting a limit on the query\nlike:\n\nselect * from osmoduleloads WHERE osmoduleloads.firstloadtime >=\n129604320000000000 AND osmoduleloads.firstloadtime < 129610367990000000\nlimit 251;\n\nhas the result that I'd expect to see http://explain.depesz.com/s/O7fZ.\nOrdering by firstloadtime AND limiting like:\n\nselect * from osmoduleloads WHERE osmoduleloads.firstloadtime >=\n129604320000000000 AND osmoduleloads.firstloadtime < 129610367990000000\norder by firstloadtime desc limit 251;\n\nalso has the result that I'd expect to see\nhttp://explain.depesz.com/s/RDh. \n\nAdding the hosts join condition to the mix was still OK\nhttp://explain.depesz.com/s/2Ns. \n\nAdding the hosts.enabled condition was still OK\nhttp://explain.depesz.com/s/UYN. \n\nAdding the hosts.user_id = 111 started the descent but it appears to\nstill be obeying the proper contraint_exclusion that I'd expect, just\nwith a ton of rows returned from the most recent daily tables\nhttp://explain.depesz.com/s/4WE. \n\nAdding the final condition hosts_guid = '2007075705813916178' is what\nultimately kills it http://explain.depesz.com/s/8zy. By adding the\nhost_guid, it spends considerably more time in the older tables than\nwithout this condition and I'm not sure why. \n\nThanks Greg for the recommendation to step through it like that -\nhopefully this helps get us closer to a resolution.\n\nGreg/Tom, you are correct, these columns should be modified to whatever\nis easiest for Postgres to recognize 64-bit unsigned integers. Would\nyou still recommend bigint for unsigned integers? I likely read the\nwrong documentation that suggested bigint for signed 64-bit integers and\nnumeric(20) for unsigned 64-bit integers.\n\nThanks again for all your help! 
Perhaps 15 hours of pouring over\nexplain logs will finally pan out!\n\nMike\n\ncb=# \\d+ osmoduleloads;\n Table \"public.osmoduleloads\"\n Column | Type | \nModifiers | Storage | Description\n-----------------------+-----------------------------+-----------------------+----------+-------------\n guid | numeric(20,0) | not\nnull | main |\n osprocess_guid | numeric(20,0) | not\nnull | main |\n filepath_guid | numeric(20,0) | not\nnull | main |\n firstloadtime | numeric(20,0) | not\nnull | main |\n md5hash | bytea | not\nnull | extended |\n host_guid | numeric(20,0) | default\nNULL::numeric | main |\n process_create_time | numeric(20,0) | default\nNULL::numeric | main |\n process_filepath_guid | numeric(20,0) | default\nNULL::numeric | main |\n event_time | timestamp without time zone\n| | plain |\nIndexes:\n \"osmoduleloads_pkey\" PRIMARY KEY, btree (guid)\nChild tables: osmoduleloads_2001_12_31,\n osmoduleloads_2010_10_11,\n osmoduleloads_2010_10_12,\n osmoduleloads_2010_10_13,\n osmoduleloads_2011_07_27,\n osmoduleloads_2011_08_04,\n osmoduleloads_2011_08_05,\n osmoduleloads_2011_08_06,\n osmoduleloads_2011_08_07,\n osmoduleloads_2011_08_08,\n osmoduleloads_2011_08_09,\n osmoduleloads_2011_08_10,\n osmoduleloads_2011_08_11,\n osmoduleloads_2011_08_12,\n osmoduleloads_2011_08_13,\n osmoduleloads_2011_08_14,\n osmoduleloads_2011_08_15,\n osmoduleloads_2011_08_16,\n osmoduleloads_2011_08_17,\n osmoduleloads_2011_08_18,\n osmoduleloads_2011_08_19,\n osmoduleloads_2011_08_20,\n osmoduleloads_2011_08_21,\n osmoduleloads_2011_08_22,\n osmoduleloads_2011_08_23,\n osmoduleloads_2011_08_24,\n osmoduleloads_2011_08_25,\n osmoduleloads_2011_08_26,\n osmoduleloads_2011_08_27,\n osmoduleloads_2011_08_28,\n osmoduleloads_2011_08_29,\n osmoduleloads_2011_08_30,\n osmoduleloads_2011_08_31,\n osmoduleloads_2011_09_01,\n osmoduleloads_2011_09_02,\n osmoduleloads_2011_09_03,\n osmoduleloads_2011_09_04,\n osmoduleloads_2011_09_05,\n osmoduleloads_2011_09_06,\n osmoduleloads_2011_09_07,\n osmoduleloads_2011_09_08,\n osmoduleloads_2011_09_09,\n osmoduleloads_2011_09_10,\n osmoduleloads_2011_09_11,\n osmoduleloads_2011_09_12,\n osmoduleloads_2011_09_13,\n osmoduleloads_2011_09_14,\n osmoduleloads_2011_09_15,\n osmoduleloads_2011_09_16,\n osmoduleloads_2011_09_17,\n osmoduleloads_2011_09_18,\n osmoduleloads_2011_09_19,\n osmoduleloads_2011_09_20,\n osmoduleloads_2011_12_01\nHas OIDs: no\n\nOn 9/21/2011 10:09 PM, Tom Lane wrote:\n> Greg Smith <[email protected]> writes:\n>> That weird casting can't be helping. I'm not sure if it's your problem \n>> here, but the constraint exclusion code is pretty picky about matching \n>> the thing you're looking for against the CHECK constraint, and this is a \n>> messy one. The bigint conversion in the middle there isn't doing \n>> anything useful for you anyway; you really should simplify this to just \n>> look like this:\n>> firstloadtime >= 129604464000000000::numeric\n> I have a more aggressive suggestion: change all the numeric(20,0) fields\n> to bigint. Unless the OP actually needs values wider than 64 bits,\n> the choice to use numeric is a significant performance penalty for\n> nothing.\n>\n> \t\t\tregards, tom lane\n>",
"msg_date": "Wed, 21 Sep 2011 22:55:21 -0400",
"msg_from": "Michael Viscuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query optimization using order by and limit"
},
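A compact way to re-run the partition-pruning check walked through above, relying only on stock 9.1 behaviour; the bounds are the ones from the thread:

SHOW constraint_exclusion;   -- expect 'partition' (the default) or 'on'
EXPLAIN SELECT * FROM osmoduleloads
WHERE firstloadtime >= 129604320000000000
  AND firstloadtime <  129610367990000000;
-- only the 2011_09_13 .. 2011_09_20 children should appear in the plan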
{
"msg_contents": "Michael Viscuso <[email protected]> writes:\n> Greg/Tom, you are correct, these columns should be modified to whatever\n> is easiest for Postgres to recognize 64-bit unsigned integers. Would\n> you still recommend bigint for unsigned integers? I likely read the\n> wrong documentation that suggested bigint for signed 64-bit integers and\n> numeric(20) for unsigned 64-bit integers.\n\nUnsigned? Oh, hm, that's a bit of a problem because we don't have any\nunsigned types. If you really need to go to 2^64 and not 2^63 then\nyou're stuck with numeric ... but that last bit is costing ya a lot.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Sep 2011 23:22:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit "
},
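Before settling on bigint versus numeric, a quick check (a sketch, not from the thread) of whether the stored values actually use the 64th bit:

SELECT max(firstloadtime) <= 9223372036854775807::numeric AS fits_in_bigint
FROM osmoduleloads;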
{
"msg_contents": "On Wed, Sep 21, 2011 at 11:22:53PM -0400, Tom Lane wrote:\n> Michael Viscuso <[email protected]> writes:\n> > Greg/Tom, you are correct, these columns should be modified to whatever\n> > is easiest for Postgres to recognize 64-bit unsigned integers. Would\n> > you still recommend bigint for unsigned integers? I likely read the\n> > wrong documentation that suggested bigint for signed 64-bit integers and\n> > numeric(20) for unsigned 64-bit integers.\n> \n> Unsigned? Oh, hm, that's a bit of a problem because we don't have any\n> unsigned types. If you really need to go to 2^64 and not 2^63 then\n> you're stuck with numeric ... but that last bit is costing ya a lot.\n> \n> \t\t\tregards, tom lane\n> \n\nHi Michael,\n\nIf you have access to the application, you can map the unsigned 64-bits\nto the PostgreSQL signed 64-bit type with a simple subtraction. That will\nallow you to drop all the numeric use. Also if the guid is a 64-bit\nvalues stuffed into a numeric(20), you can do it there as well. I achieved\na hefty performance boost by making those application level changes in a\nsimilar situation.\n\nRegards,\nKen\n",
"msg_date": "Thu, 22 Sep 2011 08:41:25 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit"
},
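A sketch of the offset mapping Ken describes, done in SQL rather than in the application; the 2^63 shift is illustrative, and any literals in queries or CHECK constraints would need the same shift applied:

-- unsigned 0 .. 2^64-1 maps onto signed -2^63 .. 2^63-1, preserving sort order
ALTER TABLE osmoduleloads
    ALTER COLUMN firstloadtime TYPE bigint
    USING (firstloadtime - 9223372036854775808::numeric)::bigint;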
{
"msg_contents": "Thanks Ken,\n\nI'm discussing with my coworker how to best make that change *as we\nspeak*. Do you think this will also resolve the original issue I'm\nseeing where the query doesn't \"limit out properly\" and spends time in\nchild tables that won't yield any results? I was hoping that by using\nthe check constraints, I could query over a week or month's worth of\npartitioned tables and the combination of order by and limit would\neliminate any time searching unnecessary tables but that doesn't appear\nto be true. (I'm still very new to high-end Postgres performance so I\ncould be mistaken.)\n\nRegardless, in the meantime, I'll switch those columns to bigint instead\nof numeric and have an update as soon as possible.\n\nThanks for your help!\n\nMike\n\nOn 9/22/2011 9:41 AM, [email protected] wrote:\n> On Wed, Sep 21, 2011 at 11:22:53PM -0400, Tom Lane wrote:\n>> Michael Viscuso <[email protected]> writes:\n>>> Greg/Tom, you are correct, these columns should be modified to whatever\n>>> is easiest for Postgres to recognize 64-bit unsigned integers. Would\n>>> you still recommend bigint for unsigned integers? I likely read the\n>>> wrong documentation that suggested bigint for signed 64-bit integers and\n>>> numeric(20) for unsigned 64-bit integers.\n>> Unsigned? Oh, hm, that's a bit of a problem because we don't have any\n>> unsigned types. If you really need to go to 2^64 and not 2^63 then\n>> you're stuck with numeric ... but that last bit is costing ya a lot.\n>>\n>> \t\t\tregards, tom lane\n>>\n> Hi Michael,\n>\n> If you have access to the application, you can map the unsigned 64-bits\n> to the PostgreSQL signed 64-bit type with a simple subtraction. That will\n> allow you to drop all the numeric use. Also if the guid is a 64-bit\n> values stuffed into a numeric(20), you can do it there as well. I achieved\n> a hefty performance boost by making those application level changes in a\n> similar situation.\n>\n> Regards,\n> Ken\n\n",
"msg_date": "Thu, 22 Sep 2011 09:55:28 -0400",
"msg_from": "Michael Viscuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query optimization using order by and limit"
},
{
"msg_contents": "* Michael Viscuso ([email protected]) wrote:\n> Adding the final condition hosts_guid = '2007075705813916178' is what\n> ultimately kills it http://explain.depesz.com/s/8zy. By adding the\n> host_guid, it spends considerably more time in the older tables than\n> without this condition and I'm not sure why. \n\nWhat I think is happening here is that PG is pushing down that filter\n(not typically a bad thing..), but with that condition, it's going to\nscan the index until it finds a match for that filter before returning\nback up only to have that result cut out due to the limit. Having it as\nnumerics isn't helping here, but the bigger issue is having to check all\nthose tuples for a match to the filter.\n\nMike, the filter has to be applied before the order by/limit, since\nthose clauses come after the filter has been applied (you wouldn't want\na 'where x = 2 limit 10' to return early just because it found 10\nrecords where x didn't equal 2).\n\nWhat would be great is if PG would realize that the CHECK constraints\nprevent earlier records from being in these earlier tables, so it\nshouldn't need to consider them at all once the records from the\n'latest' table has been found and the limit reached (reverse all this\nfor an 'ascending' query, of course), which we can do when there's no\norder by. I don't believe we have that kind of logic or that\ninformation available at this late stage- the CHECK constraints are used\nto eliminate the impossible-to-match tables, but that's it.\n\nOne option, which isn't great of course, would be to implement your own\n'nested loop' construct (something I typically despise..) in the\napplication which just walks backwards from the latest and pulls\nwhatever records it can from each day and then stops once it hits the\nlimit.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 22 Sep 2011 10:53:19 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit"
},
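A rough sketch of the per-partition walk Stephen describes, expressed as the SQL an application could issue one day at a time; the table names, host guid, bounds and the 251-row limit come from the thread, everything else is illustrative:

-- newest day first; collect rows until 251 have been gathered
SELECT * FROM osmoduleloads_2011_09_20
WHERE host_guid = '2007075705813916178'
  AND firstloadtime >= 129604320000000000
  AND firstloadtime <  129610367990000000
ORDER BY firstloadtime DESC
LIMIT 251;
-- if fewer than 251 rows come back, repeat against osmoduleloads_2011_09_19
-- with the LIMIT reduced by the number already fetched, and so on backwards in time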
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n \nStephen,\n \nI spent the better part of the day implementing an application layer\nnested loop and it seems to be working well. Of course it's a little\nslower than a Postgres only solution because it has to pass data back\nand forth for each daily table query until it reaches the limit, but at\nleast I don't have \"runaway\" queries like I was seeing before. That\nshould be a pretty good stopgap solution for the time being.\n \nI was really hoping there was a Postgres exclusive answer though! :) If\nthere are any other suggestions, it's a simple flag in my application to\nquery the other way again...\n \nThanks for all your help - and I'm still looking to change those\nnumerics to bigints, just haven't figured out the best way yet.\n \nMike\n \nOn 9/22/2011 10:53 AM, Stephen Frost wrote:\n> * Michael Viscuso ([email protected]) wrote:\n>> Adding the final condition hosts_guid = '2007075705813916178' is what\n>> ultimately kills it http://explain.depesz.com/s/8zy. By adding the\n>> host_guid, it spends considerably more time in the older tables than\n>> without this condition and I'm not sure why.\n>\n> What I think is happening here is that PG is pushing down that filter\n> (not typically a bad thing..), but with that condition, it's going to\n> scan the index until it finds a match for that filter before returning\n> back up only to have that result cut out due to the limit. Having it as\n> numerics isn't helping here, but the bigger issue is having to check all\n> those tuples for a match to the filter.\n>\n> Mike, the filter has to be applied before the order by/limit, since\n> those clauses come after the filter has been applied (you wouldn't want\n> a 'where x = 2 limit 10' to return early just because it found 10\n> records where x didn't equal 2).\n>\n> What would be great is if PG would realize that the CHECK constraints\n> prevent earlier records from being in these earlier tables, so it\n> shouldn't need to consider them at all once the records from the\n> 'latest' table has been found and the limit reached (reverse all this\n> for an 'ascending' query, of course), which we can do when there's no\n> order by. I don't believe we have that kind of logic or that\n> information available at this late stage- the CHECK constraints are used\n> to eliminate the impossible-to-match tables, but that's it.\n>\n> One option, which isn't great of course, would be to implement your own\n> 'nested loop' construct (something I typically despise..) in the\n> application which just walks backwards from the latest and pulls\n> whatever records it can from each day and then stops once it hits the\n> limit.\n>\n> Thanks,\n>\n> Stephen\n \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.17 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n \niQEcBAEBAgAGBQJOe7zzAAoJEBKjVK2HR1IXYwAIAKQBnFOtCNljL1Hs1ZQW3e+I\nele/kZCiHzgHLFpN7zawt1Y7qf+3ntd6u+mkatJsnqeC+HY1Qee4VTUqr+hIKhcc\nVIGuuYkzuojs6/PgF6MAERHP24lRFdLCQtMgTY8RshYODvc07VpqkLq1cXhsNJZw\n6pNBTEpEmA0MzMrmk3x6C8lFbyXZAYUxNLwG5SEWecV+lkOjnA70oKnSxG6EXRgk\nfkj2l1ezVn23KoO8SSUp4xBFHHOY/PQP9JtV7b52Gm5PC7lOqFFrXFygNP0KkWho\nTzyjoYKttShEjmTMXoLt181+NB4rQEas8USasemRA1pUkx2NrfvcK46gYucOAsg=\n=8yQW\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Thu, 22 Sep 2011 18:55:47 -0400",
"msg_from": "Michael Viscuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query optimization using order by and limit"
},
{
"msg_contents": "Mike,\n\n* Michael Viscuso ([email protected]) wrote:\n> I spent the better part of the day implementing an application layer\n> nested loop and it seems to be working well. Of course it's a little\n> slower than a Postgres only solution because it has to pass data back\n> and forth for each daily table query until it reaches the limit, but at\n> least I don't have \"runaway\" queries like I was seeing before. That\n> should be a pretty good stopgap solution for the time being.\n\nGlad to hear that you were able to get something going which worked for\nyou.\n\n> I was really hoping there was a Postgres exclusive answer though! :) If\n> there are any other suggestions, it's a simple flag in my application to\n> query the other way again...\n\nI continue to wonder if some combination of multi-column indexes might\nhave made the task of finding the 'lowest' record from each of the\ntables fast enough that it wouldn't be an issue.\n\n> Thanks for all your help - and I'm still looking to change those\n> numerics to bigints, just haven't figured out the best way yet.\n\nOur timestamps are also implemented using 64bit integers and would allow\nyou to use all the PG date/time functions and operators. Just a\nthought.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 22 Sep 2011 19:14:56 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit"
},
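If the firstloadtime values really are Windows FILETIME counters (100-nanosecond ticks since 1601-01-01, as the first post suggests), something like this exposes them to the native date/time machinery Stephen mentions; the 11644473600-second epoch offset is the standard FILETIME-to-Unix constant, not something taken from the thread:

SELECT to_timestamp(firstloadtime::bigint / 10000000.0 - 11644473600)
FROM osmoduleloads_2011_09_14
LIMIT 1;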
{
"msg_contents": "Stephen,\n\nYes, I couldn't agree more. The next two things I will be looking at very\ncarefully are the timestamps and indexes. I will reply to this post if\neither dramatically helps.\n\nThanks again for all your help. My eyes were starting to bleed from staring\nat explain logs!\n\nMike\n\nOn Thu, Sep 22, 2011 at 7:14 PM, Stephen Frost <[email protected]> wrote:\n\n> Mike,\n>\n> * Michael Viscuso ([email protected]) wrote:\n> > I spent the better part of the day implementing an application layer\n> > nested loop and it seems to be working well. Of course it's a little\n> > slower than a Postgres only solution because it has to pass data back\n> > and forth for each daily table query until it reaches the limit, but at\n> > least I don't have \"runaway\" queries like I was seeing before. That\n> > should be a pretty good stopgap solution for the time being.\n>\n> Glad to hear that you were able to get something going which worked for\n> you.\n>\n> > I was really hoping there was a Postgres exclusive answer though! :) If\n> > there are any other suggestions, it's a simple flag in my application to\n> > query the other way again...\n>\n> I continue to wonder if some combination of multi-column indexes might\n> have made the task of finding the 'lowest' record from each of the\n> tables fast enough that it wouldn't be an issue.\n>\n> > Thanks for all your help - and I'm still looking to change those\n> > numerics to bigints, just haven't figured out the best way yet.\n>\n> Our timestamps are also implemented using 64bit integers and would allow\n> you to use all the PG date/time functions and operators. Just a\n> thought.\n>\n> Thanks,\n>\n> Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.10 (GNU/Linux)\n>\n> iEYEARECAAYFAk57wXAACgkQrzgMPqB3kijaNwCfQ9cSdzzHyiPwa+BTzIihWR7T\n> baoAoIbL8P3atU1cfbcCoFXFGbKE7fPt\n> =ZRqu\n> -----END PGP SIGNATURE-----\n>\n>\n\nStephen,Yes, I couldn't agree more. The next two things I will be looking at very carefully are the timestamps and indexes. I will reply to this post if either dramatically helps.\nThanks again for all your help. My eyes were starting to bleed from staring at explain logs!MikeOn Thu, Sep 22, 2011 at 7:14 PM, Stephen Frost <[email protected]> wrote:\nMike,\n\n* Michael Viscuso ([email protected]) wrote:\n> I spent the better part of the day implementing an application layer\n> nested loop and it seems to be working well. Of course it's a little\n> slower than a Postgres only solution because it has to pass data back\n> and forth for each daily table query until it reaches the limit, but at\n> least I don't have \"runaway\" queries like I was seeing before. That\n> should be a pretty good stopgap solution for the time being.\n\nGlad to hear that you were able to get something going which worked for\nyou.\n\n> I was really hoping there was a Postgres exclusive answer though! :) If\n> there are any other suggestions, it's a simple flag in my application to\n> query the other way again...\n\nI continue to wonder if some combination of multi-column indexes might\nhave made the task of finding the 'lowest' record from each of the\ntables fast enough that it wouldn't be an issue.\n\n> Thanks for all your help - and I'm still looking to change those\n> numerics to bigints, just haven't figured out the best way yet.\n\nOur timestamps are also implemented using 64bit integers and would allow\nyou to use all the PG date/time functions and operators. 
Just a\nthought.\n\n Thanks,\n\n Stephen\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.10 (GNU/Linux)\n\niEYEARECAAYFAk57wXAACgkQrzgMPqB3kijaNwCfQ9cSdzzHyiPwa+BTzIihWR7T\nbaoAoIbL8P3atU1cfbcCoFXFGbKE7fPt\n=ZRqu\n-----END PGP SIGNATURE-----",
"msg_date": "Thu, 22 Sep 2011 19:21:04 -0400",
"msg_from": "Michael Viscuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query optimization using order by and limit"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> What I think is happening here is that PG is pushing down that filter\n> (not typically a bad thing..), but with that condition, it's going to\n> scan the index until it finds a match for that filter before returning\n> back up only to have that result cut out due to the limit.\n\nYeah, it's spending quite a lot of time finding the first matching row\nin each child table. I'm curious why that is though; are the child\ntables not set up with nonoverlapping firstloadtime ranges?\n\n> What would be great is if PG would realize that the CHECK constraints\n> prevent earlier records from being in these earlier tables,\n\nThe explain shows that that isn't the case, because it *is* finding at\nleast one candidate row in each table. It's just running quite far into\nthe firstloadtime sequence to do it.\n\nIf you're stuck with this table arrangement, one thing that would help\nis a two-column index on (host_guid, firstloadtime) on each child table.\nThat would match the search condition exactly, and so reduce the cost\nto find the first matching row to nearly nil. Whether this query's\nspeed is important enough to justify maintaining such an index is a\nquestion I can't answer for you.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Sep 2011 18:22:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit "
},
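The two-column index Tom suggests, spelled out for a single child table (the index name is illustrative; it would need to be created on each daily partition):

CREATE INDEX idx_osmoduleloads_2011_09_14_host_firstload
    ON osmoduleloads_2011_09_14 (host_guid, firstloadtime);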
{
"msg_contents": "* Tom Lane ([email protected]) wrote:\n> Yeah, it's spending quite a lot of time finding the first matching row\n> in each child table. I'm curious why that is though; are the child\n> tables not set up with nonoverlapping firstloadtime ranges?\n\nThey are set up w/ nonoverlapping firstloadtime ranges, using CHECK\nconstraints such as:\n\n\"osmoduleloads_2011_09_14_firstloadtime_check\" CHECK (firstloadtime >=\n129604464000000000::bigint::numeric AND firstloadtime <\n129605328000000000::bigint::numeric)\n\nThe issue here is that the query is saying \"Give me the first 150\nrecords with this host_id in this week-long range\". PG happily\neliminates all the tables that are outside of the week-long range during\nconstraint exclusion. After that, however, it hunts down the earliest\nrecords (which matches 'host_id') from each child table. Sure, from\neach table there's a record in the week-long range with the host_id that\nmatches. What PG doesn't realize is that it can stop after pulling the\n150 records from the most recent table (and flipping the direction of\nthe query or the tables doesn't help- PG still pulls a record from each\ntable).\n\n> > What would be great is if PG would realize that the CHECK constraints\n> > prevent earlier records from being in these earlier tables,\n> \n> The explain shows that that isn't the case, because it *is* finding at\n> least one candidate row in each table. It's just running quite far into\n> the firstloadtime sequence to do it.\n\nMy point above is that the CHECK constraints ensure an ordering which\ncould be leveraged to use the latest table first and then stop if enough\ntuples are returned (or immediately go to the next table), without ever\nconsidering the other tables. I'm not looking for PG to eliminate those\nother tables for consideration in all cases- if the limit is large\nenough, it may get all the way down to them. I'm pretty sure this isn't\nsomething which PG does today and I don't expect teaching it to do this\nto be trivial, but it certainly would be nice as this strikes me as a\nvery common use-case.\n\n> If you're stuck with this table arrangement, one thing that would help\n> is a two-column index on (host_guid, firstloadtime) on each child table.\n\nAgreed, I mentioned this to the OP previously and it's on his list of\nthings to try.\n\n\tThanks,\n\t\t\n\t\tStephen",
"msg_date": "Sun, 25 Sep 2011 20:34:28 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> * Tom Lane ([email protected]) wrote:\n>> Yeah, it's spending quite a lot of time finding the first matching row\n>> in each child table. I'm curious why that is though; are the child\n>> tables not set up with nonoverlapping firstloadtime ranges?\n\n> The issue here is that the query is saying \"Give me the first 150\n> records with this host_id in this week-long range\".\n\nOh, I see. So the query range overlaps multiple child tables, even\nafter constraint exclusion eliminates a lot of them.\n\n> My point above is that the CHECK constraints ensure an ordering which\n> could be leveraged to use the latest table first and then stop if enough\n> tuples are returned (or immediately go to the next table), without ever\n> considering the other tables.\n\nYeah. My opinion is that trying to reverse-engineer that from the CHECK\nconstraints would cost a lot more than it's worth. What we need, and\nwill hopefully have sooner or later, is an abstract concept of\n\"partitioned table\" in which this kind of relationship is known a-priori\ninstead of having to be laboriously re-deduced every time we plan a\nquery.\n\n>> If you're stuck with this table arrangement, one thing that would help\n>> is a two-column index on (host_guid, firstloadtime) on each child table.\n\n> Agreed, I mentioned this to the OP previously and it's on his list of\n> things to try.\n\nAFAICS the fact that this example would be fast if we were only paying\nattention to the newest table is mere luck. If it can take a long time\nto find the first matching host_guid record in several of the child\ntables, why might it not take just as long to find said record in the\nother one? I think you really need the two-column indexes, if keeping\nthis query's runtime to a minimum is critical.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Sep 2011 21:24:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization using order by and limit "
}
] |
[
{
"msg_contents": "I am working on a fuzzy search of a large dataset. Basically, it is a list\nof all of the songs, albums, artists, movies, and celebrities exported from\nFreebase. Anyway, I was hoping to get a fuzzy search that was nearly as\nfast as the full-text search with the new nearest-neighbor GIST indexes,\nbut, while it is improved from 9.0, it is still taking some time. The table\nhas about 16 million rows, each with a \"name\" column that is usually 2-10\nwords.\n\nMy query using full-text search is this:\nexplain analyze select * from entities where\nto_tsvector('unaccented_english', entities.name) @@\nplainto_tsquery('unaccented_english', 'bon jovi');\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on entities (cost=42.64..1375.62 rows=340 width=1274)\n(actual time=0.422..0.617 rows=109 loops=1)\n Recheck Cond: (to_tsvector('unaccented_english'::regconfig, (name)::text)\n@@ '''bon'' & ''jovi'''::tsquery)\n -> Bitmap Index Scan on entity_unaccented_name_gin_index\n (cost=0.00..42.56 rows=340 width=0) (actual time=0.402..0.402 rows=109\nloops=1)\n Index Cond: (to_tsvector('unaccented_english'::regconfig,\n(name)::text) @@ '''bon'' & ''jovi'''::tsquery)\n Total runtime: 0.728 ms\n(5 rows)\n\nMy new query using trigrams is this:\nexplain analyze select * from entities where name % 'bon jovi';\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on entities (cost=913.73..46585.14 rows=13615 width=1274)\n(actual time=7769.380..7772.739 rows=326 loops=1)\n Recheck Cond: ((name)::text % 'bon jovi'::text)\n -> Bitmap Index Scan on tmp_entity_name_trgm_gist_idx\n (cost=0.00..910.33 rows=13615 width=0) (actual time=7769.307..7769.307\nrows=326 loops=1)\n Index Cond: ((name)::text % 'bon jovi'::text)\n Total runtime: 7773.008 ms\n\nIf I put a limit on it, it gets better, but is still pretty bad:\nexplain analyze select * from entities where name % 'bon jovi' limit 50;\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..200.14 rows=50 width=1274) (actual time=1.246..1226.146\nrows=50 loops=1)\n -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n (cost=0.00..54498.48 rows=13615 width=1274) (actual time=1.243..1226.016\nrows=50 loops=1)\n Index Cond: ((name)::text % 'bon jovi'::text)\n Total runtime: 1226.261 ms\n(4 rows)\n\nAnd if I try to get the \"best\" matches, the performance goes completely down\nthe tubes, even with a limit:\nexplain analyze select * from entities where name % 'bon jovi' order by name\n<-> 'bon jovi' limit 50;\n\nQUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..200.39 rows=50 width=1274) (actual\ntime=421.811..8058.877 rows=50 loops=1)\n -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n (cost=0.00..54566.55 rows=13615 width=1274) (actual time=421.808..8058.766\nrows=50 loops=1)\n Index Cond: ((name)::text % 'bon jovi'::text)\n Order By: ((name)::text <-> 'bon jovi'::text)\n Total runtime: 8060.760 ms\n\nAnyway, this may just be a limitation of the trigram indexing, but my hope\nwas to get a fuzzy search 
that at least approached the performance of the\nfull-text searching. Am I missing something, or am I just bumping into the\nlimits? I also noticed that different strings get radically different\nperformance. Searching for \"hello\" drops the search time down to 310ms!\n But searching for 'hello my friend' brings the search time to 9616ms!\nexplain analyze select * from entities where name % 'hello' order by name\n<-> 'hello' limit 50;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..200.39 rows=50 width=1274) (actual time=5.056..309.492\nrows=50 loops=1)\n -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n (cost=0.00..54566.55 rows=13615 width=1274) (actual time=5.053..309.393\nrows=50 loops=1)\n Index Cond: ((name)::text % 'hello'::text)\n Order By: ((name)::text <-> 'hello'::text)\n Total runtime: 309.637 ms\n\nexplain analyze select * from entities where name % 'hello my friend' order\nby name <-> 'hello my friend' limit 50;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..200.39 rows=50 width=1274) (actual time=76.358..9616.066\nrows=50 loops=1)\n -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n (cost=0.00..54566.55 rows=13615 width=1274) (actual time=76.356..9615.968\nrows=50 loops=1)\n Index Cond: ((name)::text % 'hello my friend'::text)\n Order By: ((name)::text <-> 'hello my friend'::text)\n Total runtime: 9616.203 ms\n\nFor reference, here is my table structure:\n\\d entities\n Table \"public.entities\"\n Column | Type |\nModifiers\n----------------------+-----------------------------+-------------------------------------------------------\n id | integer | not null default\nnextval('entities_id_seq'::regclass)\n name | character varying(255) |\n disambiguation | character varying(255) |\n description | text |\n entity_basic_type | character varying(255) |\n entity_extended_type | character varying(255) |\n primary | boolean | default true\n semantic_world_id | integer |\n calc_completed | boolean | default true\n source | text |\n source_entity_id | integer |\n created_at | timestamp without time zone |\n updated_at | timestamp without time zone |\n data_import_id | integer |\n validated | boolean | default true\n weight | integer |\n description_source | text |\n description_url | text |\n rating | text |\nIndexes:\n \"entities_pkey\" PRIMARY KEY, btree (id)\n \"entity_lower_name_idx\" btree (lower(name::text) text_pattern_ops)\n \"entity_name_gin_index\" gin (to_tsvector('english'::regconfig,\nname::text))\n \"entity_unaccented_name_gin_index\" gin\n(to_tsvector('unaccented_english'::regconfig, name::text))\n \"index_entities_on_data_import_id\" btree (data_import_id)\n \"index_entities_on_name\" btree (name)\n \"index_entities_on_source\" btree (source)\n \"tmp_entity_name_trgm_gist_idx\" gist (name gist_trgm_ops)\n\nThanks for the help!\n\nJon",
"msg_date": "Thu, 22 Sep 2011 11:40:46 -0500",
"msg_from": "Jonathan Bartlett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing Trigram searches in PG 9.1"
},
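A few knobs worth trying against the schema above, sketched under the assumption of a stock 9.1 pg_trgm install; the 0.5 threshold is illustrative:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- a GIN trigram index is usually much faster than GiST for % matches,
-- though in 9.1 only the GiST index can drive the <-> (distance) ORDER BY
CREATE INDEX entity_name_trgm_gin_idx ON entities USING gin (name gin_trgm_ops);

-- raise the similarity cutoff so % (and the index scan behind it) stays selective
SELECT set_limit(0.5);
SELECT show_limit();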
{
"msg_contents": "Depending on your needs, you might consider putting the data into a columnar\ntext search engine like Lucene, having it return the integer id's which can\nthen be used for row lookups in PG.\n\nOn Thu, Sep 22, 2011 at 11:40 AM, Jonathan Bartlett <\[email protected]> wrote:\n\n> I am working on a fuzzy search of a large dataset. Basically, it is a list\n> of all of the songs, albums, artists, movies, and celebrities exported from\n> Freebase. Anyway, I was hoping to get a fuzzy search that was nearly as\n> fast as the full-text search with the new nearest-neighbor GIST indexes,\n> but, while it is improved from 9.0, it is still taking some time. The table\n> has about 16 million rows, each with a \"name\" column that is usually 2-10\n> words.\n>\n> My query using full-text search is this:\n> explain analyze select * from entities where\n> to_tsvector('unaccented_english', entities.name) @@\n> plainto_tsquery('unaccented_english', 'bon jovi');\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on entities (cost=42.64..1375.62 rows=340 width=1274)\n> (actual time=0.422..0.617 rows=109 loops=1)\n> Recheck Cond: (to_tsvector('unaccented_english'::regconfig,\n> (name)::text) @@ '''bon'' & ''jovi'''::tsquery)\n> -> Bitmap Index Scan on entity_unaccented_name_gin_index\n> (cost=0.00..42.56 rows=340 width=0) (actual time=0.402..0.402 rows=109\n> loops=1)\n> Index Cond: (to_tsvector('unaccented_english'::regconfig,\n> (name)::text) @@ '''bon'' & ''jovi'''::tsquery)\n> Total runtime: 0.728 ms\n> (5 rows)\n>\n> My new query using trigrams is this:\n> explain analyze select * from entities where name % 'bon jovi';\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on entities (cost=913.73..46585.14 rows=13615\n> width=1274) (actual time=7769.380..7772.739 rows=326 loops=1)\n> Recheck Cond: ((name)::text % 'bon jovi'::text)\n> -> Bitmap Index Scan on tmp_entity_name_trgm_gist_idx\n> (cost=0.00..910.33 rows=13615 width=0) (actual time=7769.307..7769.307\n> rows=326 loops=1)\n> Index Cond: ((name)::text % 'bon jovi'::text)\n> Total runtime: 7773.008 ms\n>\n> If I put a limit on it, it gets better, but is still pretty bad:\n> explain analyze select * from entities where name % 'bon jovi' limit 50;\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..200.14 rows=50 width=1274) (actual time=1.246..1226.146\n> rows=50 loops=1)\n> -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n> (cost=0.00..54498.48 rows=13615 width=1274) (actual time=1.243..1226.016\n> rows=50 loops=1)\n> Index Cond: ((name)::text % 'bon jovi'::text)\n> Total runtime: 1226.261 ms\n> (4 rows)\n>\n> And if I try to get the \"best\" matches, the performance goes completely\n> down the tubes, even with a limit:\n> explain analyze select * from entities where name % 'bon jovi' order by\n> name <-> 'bon jovi' limit 50;\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..200.39 rows=50 width=1274) (actual\n> time=421.811..8058.877 rows=50 
loops=1)\n> -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n> (cost=0.00..54566.55 rows=13615 width=1274) (actual time=421.808..8058.766\n> rows=50 loops=1)\n> Index Cond: ((name)::text % 'bon jovi'::text)\n> Order By: ((name)::text <-> 'bon jovi'::text)\n> Total runtime: 8060.760 ms\n>\n> Anyway, this may just be a limitation of the trigram indexing, but my hope\n> was to get a fuzzy search that at least approached the performance of the\n> full-text searching. Am I missing something, or am I just bumping into the\n> limits? I also noticed that different strings get radically different\n> performance. Searching for \"hello\" drops the search time down to 310ms!\n> But searching for 'hello my friend' brings the search time to 9616ms!\n> explain analyze select * from entities where name % 'hello' order by name\n> <-> 'hello' limit 50;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..200.39 rows=50 width=1274) (actual time=5.056..309.492rows=50 loops=1)\n> -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n> (cost=0.00..54566.55 rows=13615 width=1274) (actual time=5.053..309.393rows=50 loops=1)\n> Index Cond: ((name)::text % 'hello'::text)\n> Order By: ((name)::text <-> 'hello'::text)\n> Total runtime: 309.637 ms\n>\n> explain analyze select * from entities where name % 'hello my friend' order\n> by name <-> 'hello my friend' limit 50;\n>\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..200.39 rows=50 width=1274) (actual\n> time=76.358..9616.066 rows=50 loops=1)\n> -> Index Scan using tmp_entity_name_trgm_gist_idx on entities\n> (cost=0.00..54566.55 rows=13615 width=1274) (actual time=76.356..9615.968\n> rows=50 loops=1)\n> Index Cond: ((name)::text % 'hello my friend'::text)\n> Order By: ((name)::text <-> 'hello my friend'::text)\n> Total runtime: 9616.203 ms\n>\n> For reference, here is my table structure:\n> \\d entities\n> Table \"public.entities\"\n> Column | Type |\n> Modifiers\n>\n> ----------------------+-----------------------------+-------------------------------------------------------\n> id | integer | not null default\n> nextval('entities_id_seq'::regclass)\n> name | character varying(255) |\n> disambiguation | character varying(255) |\n> description | text |\n> entity_basic_type | character varying(255) |\n> entity_extended_type | character varying(255) |\n> primary | boolean | default true\n> semantic_world_id | integer |\n> calc_completed | boolean | default true\n> source | text |\n> source_entity_id | integer |\n> created_at | timestamp without time zone |\n> updated_at | timestamp without time zone |\n> data_import_id | integer |\n> validated | boolean | default true\n> weight | integer |\n> description_source | text |\n> description_url | text |\n> rating | text |\n> Indexes:\n> \"entities_pkey\" PRIMARY KEY, btree (id)\n> \"entity_lower_name_idx\" btree (lower(name::text) text_pattern_ops)\n> \"entity_name_gin_index\" gin (to_tsvector('english'::regconfig,\n> name::text))\n> \"entity_unaccented_name_gin_index\" gin\n> (to_tsvector('unaccented_english'::regconfig, name::text))\n> \"index_entities_on_data_import_id\" btree (data_import_id)\n> \"index_entities_on_name\" btree (name)\n> \"index_entities_on_source\" btree (source)\n> 
\"tmp_entity_name_trgm_gist_idx\" gist (name gist_trgm_ops)\n>\n> Thanks for the help!\n>\n> Jon\n>\n\nDepending on your needs, you might consider putting the data into a columnar text search engine like Lucene, having it return the integer id's which can then be used for row lookups in PG.\nOn Thu, Sep 22, 2011 at 11:40 AM, Jonathan Bartlett <[email protected]> wrote:\nI am working on a fuzzy search of a large dataset. Basically, it is a list of all of the songs, albums, artists, movies, and celebrities exported from Freebase. Anyway, I was hoping to get a fuzzy search that was nearly as fast as the full-text search with the new nearest-neighbor GIST indexes, but, while it is improved from 9.0, it is still taking some time. The table has about 16 million rows, each with a \"name\" column that is usually 2-10 words.\nMy query using full-text search is this:explain analyze select * from entities where to_tsvector('unaccented_english', entities.name) @@ plainto_tsquery('unaccented_english', 'bon jovi');\n QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on entities (cost=42.64..1375.62 rows=340 width=1274) (actual time=0.422..0.617 rows=109 loops=1) Recheck Cond: (to_tsvector('unaccented_english'::regconfig, (name)::text) @@ '''bon'' & ''jovi'''::tsquery)\n -> Bitmap Index Scan on entity_unaccented_name_gin_index (cost=0.00..42.56 rows=340 width=0) (actual time=0.402..0.402 rows=109 loops=1) Index Cond: (to_tsvector('unaccented_english'::regconfig, (name)::text) @@ '''bon'' & ''jovi'''::tsquery)\n Total runtime: 0.728 ms(5 rows)My new query using trigrams is this:explain analyze select * from entities where name % 'bon jovi'; QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on entities (cost=913.73..46585.14 rows=13615 width=1274) (actual time=7769.380..7772.739 rows=326 loops=1)\n Recheck Cond: ((name)::text % 'bon jovi'::text) -> Bitmap Index Scan on tmp_entity_name_trgm_gist_idx (cost=0.00..910.33 rows=13615 width=0) (actual time=7769.307..7769.307 rows=326 loops=1)\n Index Cond: ((name)::text % 'bon jovi'::text) Total runtime: 7773.008 msIf I put a limit on it, it gets better, but is still pretty bad:\nexplain analyze select * from entities where name % 'bon jovi' limit 50; QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.00..200.14 rows=50 width=1274) (actual time=1.246..1226.146 rows=50 loops=1)\n -> Index Scan using tmp_entity_name_trgm_gist_idx on entities (cost=0.00..54498.48 rows=13615 width=1274) (actual time=1.243..1226.016 rows=50 loops=1) Index Cond: ((name)::text % 'bon jovi'::text)\n Total runtime: 1226.261 ms(4 rows)And if I try to get the \"best\" matches, the performance goes completely down the tubes, even with a limit:explain analyze select * from entities where name % 'bon jovi' order by name <-> 'bon jovi' limit 50;\n QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..200.39 rows=50 width=1274) (actual time=421.811..8058.877 rows=50 loops=1) -> Index Scan using tmp_entity_name_trgm_gist_idx on entities (cost=0.00..54566.55 rows=13615 width=1274) 
(actual time=421.808..8058.766 rows=50 loops=1)\n Index Cond: ((name)::text % 'bon jovi'::text) Order By: ((name)::text <-> 'bon jovi'::text) Total runtime: 8060.760 msAnyway, this may just be a limitation of the trigram indexing, but my hope was to get a fuzzy search that at least approached the performance of the full-text searching. Am I missing something, or am I just bumping into the limits? I also noticed that different strings get radically different performance. Searching for \"hello\" drops the search time down to 310ms! But searching for 'hello my friend' brings the search time to 9616ms!\nexplain analyze select * from entities where name % 'hello' order by name <-> 'hello' limit 50; QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=0.00..200.39 rows=50 width=1274) (actual time=5.056..309.492 rows=50 loops=1)\n -> Index Scan using tmp_entity_name_trgm_gist_idx on entities (cost=0.00..54566.55 rows=13615 width=1274) (actual time=5.053..309.393 rows=50 loops=1)\n Index Cond: ((name)::text % 'hello'::text)\n Order By: ((name)::text <-> 'hello'::text) Total runtime: 309.637 msexplain analyze select * from entities where name % 'hello my friend' order by name <-> 'hello my friend' limit 50;\n QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..200.39 rows=50 width=1274) (actual time=76.358..9616.066 rows=50 loops=1) -> Index Scan using tmp_entity_name_trgm_gist_idx on entities (cost=0.00..54566.55 rows=13615 width=1274) (actual time=76.356..9615.968 rows=50 loops=1)\n Index Cond: ((name)::text % 'hello my friend'::text) Order By: ((name)::text <-> 'hello my friend'::text) Total runtime: 9616.203 ms\nFor reference, here is my table structure:\\d entities Table \"public.entities\" Column | Type | Modifiers \n----------------------+-----------------------------+------------------------------------------------------- id | integer | not null default nextval('entities_id_seq'::regclass)\n name | character varying(255) | disambiguation | character varying(255) | description | text | entity_basic_type | character varying(255) | \n entity_extended_type | character varying(255) | primary | boolean | default true semantic_world_id | integer | calc_completed | boolean | default true\n source | text | source_entity_id | integer | created_at | timestamp without time zone | updated_at | timestamp without time zone | \n data_import_id | integer | validated | boolean | default true weight | integer | description_source | text | \n description_url | text | rating | text | Indexes: \"entities_pkey\" PRIMARY KEY, btree (id) \"entity_lower_name_idx\" btree (lower(name::text) text_pattern_ops)\n \"entity_name_gin_index\" gin (to_tsvector('english'::regconfig, name::text)) \"entity_unaccented_name_gin_index\" gin (to_tsvector('unaccented_english'::regconfig, name::text))\n \"index_entities_on_data_import_id\" btree (data_import_id) \"index_entities_on_name\" btree (name) \"index_entities_on_source\" btree (source) \"tmp_entity_name_trgm_gist_idx\" gist (name gist_trgm_ops)\nThanks for the help!Jon",
"msg_date": "Thu, 22 Sep 2011 18:03:51 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Trigram searches in PG 9.1"
}
] |
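A minimal sketch of the pg_trgm setup this thread is discussing, for readers who want to reproduce it (illustrative only, assuming a simplified entities(id, name) table rather than Jon's full schema):

-- pg_trgm supplies the % similarity operator and, in 9.1, the <-> distance operator
CREATE EXTENSION pg_trgm;

-- GiST trigram index, as in the thread; it supports both % filtering and <-> ordering
CREATE INDEX entities_name_trgm_gist_idx
    ON entities USING gist (name gist_trgm_ops);

-- fuzzy match, best candidates first
SELECT id, name
FROM entities
WHERE name % 'bon jovi'
ORDER BY name <-> 'bon jovi'
LIMIT 50;

-- the similarity threshold used by % can be tuned per session
SELECT set_limit(0.5);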
[
{
"msg_contents": "Hi everyone,\n\nI did a software upgrade, and with it came a new feature where when \nselecting a customer it queries for the sum of a few columns. This \ntakes 7 seconds for the 'Cash Sale' customer - by far the most active \ncustomer. I'd like to see if it's possible to get it down a bit by \nchanging settings.\n\nQuery:\nexplain analyse select sum(item_points),sum(disc_points) from invoice \nleft join gltx on invoice.invoice_id = gltx.gltx_id\nwhere gltx.inactive_on is null and gltx.posted = 'Y' and \ngltx.customer_id = 'A0ZQ2gsACIsEKLI638ikyg'\n\nitem_points and disc_points are the 2 columns added, so they are mostly 0.\n\ntable info:\nCREATE TABLE gltx -- rows: 894,712\n(\n gltx_id character(22) NOT NULL,\n \"version\" integer NOT NULL,\n created_by character varying(16) NOT NULL,\n updated_by character varying(16),\n inactive_by character varying(16),\n created_on date NOT NULL,\n updated_on date,\n inactive_on date,\n external_id numeric(14,0),\n data_type integer NOT NULL,\n \"number\" character varying(14) NOT NULL,\n reference_str character varying(14),\n post_date date NOT NULL,\n post_time time without time zone NOT NULL,\n work_date date NOT NULL,\n memo text,\n customer_id character(22),\n vendor_id character(22),\n station_id character(22),\n employee_id character(22),\n store_id character(22) NOT NULL,\n shift_id character(22),\n link_id character(22),\n link_num integer NOT NULL,\n printed character(1) NOT NULL,\n paid character(1) NOT NULL,\n posted character(1) NOT NULL,\n amount numeric(18,4) NOT NULL,\n card_amt numeric(18,4) NOT NULL,\n paid_amt numeric(18,4) NOT NULL,\n paid_date date,\n due_date date,\n CONSTRAINT gltx_pkey PRIMARY KEY (gltx_id),\n CONSTRAINT gltx_c0 FOREIGN KEY (customer_id)\n REFERENCES customer (customer_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gltx_c1 FOREIGN KEY (vendor_id)\n REFERENCES vendor (vendor_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gltx_c2 FOREIGN KEY (station_id)\n REFERENCES station (station_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gltx_c3 FOREIGN KEY (employee_id)\n REFERENCES employee (employee_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gltx_c4 FOREIGN KEY (store_id)\n REFERENCES store (store_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gltx_c5 FOREIGN KEY (shift_id)\n REFERENCES shift (shift_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT gltx_c6 FOREIGN KEY (link_id)\n REFERENCES gltx (gltx_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE SET NULL\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE gltx OWNER TO quasar;\nGRANT ALL ON TABLE gltx TO quasar;\n\nCREATE INDEX gltx_i0\n ON gltx\n USING btree\n (data_type);\n\nCREATE INDEX gltx_i1\n ON gltx\n USING btree\n (post_date);\n\nCREATE INDEX gltx_i2\n ON gltx\n USING btree\n (number);\n\nCREATE INDEX gltx_i3\n ON gltx\n USING btree\n (data_type, number);\n\nCREATE INDEX gltx_i4\n ON gltx\n USING btree\n (customer_id, paid);\n\nCREATE INDEX gltx_i5\n ON gltx\n USING btree\n (vendor_id, paid);\n\nCREATE INDEX gltx_i6\n ON gltx\n USING btree\n (work_date);\n\nCREATE INDEX gltx_i7\n ON gltx\n USING btree\n (link_id);\n\n\nCREATE TABLE invoice -- 623,270 rows\n(\n invoice_id character(22) NOT NULL,\n ship_id character(22),\n ship_via character varying(20),\n term_id character(22),\n promised_date date,\n tax_exempt_id character(22),\n customer_addr text,\n ship_addr text,\n comments text,\n 
item_points numeric(14,0) NOT NULL,\n disc_points numeric(14,0) NOT NULL,\n CONSTRAINT invoice_pkey PRIMARY KEY (invoice_id),\n CONSTRAINT invoice_c0 FOREIGN KEY (invoice_id)\n REFERENCES gltx (gltx_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT invoice_c1 FOREIGN KEY (ship_id)\n REFERENCES customer (customer_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT invoice_c2 FOREIGN KEY (term_id)\n REFERENCES term (term_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT invoice_c3 FOREIGN KEY (tax_exempt_id)\n REFERENCES tax (tax_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\nBoth tables have mostly writes, some updates, very few deletes.\n\nExplain analyse: (http://explain.depesz.com/s/SYW)\n\nAggregate (cost=179199.52..179199.53 rows=1 width=10) (actual time=7520.922..7520.924 rows=1 loops=1)\n -> Merge Join (cost=9878.78..177265.66 rows=386771 width=10) (actual time=104.651..6690.194 rows=361463 loops=1)\n Merge Cond: (invoice.invoice_id = gltx.gltx_id)\n -> Index Scan using invoice_pkey on invoice (cost=0.00..86222.54 rows=623273 width=33) (actual time=0.010..1316.507 rows=623273 loops=1)\n -> Index Scan using gltx_pkey on gltx (cost=0.00..108798.53 rows=386771 width=23) (actual time=104.588..1822.886 rows=361464 loops=1)\n Filter: ((gltx.inactive_on IS NULL) AND (gltx.posted = 'Y'::bpchar) AND (gltx.customer_id = 'A0ZQ2gsACIsEKLI638ikyg'::bpchar))\nTotal runtime: 7521.026 ms\n\n\nPostgreSQL: 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50), 64-bit - self compiled\nLinux: Linux server.domain.lan 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 13:35:45 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux\n\nHardware: single CPU: model name : Intel(R) Xeon(R) CPU E5335 @ 2.00GHz\nRAM: 8GB\nDB Size: 5876MB\nHDs: Raid 1 Sata drives - dell PowerEdge 1900 - lower middle class server\n\n\nPostgres config:\nmax_connections = 200 #it's a bit high I know, but most connections are idle\nshared_buffers = 2048MB #\nwork_mem = 8MB # tried up to 32MB, but no diff\nmaintenance_work_mem = 16MB #\nbgwriter_delay = 2000ms #\ncheckpoint_segments = 15 #\ncheckpoint_completion_target = 0.8 #\nseq_page_cost = 5.0 #\nrandom_page_cost = 2.5 #\neffective_cache_size = 2048MB\t\t# just upgraded to 2GB. had another aggressive memory using program before, so did not want to have this high\nlog_destination = 'stderr' #\nlogging_collector = off #\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' #\nlog_rotation_age = 1d #\nlog_min_duration_statement = 10000 #\nlog_line_prefix='%t:%r:%u@%d:[%p]: ' #\ntrack_activities = on\ntrack_counts = on\ntrack_activity_query_size = 1024 #\nautovacuum = on \t#\nautovacuum_max_workers = 5 #\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' #\nlc_monetary = 'en_US.UTF-8' #\nlc_numeric = 'en_US.UTF-8' #\nlc_time = 'en_US.UTF-8' #\ndefault_text_search_config = 'pg_catalog.english'\n\n\n\n",
"msg_date": "Fri, 23 Sep 2011 11:49:45 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow query on tables with new columns added."
},
{
"msg_contents": "2011/9/23 M. D. <[email protected]>\n\n>\n> I did a software upgrade, and with it came a new feature where when\n> selecting a customer it queries for the sum of a few columns. This takes 7\n> seconds for the 'Cash Sale' customer - by far the most active customer. I'd\n> like to see if it's possible to get it down a bit by changing settings.\n>\n>\nTo make things clear before we search for a solution. You wrote \"by changing\nsettings\". Is it the only option? Can't you change the query in software?\nCan't you change database schema (add indexes etc)?\n\n\nQuery:\n> explain analyse select sum(item_points),sum(disc_**points) from invoice\n> left join gltx on invoice.invoice_id = gltx.gltx_id\n> where gltx.inactive_on is null and gltx.posted = 'Y' and gltx.customer_id =\n> 'A0ZQ2gsACIsEKLI638ikyg'\n>\n\nAside from other things, you know that LEFT join here is useless? - planner\nshould collapse it to normal join but I'd check.\n\n\nFilip\n\n2011/9/23 M. D. <[email protected]>\n\nI did a software upgrade, and with it came a new feature where when selecting a customer it queries for the sum of a few columns. This takes 7 seconds for the 'Cash Sale' customer - by far the most active customer. I'd like to see if it's possible to get it down a bit by changing settings.\nTo make things clear before we search for a solution. You wrote \"by changing settings\". Is it the only option? Can't you change the query in software? Can't you change database schema (add indexes etc)?\n\n \n\nQuery:\nexplain analyse select sum(item_points),sum(disc_points) from invoice left join gltx on invoice.invoice_id = gltx.gltx_id\nwhere gltx.inactive_on is null and gltx.posted = 'Y' and gltx.customer_id = 'A0ZQ2gsACIsEKLI638ikyg'Aside from other things, you know that LEFT join here is useless? - planner should collapse it to normal join but I'd check.\nFilip",
"msg_date": "Sat, 24 Sep 2011 08:10:13 +0200",
"msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query on tables with new columns added."
},
{
"msg_contents": "\n\n\n\n\n I have full access to the database, but no access to the application\n source code. If creating an index will help, I can do that, but\n with the columns I don't see it helping as I don't have access to\n the application source to change that. \n\n So yes, by changing settings, I would like to know if there's any\n memory settings I can change to help or create an index. There is\n an index on the customer_id column in the gltx table, so I'm not\n sure what else could be done.\n\n If there was a way to create a select trigger, I would do it and\n return 0 for both columns on that customer_id as it should always be\n 0.\n\n\n On 09/24/2011 12:10 AM, Filip Rembiałkowski wrote:\n \n2011/9/23 M. D. <[email protected]>\n\n\n I did a software upgrade, and with it came a new feature where\n when selecting a customer it queries for the sum of a few\n columns. This takes 7 seconds for the 'Cash Sale' customer -\n by far the most active customer. I'd like to see if it's\n possible to get it down a bit by changing settings.\n\n\n\n To make things clear before we search for a solution. You\n wrote \"by changing settings\". Is it the only option? Can't you\n change the query in software? Can't you change database schema\n (add indexes etc)?\n \n \n\n\n\n Query:\n explain analyse select sum(item_points),sum(disc_points) from\n invoice left join gltx on invoice.invoice_id = gltx.gltx_id\n where gltx.inactive_on is null and gltx.posted = 'Y' and\n gltx.customer_id = 'A0ZQ2gsACIsEKLI638ikyg'\n\n\n Aside from other things, you know that LEFT join here is\n useless? - planner should collapse it to normal join but I'd\n check.\n\n\n Filip\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 26 Sep 2011 14:13:10 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query on tables with new columns added."
},
{
"msg_contents": "2011/9/26 M. D. <[email protected]>\n\n> I have full access to the database, but no access to the application\n> source code. If creating an index will help, I can do that, but with the\n> columns I don't see it helping as I don't have access to the application\n> source to change that.\n>\n> So yes, by changing settings, I would like to know if there's any memory\n> settings I can change to help or create an index. There is an index on the\n> customer_id column in the gltx table, so I'm not sure what else could be\n> done.\n>\n> If there was a way to create a select trigger, I would do it and return 0\n> for both columns on that customer_id as it should always be 0.\n>\n>\n>\nHi\n\nI didn't respond earlier, because I actually don't see any easy way of\nspeeding up the query.\n\nThe memory settings seem fine for this size of data.\n\nIt does not look like you can change things by simply adding indexes. I\nmean, you can certainly add a specially crafted partial index on\ngltx.customer_id WHERE (gltx.inactive_on IS NULL) AND (gltx.posted = 'Y') -\nthis can earn you a few percent max.\n\nThe problem here might be the type of join columns - we can see they are\nabout 10 characters which is not an ideal choice (that's one of reasons why\nI'm a fan of artificial integer pkeys).\n\nYou _could_ try running the query with enable_mergejoin = off and see what\nhappens.\n\nYou can check if the problem persists after dumping and reloading to another\ndb.\n\nIf app modification was possible, you could materialize the data _before_ it\nmust be queried - using summary table and appropriate triggers for keeping\nit up-to-date.\n\nRegarding your last comment - on that customer_id values should be 0 - if\nit's a persistent business rule, I would try to create a CHECK to reflect\nit. With some luck and fiddling, constraint_exclusion might come to help\nwith speeding up your query.\n\nAlso, if there is something special about customer_id distribution - table\npartitioning might be an option.\n\nOk, that's a long list - hope this helps, and good luck.\n\nAfter all you can throw more hardware at the problem - or hire some Pg\nmagician :-)\n\n2011/9/26 M. D. <[email protected]>\n\n I have full access to the database, but no access to the application\n source code. If creating an index will help, I can do that, but\n with the columns I don't see it helping as I don't have access to\n the application source to change that. \n\n So yes, by changing settings, I would like to know if there's any\n memory settings I can change to help or create an index. There is\n an index on the customer_id column in the gltx table, so I'm not\n sure what else could be done.\n\n If there was a way to create a select trigger, I would do it and\n return 0 for both columns on that customer_id as it should always be\n 0.HiI didn't respond earlier, because I actually don't see any easy way of speeding up the query.The memory settings seem fine for this size of data.\nIt does not look like you can change things by simply adding indexes. I mean, you can certainly add a specially crafted partial index on gltx.customer_id WHERE (gltx.inactive_on IS NULL) AND (gltx.posted = 'Y') - this can earn you a few percent max.\nThe problem here might be the type of join columns - we can see\n they are about 10 characters which is not an ideal choice (that's one of \nreasons why I'm a fan of artificial integer pkeys). 
\n\nYou _could_ try running the query with enable_mergejoin = off and see what happens.\nYou can check if the problem persists after dumping and reloading to another db.If app modification was possible, you could materialize the data _before_ it must be queried - using summary table and appropriate triggers for keeping it up-to-date. \nRegarding your last comment - on that customer_id values should be 0 - if it's a persistent business rule, I would try to create a CHECK to reflect it. With some luck and fiddling, constraint_exclusion might come to help with speeding up your query.\nAlso, if there is something special about customer_id distribution - table partitioning might be an option.Ok, that's a long list - hope this helps, and good luck.After all you can throw more hardware at the problem - or hire some Pg magician :-)",
"msg_date": "Tue, 27 Sep 2011 01:45:59 +0200",
"msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query on tables with new columns added."
}
] |
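A rough sketch of two of the ideas Filip raises above - the partial index matching the query's WHERE clause, and materializing the sums ahead of time (illustrative only; the summary table and its name are made up, and the triggers that would keep it current are not shown):

-- partial index covering exactly the rows the query touches
CREATE INDEX gltx_customer_posted_partial_idx
    ON gltx (customer_id)
    WHERE inactive_on IS NULL AND posted = 'Y';

-- pre-aggregated totals, so the application's SELECT becomes a single-row lookup
CREATE TABLE customer_point_totals (
    customer_id character(22) PRIMARY KEY REFERENCES customer (customer_id),
    item_points numeric(14,0) NOT NULL DEFAULT 0,
    disc_points numeric(14,0) NOT NULL DEFAULT 0
);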
[
{
"msg_contents": "Hello,\n\nIt is interesting how PostgreSQL reads the tablefiie.\nWhether its indexes store/use filesystem clusters locations containing\nrequired data (so it can issue a low level cluster read) or it just\nfseeks inside a file?\n\nThank you\n",
"msg_date": "Fri, 23 Sep 2011 23:03:57 +0300",
"msg_from": "Antonio Rodriges <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PERFORMANCE] Insights: fseek OR read_cluster?"
},
{
"msg_contents": "On 24/09/2011 2:49 PM, Antonio Rodriges wrote:\n> Hello,\n>\n> It is interesting how PostgreSQL reads the tablefiie.\n> Whether its indexes store/use filesystem clusters locations containing\n> required data (so it can issue a low level cluster read) or it just\n> fseeks inside a file?\n\nWhat is read_cluster() ? Are you talking about some kind of async \nand/or direct I/O? If so, PostgreSQL is not designed for direct I/O, it \nbenefits from using the OS's buffer cache, I/O scheduler, etc.\n\nIIRC Pg uses pread() to read from its data files, but I didn't go double \ncheck in the sources to make sure.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 26 Sep 2011 15:59:39 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] Insights: fseek OR read_cluster?"
},
{
"msg_contents": "Thank you, Craig, your answers are always insightful\n\n> What is read_cluster() ? Are you talking about some kind of async and/or\n\nI meant that if you want to read a chunk of data from file you (1)\nmight not call traditional fseek but rather memorize hard drive\ncluster numbers to boost disk seeks and, (2) perform the read of disk\ncluster directly.\n\n> direct I/O? If so, PostgreSQL is not designed for direct I/O, it benefits\n> from using the OS's buffer cache, I/O scheduler, etc.\n>\n> IIRC Pg uses pread() to read from its data files, but I didn't go double\n> check in the sources to make sure.\n>\n> --\n> Craig Ringer\n>\n",
"msg_date": "Mon, 26 Sep 2011 16:51:12 +0400",
"msg_from": "Antonio Rodriges <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] Insights: fseek OR read_cluster?"
},
{
"msg_contents": "On Mon, Sep 26, 2011 at 15:51, Antonio Rodriges <[email protected]> wrote:\n>> What is read_cluster() ? Are you talking about some kind of async and/or\n>\n> I meant that if you want to read a chunk of data from file you (1)\n> might not call traditional fseek but rather memorize hard drive\n> cluster numbers to boost disk seeks and, (2) perform the read of disk\n> cluster directly.\n\nPostgreSQL accesses regular files on a file system via lseek(), read()\nand write() calls, no magic.\n\nIn modern extent-based file systems, mapping a file offset to a\nphysical disk sector is very fast -- compared to the time of actually\naccessing the disk.\n\nI can't see how direct cluster access would even work, unless you'd\ngive the database direct access to a raw partition, in which case\nPostgres would effectively have to implement its own file system. The\ngains are simply not worth it for Postgres, our developer resources\nare better spent elsewhere.\n\nRegards,\nMarti\n",
"msg_date": "Mon, 26 Sep 2011 22:06:51 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] Insights: fseek OR read_cluster?"
},
{
"msg_contents": "Thank you, Marti,\n\nIs there any comprehensive survey of (at least most, if not all)\nmodern features of operating systems, for example I/O scheduling,\nextent-based filesytems, etc.?\n",
"msg_date": "Tue, 27 Sep 2011 19:12:36 +0300",
"msg_from": "Antonio Rodriges <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] Insights: fseek OR read_cluster?"
}
] |
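A small illustration of the file-plus-offset model Marti describes (a sketch; the table name here is arbitrary). Each relation's heap is an ordinary file under the data directory, and rows are addressed by (block, line pointer) pairs inside that file, which is also what index entries point at:

-- path of the relation's data file, relative to the cluster's data directory
SELECT pg_relation_filepath('mytable');

-- ctid is the (block number, line pointer) of each row within that file;
-- block number times the 8kB default block size gives the byte offset that
-- the lseek()/read() calls mentioned above operate on
SELECT ctid, * FROM mytable LIMIT 5;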
[
{
"msg_contents": "Hi All,\n\nWe are currently using PostgreSQL 9.0.3 and we noticed a performance anomaly\nfrom a framework (ActiveRecord) generated query to one of our tables. The\nquery uses an in clause to check an indexed column for the presence of\neither of two values. In this particular case neither of them is present\n(but in other cases one or more might be). The framework generates a limit\n1 query to test for existence. This query ends up using a seq scan and is\nquite slow, however rewriting it using OR = rather then IN uses the index\n(as does removing the limit or raising it to a large value). The table has\n36 million rows (more details are below) and is read only in typical usage.\nI was wondering if IN vs OR planning being so differently represented a bug\nand/or if we might have some misconfiguration somewhere that leads the query\nplanner to pick what in best case can only be a slightly faster plan then\nusing the index but in worst case is much much slower. I would also think\nthe cluster on the table would argue against using a sequence scan for this\nkind of query (since the hts_code_id's will be colocated, perf, if the id is\npresent, will very greatly depending on what order the seq scan walks the\ntable which we've observed...; if the id(s) are not present then this plan\nis always terrible). We can use set enable_seqscan TO off around this query\nif need be, but it seems like something the planner should have done better\nwith unless we have something weird somewhere (conf file details are below).\n\npsql (9.0.3)\nType \"help\" for help.\n\n-- Table info\ndev=> ANALYZE exp_detls;\nANALYZE\ndev=> select count(*) from exp_detls;\n 36034391\ndev=>explain analyze select count(*) from exp_detls;\n Aggregate (cost=1336141.30..1336141.31 rows=1 width=0) (actual\ntime=43067.620..43067.621 rows=1 loops=1)\n -> Seq Scan on exp_detls (cost=0.00..1246046.84 rows=36037784 width=0)\n(actual time=0.011..23703.177 rows=36034391 loops=1)\n Total runtime: 43067.675 ms\ndev=>select pg_size_pretty(pg_table_size('exp_detls'));\n 6919 MB\n\n\n-- Problematic Query\ndev=> explain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n(\"exp_detls\".\"hts_code_id\" IN (12,654)) LIMIT 1;\n Limit (cost=0.00..158.18 rows=1 width=4) (actual time=9661.363..9661.363\nrows=0 loops=1)\n -> Seq Scan on exp_detls (cost=0.00..1336181.90 rows=8447 width=4)\n(actual time=9661.360..9661.360 rows=0 loops=1)\n Filter: (hts_code_id = ANY ('{12,654}'::integer[]))\n Total runtime: 9661.398 ms\n(4 rows)\n\n\n-- Using OR =, much faster, though more complicated plan then below\ndev=> explain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n(\"exp_detls\".\"hts_code_id\" = 12 OR \"exp_detls\".\"hts_code_id\" = 654) LIMIT 1;\n Limit (cost=162.59..166.29 rows=1 width=4) (actual time=0.029..0.029\nrows=0 loops=1)\n -> Bitmap Heap Scan on exp_detls (cost=162.59..31188.14 rows=8370\nwidth=4) (actual time=0.028..0.028 rows=0 loops=1)\n Recheck Cond: ((hts_code_id = 12) OR (hts_code_id = 654))\n -> BitmapOr (cost=162.59..162.59 rows=8370 width=0) (actual\ntime=0.027..0.027 rows=0 loops=1)\n -> Bitmap Index Scan on\nindex_exp_detls_on_hts_code_id_and_data_month (cost=0.00..79.20 rows=4185\nwidth=0) (actual time=0.017..0.017 rows=0 loops=1)\n Index Cond: (hts_code_id = 12)\n -> Bitmap Index Scan on\nindex_exp_detls_on_hts_code_id_and_data_month (cost=0.00..79.20 rows=4185\nwidth=0) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (hts_code_id = 654)\n Total runtime: 0.051 ms\n(9 rows)\n\n\n-- No limit, 
much faster, also a cleaner looking plan (of course problematic\nwhen there are many matching rows)\ndev=>explain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n(\"exp_detls\".\"hts_code_id\" IN (12,654));\n Bitmap Heap Scan on exp_detls (cost=156.93..31161.56 rows=8370 width=4)\n(actual time=0.028..0.028 rows=0 loops=1)\n Recheck Cond: (hts_code_id = ANY ('{12,654}'::integer[]))\n -> Bitmap Index Scan on index_exp_detls_on_hts_code_id_and_data_month\n(cost=0.00..154.84 rows=8370 width=0) (actual time=0.026..0.026 rows=0\nloops=1)\n Index Cond: (hts_code_id = ANY ('{12,654}'::integer[]))\n Total runtime: 0.045 ms\n(5 rows)\n\n\n-- Table Schema\n\n Table \"public.exp_detls\"\n Column | Type |\nModifiers\n------------------+-----------------------------+--------------------------------------------------------\n id | integer | not null default\nnextval('exp_detls_id_seq'::regclass)\n created_at | timestamp without time zone | not null\n df | integer |\n hts_code_id | integer | not null\n uscb_country_id | integer |\n country_id | integer |\n uscb_district_id | integer |\n cards_mo | numeric(15,0) | not null\n qty_1_mo | numeric(15,0) | not null\n qty_2_mo | numeric(15,0) |\n all_val_mo | numeric(15,0) | not null\n air_val_mo | numeric(15,0) | not null\n air_wgt_mo | numeric(15,0) | not null\n ves_val_mo | numeric(15,0) | not null\n ves_wgt_mo | numeric(15,0) | not null\n cnt_val_mo | numeric(15,0) | not null\n cnt_wgt_mo | numeric(15,0) | not null\n cards_yr | numeric(15,0) | not null\n qty_1_yr | numeric(15,0) | not null\n qty_2_yr | numeric(15,0) |\n all_val_yr | numeric(15,0) | not null\n air_val_yr | numeric(15,0) | not null\n air_wgt_yr | numeric(15,0) | not null\n ves_val_yr | numeric(15,0) | not null\n ves_wgt_yr | numeric(15,0) | not null\n cnt_val_yr | numeric(15,0) | not null\n cnt_wgt_yr | numeric(15,0) | not null\n data_month | date | not null\n parent_id | integer |\nIndexes:\n \"exp_detls_pkey\" PRIMARY KEY, btree (id)\n \"index_exp_detls_on_data_month\" btree (data_month) WITH (fillfactor=100)\n \"index_exp_detls_on_hts_code_id_and_data_month\" btree (hts_code_id,\ndata_month) WITH (fillfactor=100) CLUSTER\n \"index_exp_detls_on_parent_id\" btree (parent_id) WITH (fillfactor=100)\nWHERE parent_id IS NOT NULL\n<Several FK's>\n\n\npostgresql.conf non-default settings\nlisten_addresses = '*' # what IP address(es) to listen on;\nport = 5432 # (change requires restart)\nmax_connections = 230 # (change requires restart)\ntcp_keepalives_idle = 180 # TCP_KEEPIDLE, in seconds;\nshared_buffers = 4GB # min 128kB, DEFAULT 32MB\nwork_mem = 512MB # min 64kB, DEFAULT 1MB\nmaintenance_work_mem = 256MB # min 1MB, DEFAULT 16MB\neffective_io_concurrency = 2 # 1-1000. 
0 disables prefetching\nsynchronous_commit = off # immediate fsync at commit, DEFAULT on\nwal_buffers = 16MB # min 32kB, DEFAULT 64kB\nwal_writer_delay = 330ms # 1-10000 milliseconds, DEFAULT 200ms\ncheckpoint_segments = 24 # in logfile segments, min 1, 16MB each\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\neffective_cache_size = 24GB # DEFAULT 128MB\nlogging_collector = on # Enable capturing of stderr and csvlog\nlog_checkpoints = on # DEFAULT off\nlog_connections = on # DEFAULT off\nlog_disconnections = on # DEFAULT off\nlog_hostname = on # DEFAULT off\nlog_line_prefix = '%t' # special values:\ntrack_activity_query_size = 8192 # (change requires restart)\nbytea_output = 'escape' # hex, escape, Default hex\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n",
"msg_date": "Fri, 23 Sep 2011 18:37:22 -0400",
"msg_from": "Timothy Garnett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Anomaly with \"col in (A,\n\tB)\" vs. \"col = A OR col = B\" ver. 9.0.3"
},
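The "set enable_seqscan TO off around this query" workaround mentioned above would look roughly like this (illustrative sketch; SET LOCAL keeps the override scoped to the transaction):

BEGIN;
SET LOCAL enable_seqscan = off;
SELECT exp_detls.id
FROM exp_detls
WHERE exp_detls.hts_code_id IN (12, 654)
LIMIT 1;
COMMIT;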
{
"msg_contents": "Timothy Garnett <[email protected]> writes:\n> -- Problematic Query\n> dev=> explain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n> (\"exp_detls\".\"hts_code_id\" IN (12,654)) LIMIT 1;\n> Limit (cost=0.00..158.18 rows=1 width=4) (actual time=9661.363..9661.363\n> rows=0 loops=1)\n> -> Seq Scan on exp_detls (cost=0.00..1336181.90 rows=8447 width=4)\n> (actual time=9661.360..9661.360 rows=0 loops=1)\n> Filter: (hts_code_id = ANY ('{12,654}'::integer[]))\n> Total runtime: 9661.398 ms\n> (4 rows)\n\n> -- Using OR =, much faster, though more complicated plan then below\n> dev=> explain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n> (\"exp_detls\".\"hts_code_id\" = 12 OR \"exp_detls\".\"hts_code_id\" = 654) LIMIT 1;\n> Limit (cost=162.59..166.29 rows=1 width=4) (actual time=0.029..0.029\n> rows=0 loops=1)\n> -> Bitmap Heap Scan on exp_detls (cost=162.59..31188.14 rows=8370\n> width=4) (actual time=0.028..0.028 rows=0 loops=1)\n> Recheck Cond: ((hts_code_id = 12) OR (hts_code_id = 654))\n> -> BitmapOr (cost=162.59..162.59 rows=8370 width=0) (actual\n> time=0.027..0.027 rows=0 loops=1)\n> -> Bitmap Index Scan on\n> index_exp_detls_on_hts_code_id_and_data_month (cost=0.00..79.20 rows=4185\n> width=0) (actual time=0.017..0.017 rows=0 loops=1)\n> Index Cond: (hts_code_id = 12)\n> -> Bitmap Index Scan on\n> index_exp_detls_on_hts_code_id_and_data_month (cost=0.00..79.20 rows=4185\n> width=0) (actual time=0.007..0.007 rows=0 loops=1)\n> Index Cond: (hts_code_id = 654)\n> Total runtime: 0.051 ms\n> (9 rows)\n\nWell, the reason it likes the first plan is that that has a smaller\nestimated cost ;-). Basically this is a startup-time-versus-total-time\nissue: the cost of the seqscan+limit is estimated to be about 1/8447'th\nof the time to read the whole table, since it's estimating 8447\ncandidate matches and assuming that those are uniformly distributed in\nthe table. Meanwhile, the bitmap scan has a significant startup cost\nbecause the entire indexscan is completed before we start to do any\nfetching from the heap. The overestimate of the number of matching\nrows contributes directly to overestimating the cost of the indexscan,\ntoo. It ends up being a near thing --- 158 vs 166 cost units --- but\non the basis of these estimates the planner did the right thing.\n\nSo, what you need to do to make this better is to get it to have a\nbetter idea of how many rows match the query condition; the overestimate\nis both making the expensive plan look cheap, and making the cheaper\nplan look expensive. Cranking up the statistics target for the\nhts_code_id column (and re-ANALYZEing) ought to fix it. If all your\ntables are this large you might want to just increase\ndefault_statistics_target across the board.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Sep 2011 13:07:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Anomaly with \"col in (A,\n\tB)\" vs. \"col = A OR col = B\" ver. 9.0.3"
},
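Tom's suggestion of raising the statistics target for just the one column (rather than default_statistics_target globally) can be expressed like this (illustrative sketch; the target value is arbitrary, up to the 10000 maximum):

ALTER TABLE exp_detls ALTER COLUMN hts_code_id SET STATISTICS 1000;
ANALYZE exp_detls;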
{
"msg_contents": "On 9/26/11 10:07 AM, Tom Lane wrote:\n> Cranking up the statistics target for the hts_code_id column (and re-ANALYZEing) ought to fix it. If all your tables are this large you might want to just increase default_statistics_target across the board. regards, tom lane \nThis is common advice in this forum .... but what's the down side to increasing statistics? With so many questions coming to this forum that are due to insufficient statistics, why not just increase the default_statistics_target? I assume there is a down side, but I've never seen it discussed. Does it increase planning time? Analyze time? Take lots of space?\n\nThanks,\nCraig\n",
"msg_date": "Mon, 26 Sep 2011 10:18:49 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Anomaly with \"col in (A,B)\" vs. \"col =\n\tA OR col = B\" ver. 9.0.3"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> On 9/26/11 10:07 AM, Tom Lane wrote:\n>> Cranking up the statistics target for the hts_code_id column (and re-ANALYZEing) ought to fix it. If all your tables are this large you might want to just increase default_statistics_target across the board. regards, tom lane \n\n> This is common advice in this forum .... but what's the down side to increasing statistics? With so many questions coming to this forum that are due to insufficient statistics, why not just increase the default_statistics_target? I assume there is a down side, but I've never seen it discussed. Does it increase planning time? Analyze time? Take lots of space?\n\nYes, yes, and yes. We already did crank up the default\ndefault_statistics_target once (in 8.4), so I'm hesitant to do it again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Sep 2011 13:35:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Anomaly with \"col in (A,\n\tB)\" vs. \"col = A OR col = B\" ver. 9.0.3"
},
{
"msg_contents": "> Well, the reason it likes the first plan is that that has a smaller\n> estimated cost ;-). Basically this is a startup-time-versus-total-time\n> issue: the cost of the seqscan+limit is estimated to be about 1/8447'th\n> of the time to read the whole table, since it's estimating 8447\n> candidate matches and assuming that those are uniformly distributed in\n> the table. Meanwhile, the bitmap scan has a significant startup cost\n> because the entire indexscan is completed before we start to do any\n> fetching from the heap. The overestimate of the number of matching\n> rows contributes directly to overestimating the cost of the indexscan,\n> too. It ends up being a near thing --- 158 vs 166 cost units --- but\n> on the basis of these estimates the planner did the right thing.\n>\n> So, what you need to do to make this better is to get it to have a\n> better idea of how many rows match the query condition; the overestimate\n> is both making the expensive plan look cheap, and making the cheaper\n> plan look expensive. Cranking up the statistics target for the\n> hts_code_id column (and re-ANALYZEing) ought to fix it. If all your\n> tables are this large you might want to just increase\n> default_statistics_target across the board.\n>\n> regards, tom lane\n>\n\n\nThanks for the great description of what's happening. Very informative.\nUpping the stats to the max 10000 (from the default 100) makes my example\nquery use a faster plan, but not other problematic queries we have in the\nsame vein (just add a few more values to the in clause). For ex. (this is\nwith stats set to 10000 and re-analyzed).\n\n=>explain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n(\"exp_detls\".\"hts_code_id\" IN\n(8127,8377,8374,10744,11125,8375,8344,9127,9345)) LIMIT 1;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..152.94 rows=1 width=4) (actual time=12057.637..12057.637\nrows=0 loops=1)\n -> Seq Scan on exp_detls (cost=0.00..1651399.83 rows=10798 width=4)\n(actual time=12057.635..12057.635 rows=0 loops=1)\n Filter: (hts_code_id = ANY\n('{8127,8377,8374,10744,11125,8375,8344,9127,9345}'::integer[]))\n Total runtime: 12057.678 ms\n\n From your description I think the planner is making two problematic\nassumptions that are leading to our issue:\n\nFirst is that the potential matches are uniformly distributed across the\nphysical table. While there are a number of reasons this may not be the\ncase (correlated insertion order or update patterns etc.), in this case\nthere's a very clear reason which is that the table is clustered on an index\nthat leads with the column we're querying against ('hts_code_id') and\nnothing has been inserted or updated in the table since the last time we ran\ncluster on it (see the schema in the original e-mail).\n\nSecond is that is that it's assuming that the IN clause values are actually\npresent in the table (and at some reasonably large frequency), in the worst\ncases (like above) they aren't present at all, forcing a full table scan. I\ndon't know how the expected freq of values not present in the frequent\nvalues are estimated, but I'm guessing something based on the residual\nprobability in the stats table and the est. number of distinct values? 
If\nso that assumes that the supplied values actually are in the set of distinct\nvalues which seems unjustified.\n\nPerhaps there should be some estimation on whether the supplied value is one\nof the distinct values or not (could probably do something pretty\nstatistically solid with a not too large bloom filter), or scaling by the\nwidth of the histogram bucket it occurs in (i.e. assuming an even\ndistribution across the bucket). Though maybe in a lot of common use\nsituations people only supply values that are known present so maybe this\nwould make things worse more often then better (maybe limit 1 or better\nEXISTS would be a hint the value is not known present). Experimenting a bit\nit doesn't seem to matter which values are selected so it's not taking into\naccount any kind of distribution over the histogram boundaries. For ex this\nquery:\n\nexplain analyze SELECT \"exp_detls\".id FROM \"exp_detls\" WHERE\n(\"exp_detls\".\"hts_code_id\" IN\n(20000,20001,200002,20003,20004,20005,20006,20007,20008)) LIMIT 1;\n                              QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..152.94 rows=1 width=4) (actual time=12925.646..12925.646\nrows=0 loops=1)\n   ->  Seq Scan on exp_detls  (cost=0.00..1651399.83 rows=10798 width=4)\n(actual time=12925.644..12925.644 rows=0 loops=1)\n         Filter: (hts_code_id = ANY\n('{20000,20001,200002,20003,20004,20005,20006,20007,20008}'::integer[]))\n Total runtime: 12925.677 ms\n\nHas the same expected row count as the earlier query with the same number of\nnot present IN values, but looking at the histogram boundaries these values\nare much, much less likely to find rows (assuming an even distribution\nacross the bucket) then the values in the earlier query (histogram\nboundaries are something like 1,5,10,15,...11995,12000,80000,80005...) I can\nsend the actual stats if interested (it's large though). Feels like this\nshould factor into the estimates somehow.\n\nAlso, we occasionally run into surprises like this esp. as tables grow (fast\nquery plans hit some planning tipping point and turn into very slow plans,\nor some new combination of values is requested) etc. Would be nice if there\nwas some straightforward way to tweak the planner towards less risky\nqueries, not necessarily worst case planning but maybe pick the plan based\non expected cost + n% * 'worst case' cost for some reasonable estimation of\nworst case or the like (or using some probability distribution around the\nexpected cost and picking the nth percentile time for ranking). Just\nthinking out loud.\n\nBtw, thanks for all your work on what is a really great and useful tool!\n\nTim\n",
"msg_date": "Mon, 26 Sep 2011 17:11:33 -0400",
"msg_from": "Timothy Garnett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Anomaly with \"col in (A,B)\" vs. \"col = A OR\n\tcol = B\" ver. 9.0.3"
},
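The statistics-target change discussed above can also be applied to just the one column rather than database-wide; a minimal sketch, using the table and column named in this thread (the target of 1000 is only an illustrative value, 9.0 accepts anything up to 10000):

ALTER TABLE exp_detls ALTER COLUMN hts_code_id SET STATISTICS 1000;
ANALYZE exp_detls;
-- or raise default_statistics_target in postgresql.conf (or per session
-- before a manual ANALYZE) to change the sample size across the board.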
{
"msg_contents": "On 27/09/2011 1:35 AM, Tom Lane wrote:\n> Craig James<[email protected]> writes:\n>> On 9/26/11 10:07 AM, Tom Lane wrote:\n>>> Cranking up the statistics target for the hts_code_id column (and re-ANALYZEing) ought to fix it. If all your tables are this large you might want to just increase default_statistics_target across the board. regards, tom lane\n>> This is common advice in this forum .... but what's the down side to increasing statistics? With so many questions coming to this forum that are due to insufficient statistics, why not just increase the default_statistics_target? I assume there is a down side, but I've never seen it discussed. Does it increase planning time? Analyze time? Take lots of space?\n> Yes, yes, and yes. We already did crank up the default\n> default_statistics_target once (in 8.4), so I'm hesitant to do it again.\n\nThis has me wondering about putting together a maintenance/analysis tool \nthat generates and captures stats from several ANALYZE runs and compares \nthem to see if they're reasonably consistent. It then re-runs with \nhigher targets as a one-off, again to see if the stats agree, before \nrestoring the targets to defaults. The tool could crunch comparisons of \nthe resulting stats and warn about tables or columns where the default \nstats targets aren't sufficient.\n\nIn the long run this might even be something it'd be good to have Pg do \nautomatically behind the scenes (like autovacuum) - auto-raise stats \ntargets where repeat samplings are inconsistent.\n\nThoughts? Is this reasonable to explore, or a totally bogus idea? I'll \nsee if I can have a play if there's any point to trying it out.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 27 Sep 2011 08:09:37 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Anomaly with \"col in (A,B)\" vs. \"col =\n\tA OR col = B\" ver. 9.0.3"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> This has me wondering about putting together a maintenance/analysis tool \n> that generates and captures stats from several ANALYZE runs and compares \n> them to see if they're reasonably consistent. It then re-runs with \n> higher targets as a one-off, again to see if the stats agree, before \n> restoring the targets to defaults. The tool could crunch comparisons of \n> the resulting stats and warn about tables or columns where the default \n> stats targets aren't sufficient.\n\nIt would certainly be useful to have such a tool, but I suspect it's\neasier said than done. The problem is to know whether the queries on\nthat table are particularly sensitive to having better stats. I think\nwe've largely solved issues having to do with the quality of the\nhistogram (eg, what fraction of the table has values falling into some\nrange), and the remaining trouble spots have to do with predicting the\nfrequency of specific values that are too infrequent to have made it\ninto the most-common-values list. Enlarging the MCV list helps not so\nmuch by getting these long-tail values into the MCV list --- they\nprobably still aren't there --- as by allowing us to tighten the upper\nbound on what the frequency of an unrepresented value must be. So what\nyou have to ask is how many queries care about that. Craig James'\nexample query in this thread is sort of a worst case, because the values\nit's searching for are in fact not in the table at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Sep 2011 20:39:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Anomaly with \"col in (A,\n\tB)\" vs. \"col = A OR col = B\" ver. 9.0.3"
},
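To see what the planner actually has for this column, the most-common-values list and histogram Tom describes can be read straight out of pg_stats; a small sketch against the table and column from this thread:

SELECT n_distinct, most_common_vals, most_common_freqs, histogram_bounds
FROM pg_stats
WHERE tablename = 'exp_detls'
  AND attname = 'hts_code_id';
-- Values that never made it into most_common_vals are assumed to be no more
-- frequent than the least common entry there, which is the upper bound Tom
-- refers to; a longer MCV list tightens that bound.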
{
"msg_contents": "On Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett <[email protected]>wrote:\n\n>\n> Though maybe in a lot of common use situations people only supply values\n> that are known present so maybe this would make things worse more often then\n> better (maybe limit 1 or better EXISTS would be a hint the value is not\n> known present). Experimenting a bit it doesn't seem to matter which values\n> are selected so it's not taking into account any kind of distribution over\n> the histogram boundaries.\n\n\nIf I'm not mistaken, the problem here is actually the LIMIT 1, yes? The\nplanner is opting for the sequential scan because it assumes it will\ninterrupt the scan relatively quickly when a row is matched? So in the case\nwhere you are really looking for existence, perhaps the better solution is\nto select a count of the number of matching rows (perhaps grouped by id so\nyou know which ones match)? That would emulate the behaviour of select\nwithout a limit, which would be more likely to use the index. It all depends\non just what you are actually doing with the row you are returning, of\ncourse, and if there is some other way to get it once you know of its\nexistence.\n\nSELECT count(1), exp_detls.id FROM exp_detls WHERE (exp_detls.hts_code_id IN\n(12,654)) GROUP BY exp_detls.id\n\nmight work, depending upon how many different values of exp_detls.id you are\nlikely to see for any given set of hts_code_ids. Actually, I know little\nabout the query planner, but it seems to me that the aggregation and\ngrouping might be sufficient to force it away from preferring the sequential\nscan, even if you leave a 'limit 1' on the query, since it will have to find\nmore than 1 row in order to return a single row, since that single row\ncontains an aggregate. So if your concern is about the potential of\ntransferring millions of rows across the network, I think that might fix it,\nthough it is definitely a kludge. Of course, the downside is that the index\nwon't be as fast as a sequential scan in the cases where the scan does get\ninterrupted quickly, but you've clearly already considered that for your use\npatterns.\n\nOn Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett <[email protected]> wrote:\nThough maybe in a lot of common use situations people only supply values that are known present so maybe this would make things worse more often then better (maybe limit 1 or better EXISTS would be a hint the value is not known present). Experimenting a bit it doesn't seem to matter which values are selected so it's not taking into account any kind of distribution over the histogram boundaries. \nIf I'm not mistaken, the problem here is actually the LIMIT 1, yes? The planner is opting for the sequential scan because it assumes it will interrupt the scan relatively quickly when a row is matched? So in the case where you are really looking for existence, perhaps the better solution is to select a count of the number of matching rows (perhaps grouped by id so you know which ones match)? That would emulate the behaviour of select without a limit, which would be more likely to use the index. It all depends on just what you are actually doing with the row you are returning, of course, and if there is some other way to get it once you know of its existence.\nSELECT count(1), exp_detls.id FROM exp_detls WHERE (exp_detls.hts_code_id IN (12,654)) GROUP BY exp_detls.id\nmight work, depending upon how many different values of exp_detls.id you are likely to see for any given set of hts_code_ids. 
Actually, I know little about the query planner, but it seems to me that the aggregation and grouping might be sufficient to force it away from preferring the sequential scan, even if you leave a 'limit 1' on the query, since it will have to find more than 1 row in order to return a single row, since that single row contains an aggregate. So if your concern is about the potential of transferring millions of rows across the network, I think that might fix it, though it is definitely a kludge. Of course, the downside is that the index won't be as fast as a sequential scan in the cases where the scan does get interrupted quickly, but you've clearly already considered that for your use patterns.",
"msg_date": "Mon, 26 Sep 2011 22:06:26 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Anomaly with \"col in (A,B)\" vs. \"col = A OR\n\tcol = B\" ver. 9.0.3"
},
{
"msg_contents": "Hi Sam,\n\nThe purpose of this (framework generated) code is to find out if there is at\nleast one row that has one of the selected hts_code_ids. We don't care\nabout anything that's returned other then whether at least one row exists or\nnot (rewriting the query with EXISTS generates that same plan). The actual\nselectivity can vary greatly anywhere from 0 to > 500k rows depending on the\ncodes chosen. On the higher end of the range a select count(*) takes ~14\ntimes longer then a limit 1 query that does an index scan (~475ms vs. 36ms).\n\nWhat we're doing now for the moment is putting a straight-jacket on the\nplanner\n\nbegin ; set local enable_seqscan = off ; SELECT \"exp_detls\".id FROM\n\"exp_detls\" WHERE\n(\"exp_detls\".\"hts_code_id\" IN (...)) LIMIT 1; commit ;\n\nAs even when the where clause selects a good fraction of the table seq_scan\nhas highly variable performance because of the clustered index (depends on\nwhat was last selected out of the table), so we pretty much never want to\nuse it (maybe the planner should be taking the cluster into\nconsideration?). What we'll probably move to is maintaining a table of ids\nthat are present in that column and running the query against that as the\nabove, while acceptable, can still be on the slow side when the where clause\nis not very selective and many rows are scanned in the index before the\nlimit can be applied. Would be nice if the bitmap index scan could be done\npiecemeal alternating with heap scan when a tight limit is present so not so\nmuch work has to be done, but I could see that being really problematic to\nimplement and use.\n\nTim\n\nOn Tue, Sep 27, 2011 at 1:06 AM, Samuel Gendler\n<[email protected]>wrote:\n\n>\n>\n> On Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett <[email protected]>wrote:\n>\n>>\n>> Though maybe in a lot of common use situations people only supply values\n>> that are known present so maybe this would make things worse more often then\n>> better (maybe limit 1 or better EXISTS would be a hint the value is not\n>> known present). Experimenting a bit it doesn't seem to matter which values\n>> are selected so it's not taking into account any kind of distribution over\n>> the histogram boundaries.\n>\n>\n> If I'm not mistaken, the problem here is actually the LIMIT 1, yes? The\n> planner is opting for the sequential scan because it assumes it will\n> interrupt the scan relatively quickly when a row is matched? So in the case\n> where you are really looking for existence, perhaps the better solution is\n> to select a count of the number of matching rows (perhaps grouped by id so\n> you know which ones match)? That would emulate the behaviour of select\n> without a limit, which would be more likely to use the index. It all depends\n> on just what you are actually doing with the row you are returning, of\n> course, and if there is some other way to get it once you know of its\n> existence.\n>\n> SELECT count(1), exp_detls.id FROM exp_detls WHERE (exp_detls.hts_code_id\n> IN (12,654)) GROUP BY exp_detls.id\n>\n> might work, depending upon how many different values of exp_detls.id you\n> are likely to see for any given set of hts_code_ids. Actually, I know\n> little about the query planner, but it seems to me that the aggregation and\n> grouping might be sufficient to force it away from preferring the sequential\n> scan, even if you leave a 'limit 1' on the query, since it will have to find\n> more than 1 row in order to return a single row, since that single row\n> contains an aggregate. 
So if your concern is about the potential of\n> transferring millions of rows across the network, I think that might fix it,\n> though it is definitely a kludge. Of course, the downside is that the index\n> won't be as fast as a sequential scan in the cases where the scan does get\n> interrupted quickly, but you've clearly already considered that for your use\n> patterns.\n>",
"msg_date": "Tue, 27 Sep 2011 08:40:09 -0400",
"msg_from": "Timothy Garnett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Anomaly with \"col in (A,B)\" vs. \"col = A OR\n\tcol = B\" ver. 9.0.3"
},
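One way the "table of ids that are present in that column" idea mentioned above could be implemented is with a small summary table that is refreshed out of band; a rough sketch only, the summary table name and refresh strategy are not from the thread:

CREATE TABLE exp_detls_hts_codes AS
  SELECT DISTINCT hts_code_id FROM exp_detls;
CREATE UNIQUE INDEX exp_detls_hts_codes_idx ON exp_detls_hts_codes (hts_code_id);

-- the existence check then only touches the small table:
SELECT 1 FROM exp_detls_hts_codes
WHERE hts_code_id IN (8127, 8377, 8374)
LIMIT 1;
-- the trade-off is that the summary table must be refreshed (or maintained by
-- triggers) whenever exp_detls gains or loses codes.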
{
"msg_contents": "Actually thinking about this a little more what we really want the planner\nto do is to consider the codes one at a time till it finds one that exists.\nIf we write that out explicitly we get a good plan whether the ids are\nselect many rows or none.\n\n=> explain analyze\nselect 1 from (\n select * from (select 1 from exp_detls where hts_code_id in (469169) limit\n1) a\n union all\n select * from (select 1 from exp_detls where hts_code_id in (15289) limit\n1) b\n union all\n select * from (select 1 from exp_detls where hts_code_id in (468137) limit\n1) c\n union all\n select * from (select 1 from exp_detls where hts_code_id in (14655) limit\n1) d\n union all\n select * from (select 1 from exp_detls where hts_code_id in (14670) limit\n1) e\n union all\n select * from (select 1 from exp_detls where hts_code_id in (15291) limit\n1) f\n) dummy limit 1;\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.75 rows=1 width=0) (actual time=0.031..0.032 rows=1\nloops=1)\n -> Result (cost=0.00..10.52 rows=6 width=0) (actual time=0.029..0.029\nrows=1 loops=1)\n -> Append (cost=0.00..10.52 rows=6 width=0) (actual\ntime=0.029..0.029 rows=1 loops=1)\n -> Subquery Scan on a (cost=0.00..1.72 rows=1 width=0)\n(actual time=0.027..0.027 rows=1 loops=1)\n -> Limit (cost=0.00..1.71 rows=1 width=0) (actual\ntime=0.025..0.025 rows=1 loops=1)\n -> Index Scan using\nindex_exp_detls_on_hts_code_id_and_data_month on exp_detls\n(cost=0.00..144890.56 rows=84657 width=0) (actual time=0.025..0.025 rows=1\nloops=1)\n Index Cond: (hts_code_id = 469169)\n -> Subquery Scan on b (cost=0.00..1.74 rows=1 width=0)\n(never executed)\n -> Limit (cost=0.00..1.73 rows=1 width=0) (never\nexecuted)\n -> Index Scan using\nindex_exp_detls_on_hts_code_id_and_data_month on exp_detls\n(cost=0.00..118206.85 rows=68477 width=0) (never executed)\n Index Cond: (hts_code_id = 15289)\n -> Subquery Scan on c (cost=0.00..1.74 rows=1 width=0)\n(never executed)\n -> Limit (cost=0.00..1.73 rows=1 width=0) (never\nexecuted)\n -> Index Scan using\nindex_exp_detls_on_hts_code_id_and_data_month on exp_detls\n(cost=0.00..102645.38 rows=59168 width=0) (never executed)\n Index Cond: (hts_code_id = 468137)\n -> Subquery Scan on d (cost=0.00..1.81 rows=1 width=0)\n(never executed)\n -> Limit (cost=0.00..1.80 rows=1 width=0) (never\nexecuted)\n -> Index Scan using\nindex_exp_detls_on_hts_code_id_and_data_month on exp_detls\n(cost=0.00..2155.38 rows=1200 width=0) (never executed)\n Index Cond: (hts_code_id = 14655)\n -> Subquery Scan on e (cost=0.00..1.75 rows=1 width=0)\n(never executed)\n -> Limit (cost=0.00..1.74 rows=1 width=0) (never\nexecuted)\n -> Index Scan using\nindex_exp_detls_on_hts_code_id_and_data_month on exp_detls\n(cost=0.00..90309.37 rows=51853 width=0) (never executed)\n Index Cond: (hts_code_id = 14670)\n -> Subquery Scan on f (cost=0.00..1.75 rows=1 width=0)\n(never executed)\n -> Limit (cost=0.00..1.74 rows=1 width=0) (never\nexecuted)\n -> Index Scan using\nindex_exp_detls_on_hts_code_id_and_data_month on exp_detls\n(cost=0.00..84767.69 rows=48586 width=0) (never executed)\n Index Cond: (hts_code_id = 15291)\n Total runtime: 0.089 ms\n(28 rows)\n\nThis is sub millisecond for all combinations of ids present or not that\nwe've tried, so we'll definitely go with this. 
Thanks for the help and\nexplanations!\n\nTim\n\nOn Tue, Sep 27, 2011 at 8:40 AM, Timothy Garnett <[email protected]>wrote:\n\n> Hi Sam,\n>\n> The purpose of this (framework generated) code is to find out if there is\n> at least one row that has one of the selected hts_code_ids. We don't care\n> about anything that's returned other then whether at least one row exists or\n> not (rewriting the query with EXISTS generates that same plan). The actual\n> selectivity can vary greatly anywhere from 0 to > 500k rows depending on the\n> codes chosen. On the higher end of the range a select count(*) takes ~14\n> times longer then a limit 1 query that does an index scan (~475ms vs. 36ms).\n>\n> What we're doing now for the moment is putting a straight-jacket on the\n> planner\n>\n> begin ; set local enable_seqscan = off ; SELECT \"exp_detls\".id FROM\n> \"exp_detls\" WHERE\n> (\"exp_detls\".\"hts_code_id\" IN (...)) LIMIT 1; commit ;\n>\n> As even when the where clause selects a good fraction of the table seq_scan\n> has highly variable performance because of the clustered index (depends on\n> what was last selected out of the table), so we pretty much never want to\n> use it (maybe the planner should be taking the cluster into\n> consideration?). What we'll probably move to is maintaining a table of ids\n> that are present in that column and running the query against that as the\n> above, while acceptable, can still be on the slow side when the where clause\n> is not very selective and many rows are scanned in the index before the\n> limit can be applied. Would be nice if the bitmap index scan could be done\n> piecemeal alternating with heap scan when a tight limit is present so not so\n> much work has to be done, but I could see that being really problematic to\n> implement and use.\n>\n> Tim\n>\n>\n> On Tue, Sep 27, 2011 at 1:06 AM, Samuel Gendler <[email protected]\n> > wrote:\n>\n>>\n>>\n>> On Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett <[email protected]>wrote:\n>>\n>>>\n>>> Though maybe in a lot of common use situations people only supply values\n>>> that are known present so maybe this would make things worse more often then\n>>> better (maybe limit 1 or better EXISTS would be a hint the value is not\n>>> known present). Experimenting a bit it doesn't seem to matter which values\n>>> are selected so it's not taking into account any kind of distribution over\n>>> the histogram boundaries.\n>>\n>>\n>> If I'm not mistaken, the problem here is actually the LIMIT 1, yes? The\n>> planner is opting for the sequential scan because it assumes it will\n>> interrupt the scan relatively quickly when a row is matched? So in the case\n>> where you are really looking for existence, perhaps the better solution is\n>> to select a count of the number of matching rows (perhaps grouped by id so\n>> you know which ones match)? That would emulate the behaviour of select\n>> without a limit, which would be more likely to use the index. It all depends\n>> on just what you are actually doing with the row you are returning, of\n>> course, and if there is some other way to get it once you know of its\n>> existence.\n>>\n>> SELECT count(1), exp_detls.id FROM exp_detls WHERE (exp_detls.hts_code_id\n>> IN (12,654)) GROUP BY exp_detls.id\n>>\n>> might work, depending upon how many different values of exp_detls.id you\n>> are likely to see for any given set of hts_code_ids. 
Actually, I know\n>> little about the query planner, but it seems to me that the aggregation and\n>> grouping might be sufficient to force it away from preferring the sequential\n>> scan, even if you leave a 'limit 1' on the query, since it will have to find\n>> more than 1 row in order to return a single row, since that single row\n>> contains an aggregate. So if your concern is about the potential of\n>> transferring millions of rows across the network, I think that might fix it,\n>> though it is definitely a kludge. Of course, the downside is that the index\n>> won't be as fast as a sequential scan in the cases where the scan does get\n>> interrupted quickly, but you've clearly already considered that for your use\n>> patterns.\n>>\n>\n>",
"msg_date": "Tue, 27 Sep 2011 08:57:12 -0400",
"msg_from": "Timothy Garnett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Anomaly with \"col in (A,B)\" vs. \"col = A OR\n\tcol = B\" ver. 9.0.3"
}
] |
[
{
"msg_contents": "In Mammoth Replicator (PG 8.3) I have a table described as \n\n Table \"public.tevent_cdr\"\n Column | Type | Modifiers\n----------------+--------------------------+------------------------------------------------------------\n event_id | integer | not null default \nnextval(('event_id_seq'::text)::regclass)\n timestamp | timestamp with time zone | not null\n classification | character varying | not null\n area | character varying | not null\n kind | character varying |\n device_id | integer |\n device_name | character varying |\n fleet_id | integer |\n fleet_name | character varying |\n customer_id | integer |\n customer_name | character varying |\n event | text |\nIndexes:\n \"tevent_cdr_event_id\" UNIQUE, btree (event_id)\n \"tevent_cdr_timestamp\" btree (\"timestamp\")\nCheck constraints:\n \"tevent_cdr_classification_check\" CHECK (classification::text \n= 'cdr'::text)\nInherits: tevent\n\n\nThis simple query puzzles me. Why does it need to sort the records? Don't they \ncome from the index in order?\n\n \"explain analyze select * from tevent_cdr where timestamp >= \n'2011-09-09 12:00:00.000000+0' and timestamp < '2011-09-09 \n13:00:00.000000+0' and classification = 'cdr' order by timestamp;\"\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=9270.93..9277.12 rows=2477 width=588) (actual \ntime=9.219..11.489 rows=2480 loops=1)\n Sort Key: \"timestamp\"\n Sort Method: quicksort Memory: 2564kB\n -> Bitmap Heap Scan on tevent_cdr (cost=57.93..9131.30 rows=2477 \nwidth=588) (actual time=0.440..3.923 rows=2480 loops=1)\n Recheck Cond: ((\"timestamp\" >= '2011-09-09 \n22:00:00+10'::timestamp with time zone) AND (\"timestamp\" < '2011-09-09 \n23:00:00+10'::timestamp with time zone))\n Filter: ((classification)::text = 'cdr'::text)\n -> Bitmap Index Scan on tevent_cdr_timestamp \n(cost=0.00..57.31 rows=2477 width=0) (actual time=0.404..0.404 rows=2480 \nloops=1)\n Index Cond: ((\"timestamp\" >= '2011-09-09 \n22:00:00+10'::timestamp with time zone) AND (\"timestamp\" < '2011-09-09 \n23:00:00+10'::timestamp with time zone))\n Total runtime: 13.847 ms\n(9 rows)\n-- \nAnthony Shipman | flailover systems: When one goes down it \[email protected] | flails about until the other goes down too.\n",
"msg_date": "Mon, 26 Sep 2011 16:28:15 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "overzealous sorting?"
},
{
"msg_contents": "Le Mon, 26 Sep 2011 16:28:15 +1000,\[email protected] a écrit :\n\n> In Mammoth Replicator (PG 8.3) I have a table described as \n> \n> Table \"public.tevent_cdr\"\n> Column | Type |\n> Modifiers\n> ----------------+--------------------------+------------------------------------------------------------\n> event_id | integer | not null default\n> nextval(('event_id_seq'::text)::regclass) timestamp | timestamp\n> with time zone | not null classification | character varying |\n> not null area | character varying | not null\n> kind | character varying |\n> device_id | integer |\n> device_name | character varying |\n> fleet_id | integer |\n> fleet_name | character varying |\n> customer_id | integer |\n> customer_name | character varying |\n> event | text |\n> Indexes:\n> \"tevent_cdr_event_id\" UNIQUE, btree (event_id)\n> \"tevent_cdr_timestamp\" btree (\"timestamp\")\n> Check constraints:\n> \"tevent_cdr_classification_check\" CHECK (classification::text \n> = 'cdr'::text)\n> Inherits: tevent\n> \n> \n> This simple query puzzles me. Why does it need to sort the records?\n> Don't they come from the index in order?\n> \n> \"explain analyze select * from tevent_cdr where timestamp >= \n> '2011-09-09 12:00:00.000000+0' and timestamp < '2011-09-09 \n> 13:00:00.000000+0' and classification = 'cdr' order by timestamp;\"\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=9270.93..9277.12 rows=2477 width=588) (actual \n> time=9.219..11.489 rows=2480 loops=1)\n> Sort Key: \"timestamp\"\n> Sort Method: quicksort Memory: 2564kB\n> -> Bitmap Heap Scan on tevent_cdr (cost=57.93..9131.30\n> rows=2477 width=588) (actual time=0.440..3.923 rows=2480 loops=1)\n> Recheck Cond: ((\"timestamp\" >= '2011-09-09 \n> 22:00:00+10'::timestamp with time zone) AND (\"timestamp\" <\n> '2011-09-09 23:00:00+10'::timestamp with time zone))\n> Filter: ((classification)::text = 'cdr'::text)\n> -> Bitmap Index Scan on tevent_cdr_timestamp \n> (cost=0.00..57.31 rows=2477 width=0) (actual time=0.404..0.404\n> rows=2480 loops=1)\n> Index Cond: ((\"timestamp\" >= '2011-09-09 \n> 22:00:00+10'::timestamp with time zone) AND (\"timestamp\" <\n> '2011-09-09 23:00:00+10'::timestamp with time zone))\n> Total runtime: 13.847 ms\n> (9 rows)\n\nBecause Index Scans are sorted, not Bitmap Index Scans, which builds a\nlist of pages to visit, to be then visited by the Bitmap Heap Scan step.\n\nMarc.\n",
"msg_date": "Mon, 26 Sep 2011 11:39:01 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overzealous sorting?"
},
{
"msg_contents": "On Monday 26 September 2011 19:39, Marc Cousin wrote:\n> Because Index Scans are sorted, not Bitmap Index Scans, which builds a\n> list of pages to visit, to be then visited by the Bitmap Heap Scan step.\n>\n> Marc.\n\nWhere does this bitmap index scan come from? It seems to negate the advantages \nof b-tree indexes described in the section \"Indexes and ORDER BY\" of the \nmanual. If I do \"set enable_bitmapscan = off;\" the query runs a bit faster \nalthough with a larger time range it reverts to a sequential scan.\n\n-- \nAnthony Shipman | Consider the set of blacklists that\[email protected] | do not blacklist themselves...\n",
"msg_date": "Tue, 27 Sep 2011 12:45:00 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: overzealous sorting?"
},
{
"msg_contents": "Le Tue, 27 Sep 2011 12:45:00 +1000,\[email protected] a écrit :\n\n> On Monday 26 September 2011 19:39, Marc Cousin wrote:\n> > Because Index Scans are sorted, not Bitmap Index Scans, which\n> > builds a list of pages to visit, to be then visited by the Bitmap\n> > Heap Scan step.\n> >\n> > Marc.\n> \n> Where does this bitmap index scan come from? It seems to negate the\n> advantages of b-tree indexes described in the section \"Indexes and\n> ORDER BY\" of the manual. If I do \"set enable_bitmapscan = off;\" the\n> query runs a bit faster although with a larger time range it reverts\n> to a sequential scan.\n> \n\nBitmap Index Scan is just another way to use a btree index. It is often\nused when a bigger part of a table is required, as it costs more than\nplain index scan to retrieve a few records, but less when a lot of\nrecords are needed.\n\nYour tests show that index scans are a bit faster on this query. But it\nis probably true only when most needed data is cached, which is probably\nyour case, as you are doing tests using the same query all the time.\nThe bitmap index scan is probably cheaper when data isn't in cache. You\ncould also see the bitmap index scan as safer, as it won't perform as\nbad when data is not cached (less random IO) :)\n\nThe thing is, the optimizer doesn't know if your data will be in cache\nwhen you will run your query… if you are sure most of your data is in\nthe cache most of the time, you could try to tune random_page_cost\n(lower it) to reflect that data is cached. But if the win is small on\nthis query, it may not be worth it.\n",
"msg_date": "Tue, 27 Sep 2011 10:54:35 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overzealous sorting?"
},
{
"msg_contents": "On Tuesday 27 September 2011 18:54, Marc Cousin wrote:\n> The thing is, the optimizer doesn't know if your data will be in cache\n> when you will run your query… if you are sure most of your data is in\n> the cache most of the time, you could try to tune random_page_cost\n> (lower it) to reflect that data is cached. But if the win is small on\n> this query, it may not be worth it.\n\nWhat I really want is to just read a sequence of records in timestamp order \nbetween two timestamps. The number of records to be read may be in the \nmillions totalling more than 1GB of data so I'm trying to read them a slice \nat a time but I can't get PG to do just this.\n\nIf I use offset and limit to grab a slice of the records from a large \ntimestamp range then PG will grab all of the records in the range, sort them \non disk and return just the slice I want. This is absurdly slow. \n\nThe query that I've shown is one of a sequence of queries with the timestamp \nrange progressing in steps of 1 hour through the timestamp range. All I want \nPG to do is find the range in the index, find the matching records in the \ntable and return them. All of the planner's cleverness just seems to get in \nthe way.\n\n-- \nAnthony Shipman | Consider the set of blacklists that\[email protected] | do not blacklist themselves...\n",
"msg_date": "Tue, 27 Sep 2011 19:05:09 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: overzealous sorting?"
},
{
"msg_contents": "On 27/09/11 22:05, [email protected] wrote:\n>\n> What I really want is to just read a sequence of records in timestamp order\n> between two timestamps. The number of records to be read may be in the\n> millions totalling more than 1GB of data so I'm trying to read them a slice\n> at a time but I can't get PG to do just this.\n>\n> If I use offset and limit to grab a slice of the records from a large\n> timestamp range then PG will grab all of the records in the range, sort them\n> on disk and return just the slice I want. This is absurdly slow.\n>\n> The query that I've shown is one of a sequence of queries with the timestamp\n> range progressing in steps of 1 hour through the timestamp range. All I want\n> PG to do is find the range in the index, find the matching records in the\n> table and return them. All of the planner's cleverness just seems to get in\n> the way.\n>\n\nIt is not immediately clear that the planner is making the wrong choices \nhere. Index scans are not always the best choice, it depends heavily on \nthe correlation of the column concerned to the physical order of the \ntable's heap file. I suspect the reason for the planner choosing the \nbitmap scan is that said correlation is low (consult pg_stats to see). \nNow if you think that the table's heap data is cached anyway, then this \nis not such an issue - but you have to tell the planner that by reducing \nrandom_page_cost (as advised previously). Give it a try and report back!\n\nregards\n\nMark\n",
"msg_date": "Tue, 27 Sep 2011 22:22:27 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overzealous sorting?"
},
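Both suggestions above are quick to try; a sketch, where 2.0 is only an example value for random_page_cost (the default is 4.0) and the right setting depends on how much of the table really is cached:

SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'tevent_cdr' AND attname = 'timestamp';

SET random_page_cost = 2.0;
-- then re-run the EXPLAIN ANALYZE from the first post and compare the plans.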
{
"msg_contents": "Le Tue, 27 Sep 2011 19:05:09 +1000,\[email protected] a écrit :\n\n> On Tuesday 27 September 2011 18:54, Marc Cousin wrote:\n> > The thing is, the optimizer doesn't know if your data will be in\n> > cache when you will run your query… if you are sure most of your\n> > data is in the cache most of the time, you could try to tune\n> > random_page_cost (lower it) to reflect that data is cached. But if\n> > the win is small on this query, it may not be worth it.\n> \n> What I really want is to just read a sequence of records in timestamp\n> order between two timestamps. The number of records to be read may be\n> in the millions totalling more than 1GB of data so I'm trying to read\n> them a slice at a time but I can't get PG to do just this.\n> \n> If I use offset and limit to grab a slice of the records from a large \n> timestamp range then PG will grab all of the records in the range,\n> sort them on disk and return just the slice I want. This is absurdly\n> slow. \n> \n> The query that I've shown is one of a sequence of queries with the\n> timestamp range progressing in steps of 1 hour through the timestamp\n> range. All I want PG to do is find the range in the index, find the\n> matching records in the table and return them. All of the planner's\n> cleverness just seems to get in the way.\n> \n\nMaybe you should try using a cursor, if you don't know where you'll\nstop. This associated with a very low cursor_tuple_fraction will\nprobably give you what you want (a fast start plan).\n",
"msg_date": "Tue, 27 Sep 2011 17:00:04 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overzealous sorting?"
},
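Marc's cursor suggestion could look roughly like this; a sketch with an arbitrary cursor name and fetch size, where cursor_tuple_fraction (default 0.1) nudges the planner toward fast-start plans:

BEGIN;
SET LOCAL cursor_tuple_fraction = 0.01;
DECLARE cdr_cur NO SCROLL CURSOR FOR
  SELECT * FROM tevent_cdr
  WHERE "timestamp" >= '2011-09-09 12:00:00+0'
    AND "timestamp" < '2011-09-10 12:00:00+0'
    AND classification = 'cdr'
  ORDER BY "timestamp";
FETCH 1000 FROM cdr_cur;  -- repeat until no rows come back
CLOSE cdr_cur;
COMMIT;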
{
"msg_contents": "On Tuesday 27 September 2011 19:22, Mark Kirkwood wrote:\n> > The query that I've shown is one of a sequence of queries with the\n> > timestamp range progressing in steps of 1 hour through the timestamp\n> > range. All I want PG to do is find the range in the index, find the\n> > matching records in the table and return them. All of the planner's\n> > cleverness just seems to get in the way.\n>\n> It is not immediately clear that the planner is making the wrong choices\n> here. Index scans are not always the best choice, it depends heavily on\n> the correlation of the column concerned to the physical order of the\n> table's heap file. I suspect the reason for the planner choosing the\n> bitmap scan is that said correlation is low (consult pg_stats to see).\n> Now if you think that the table's heap data is cached anyway, then this\n> is not such an issue - but you have to tell the planner that by reducing\n> random_page_cost (as advised previously). Give it a try and report back!\n>\n> regards\n>\n> Mark\n\nI don't expect that any of it is cached. It is supposed to be a once-a-day \nlinear scan of a slice of the table. The correlation on the timestamp is \nreported as 0.0348395. I can't use cluster since it would lock the table for \ntoo long.\n\nI would try a cursor but my framework for this case doesn't support cursors. \nIn a later version of the framework I've tried cursors and haven't found them \nto be faster than reading in slices, in the tests I've done.\n\nAnyway at the moment it is fast enough. \n\nThanks\n-- \nAnthony Shipman | It's caches all the way \[email protected] | down.\n",
"msg_date": "Wed, 28 Sep 2011 15:13:06 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: overzealous sorting?"
}
] |
[
{
"msg_contents": "Hi,\nI am having problem retrieving data with % in the value. I tried a lot\nbut so far i have no luck.\n\nfor example i have first name = 'abc', middle name = 'pq%', last name\n= '123'\n\nI want to list all the entries that ends with %. Because those are\nwrong entries and i want to remove % from the middle name and retain\n'pq' only.\n\nAny help is very much appreciated.\n\nThanks\nBiswa.\n",
"msg_date": "Mon, 26 Sep 2011 06:34:44 -0700 (PDT)",
"msg_from": "Biswa <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to find record having % as part of data."
},
{
"msg_contents": "I already have solution. Just have to use ESCAPE\n",
"msg_date": "Mon, 26 Sep 2011 07:19:16 -0700 (PDT)",
"msg_from": "Biswa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to find record having % as part of data."
}
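For later readers, the ESCAPE solution referred to above looks roughly like this; the table and column names are made up since the original post never gave them, and '!' is used as the escape character:

-- list the bad entries (middle name ending in a literal %):
SELECT first_name, middle_name, last_name
FROM people
WHERE middle_name LIKE '%!%' ESCAPE '!';

-- strip the trailing % so only 'pq' remains:
UPDATE people
SET middle_name = rtrim(middle_name, '%')
WHERE middle_name LIKE '%!%' ESCAPE '!';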
] |
[
{
"msg_contents": "Hi all....\n\nI would like to start a dialogue and hear general feedback about the use \nof constraint triggers in postgres (8.4.5).\n\nOur overall issue is that using general triggers is causing slow inserts \n(from locking issues) in our database. Here are some details:\n\nA little background (jboss/j2ee/hibernate/linux).....\nWe have 3 basic triggers on a particular database table - one for \ninserts, one for updates & another for deletes and they keep track of a \n\"granule count\" that is used in reporting. This field (gracount) is \nstored in another table called dataset. An example of the insert \ntrigger/function is as follows:\n\n----------------------\nCREATE TRIGGER increment_dataset_granule_count\n AFTER INSERT\n ON inventory\n FOR EACH ROW\n EXECUTE PROCEDURE increment_dataset_granule_count();\n\nCREATE OR REPLACE FUNCTION increment_dataset_granule_count()\n RETURNS trigger AS\n$BODY$\n DECLARE\n BEGIN\n IF NEW.visible_id != 5 THEN\n UPDATE dataset\n SET gracount = gracount + 1\n WHERE dataset.inv_id = NEW.inv_id;\n END IF;\n RETURN NULL;\n END;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\nALTER FUNCTION increment_dataset_granule_count() OWNER TO jboss;\n-----------------------\n\nWhat we found was that when these triggers were fired we saw locking \nissues that slowed down performance dramatically to inserts into the \ninventory table (the table where the triggers are located). You could \nsee the inserts sit and wait by looking at the pg_stat_activity table.\n\nWithin our app, the trigger was invoked within the same hibernate \ntransaction that a stateless session bean was using to persist/merge the \ngranule (inventory). Subsequently, in the same transaction, a EJB MBean \nwas merging the dataset changes creating kind of a \"dead lock\" on the \ndataset table.\n\nOur first try to solve this problem has been to convert these triggers \ninto a constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED \nflags. This, we are finding, is forcing the trigger function to run \nafter the triggering transaction is completed. We believe this will fix \nour locking problem and hopefully speed up our inserts again.\n\nAny comments or past experiences would certainly be helpful!\n\nthanks,\n\nMaria Wilson\nNASA/Langley Research Center\nHampton, Virginia 23681\n\n\n\n\n\n\n\n\n Hi all....\n\n I would like to start a dialogue and hear general feedback about the\n use of constraint triggers in postgres (8.4.5).\n\n Our overall issue is that using general triggers is causing slow\n inserts (from locking issues) in our database. Here are some\n details:\n\n A little background (jboss/j2ee/hibernate/linux).....\n We have 3 basic triggers on a particular database table - one for\n inserts, one for updates & another for deletes and they keep\n track of a \"granule count\" that is used in reporting. This field\n (gracount) is stored in another table called dataset. 
An example of\n the insert trigger/function is as follows:\n\n ----------------------\n CREATE TRIGGER increment_dataset_granule_count\n AFTER INSERT\n ON inventory\n FOR EACH ROW\n EXECUTE PROCEDURE increment_dataset_granule_count();\n\n CREATE OR REPLACE FUNCTION increment_dataset_granule_count()\n RETURNS trigger AS\n $BODY$\n DECLARE\n BEGIN\n IF NEW.visible_id != 5 THEN\n UPDATE dataset \n SET gracount = gracount + 1 \n WHERE dataset.inv_id = NEW.inv_id;\n END IF;\n RETURN NULL;\n END;\n $BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n ALTER FUNCTION increment_dataset_granule_count() OWNER TO jboss;\n -----------------------\n\n What we found was that when these triggers were fired we saw locking\n issues that slowed down performance dramatically to inserts into the\n inventory table (the table where the triggers are located). You\n could see the inserts sit and wait by looking at the\n pg_stat_activity table.\n\n Within our app, the trigger was invoked within the same hibernate\n transaction that a stateless session bean was using to persist/merge\n the granule (inventory). Subsequently, in the same transaction, a\n EJB MBean was merging the dataset changes creating kind of a \"dead\n lock\" on the dataset table. \n\n Our first try to solve this problem has been to convert these\n triggers into a constraint trigger which allows for DEFERRABLE\n INITIALLY DEFERRED flags. This, we are finding, is forcing the\n trigger function to run after the triggering transaction is\n completed. We believe this will fix our locking problem and\n hopefully speed up our inserts again.\n\n Any comments or past experiences would certainly be helpful!\n\n thanks,\n\n Maria Wilson\n NASA/Langley Research Center\n Hampton, Virginia 23681",
"msg_date": "Mon, 26 Sep 2011 12:52:39 -0400",
"msg_from": "\"Maria L. Wilson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres constraint triggers"
},
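For reference, the conversion Maria describes, rewriting the AFTER ROW trigger as a deferred constraint trigger, would look roughly like this for the insert case (a sketch based on the trigger definition in her post; the original trigger has to be dropped first). Note that a deferred trigger still runs inside the same transaction, just at COMMIT time rather than immediately after each row:

DROP TRIGGER increment_dataset_granule_count ON inventory;

CREATE CONSTRAINT TRIGGER increment_dataset_granule_count
  AFTER INSERT ON inventory
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW
  EXECUTE PROCEDURE increment_dataset_granule_count();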
{
"msg_contents": "On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:\n\n> Our first try to solve this problem has been to convert these triggers into a constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED flags. This, we are finding, is forcing the trigger function to run after the triggering transaction is completed. We believe this will fix our locking problem and hopefully speed up our inserts again.\n> \n> Any comments or past experiences would certainly be helpful!\n\nMy memory is fuzzy but as I recall, a possible downside to using deferred constraints was increased memory usage, though I cannot see how at the moment. Regardless, I think the upshot is that they aren't without their cost but as long as you aren't doing massive transactions that cost is probably one that you can afford to pay without much worry. \nOn Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:Our first try to solve this problem has been to convert these triggers into a constraint trigger which allows for DEFERRABLE INITIALLY DEFERRED flags. This, we are finding, is forcing the trigger function to run after the triggering transaction is completed. We believe this will fix our locking problem and hopefully speed up our inserts again.Any comments or past experiences would certainly be helpful!My memory is fuzzy but as I recall, a possible downside to using deferred constraints was increased memory usage, though I cannot see how at the moment. Regardless, I think the upshot is that they aren't without their cost but as long as you aren't doing massive transactions that cost is probably one that you can afford to pay without much worry.",
"msg_date": "Mon, 26 Sep 2011 22:54:10 -0600",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres constraint triggers"
},
{
"msg_contents": "On 09/27/2011 12:54 PM, Ben Chobot wrote:\n> On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:\n>\n>> Our first try to solve this problem has been to convert these triggers\n>> into a constraint trigger which allows for DEFERRABLE INITIALLY\n>> DEFERRED flags. This, we are finding, is forcing the trigger function\n>> to run after the triggering transaction is completed. We believe this\n>> will fix our locking problem and hopefully speed up our inserts again.\n>>\n>> Any comments or past experiences would certainly be helpful!\n>\n> My memory is fuzzy but as I recall, a possible downside to using\n> deferred constraints was increased memory usage\n\nThat's right. PostgreSQL doesn't currently support spilling of pending \nconstraint information to disk; it has to keep it in RAM, and with \nsufficiently huge deferred updates/inserts/deletes it's possible for the \nbackend to run out of RAM to use.\n\n> though I cannot see how at the moment.\n\nA list of which triggers to run, and on which tuples, must be maintained \nuntil those triggers are fired. That list has to be kept somewhere.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 28 Sep 2011 08:37:15 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres constraint triggers"
},
{
"msg_contents": "On Sep 27, 2011, at 6:37 PM, Craig Ringer wrote:\n\n> On 09/27/2011 12:54 PM, Ben Chobot wrote:\n>> \n>> My memory is fuzzy but as I recall, a possible downside to using\n>> deferred constraints was increased memory usage\n> \n> That's right. PostgreSQL doesn't currently support spilling of pending constraint information to disk; it has to keep it in RAM, and with sufficiently huge deferred updates/inserts/deletes it's possible for the backend to run out of RAM to use.\n> \n>> though I cannot see how at the moment.\n> \n> A list of which triggers to run, and on which tuples, must be maintained until those triggers are fired. That list has to be kept somewhere.\n\nWell when you put it like that, it's so obvious. :)",
"msg_date": "Thu, 29 Sep 2011 21:01:33 -0600",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres constraint triggers"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have a problem with autovacuum apparently not doing the job I need it to do.\n\nI have a table named datasession that is frequently inserted, updated and deleted from. Typically the table will have a few thousand rows in it. Each row typically survives a few days and is updated every 5 - 10 mins. The application receives unreliable, potentially duplicate data from its source, so this table is heavily used for synchronising application threads as well. A typical access pattern is:\n\n- tx begin\n- SELECT FOR UPDATE on a single row\n- Do some application processing (1 - 100 ms)\n- Possibly UPDATE the row\n- tx commit\n\nIn a few instances of our application we're seeing this table grow obscenely to the point where our monitoring servers get us out of bed to manually vacuum. I like sleep, so I want to fix this =D\n\nI've read some recent threads and found a discussion (below) on auto vacuum that mentions auto vacuum will be cancelled when a client requests a lock that auto vacuum is using… My questions:\n\n1) Does it look like I'm affected by the same problem as in the below discussion?\n\n2) Are there better solutions to this problem than a periodic task that vacuums/truncates-and-rebuilds the table? \n\n\nPerhaps relevant info:\n\n\n# select version();\n version \n--------------------------------------------------------------------------------------------------\n PostgreSQL 8.3.12 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)\n(1 row)\n\nAuto vacuum and vacuum parameters are set to the factory defaults.\n\nCheers,\n\n--Royce\n\n> From: Tom Lane <[email protected]>\n> Subject: Re: [GENERAL] Vacuum as \"easily obtained\" locks \n> Date: 4 August 2011 1:52:02 AM AEST\n> To: Michael Graham <[email protected]>\n> Cc: Pavan Deolasee <[email protected]>, [email protected]\n> \n>>> On Wed, 2011-08-03 at 11:40 -0400, Tom Lane wrote:\n>>> The other problem is that once autovacuum has gotten the lock, it has\n>>> to keep it for long enough to re-scan the truncatable pages (to make\n>>> sure they're still empty). And it is set up so that any access to the\n>>> table will kick autovacuum off the lock. An access pattern like that\n>>> would very likely prevent it from ever truncating, if there are a lot\n>>> of pages that need to be truncated. (There's been some discussion of\n>>> modifying this behavior, but nothing's been done about it yet.) \n\n> Michael Graham <[email protected]> writes:\n>> Ah! This looks like it is very much the issue. Since I've got around\n>> 150GB of data that should be truncatable and a select every ~2s.\n> \n>> Just to confirm would postgres write:\n> \n>> 2011-08-03 16:09:55 BST ERROR: canceling autovacuum task\n>> 2011-08-03 16:09:55 BST CONTEXT: automatic vacuum of table\n>> \"traffic.public.logdata5queue\"\n> \n>> Under those circumstances?\n> \n> Yup ...\n> \n> If you do a manual VACUUM, it won't allow itself to get kicked off the\n> lock ... but as noted upthread, that will mean your other queries get\n> blocked till it's done. Not sure there's any simple fix for this that\n> doesn't involve some downtime.\n> \n> \t\t\tregards, tom lane\n> \n> -- \n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n\n\n\nHi all,I have a problem with autovacuum apparently not doing the job I need it to do.I have a table named datasession that is frequently inserted, updated and deleted from. 
Typically the table will have a few thousand rows in it. Each row typically survives a few days and is updated every 5 - 10 mins. The application receives unreliable, potentially duplicate data from its source, so this table is heavily used for synchronising application threads as well. A typical access pattern is:- tx begin- SELECT FOR UPDATE on a single row- Do some application processing (1 - 100 ms)- Possibly UPDATE the row- tx commitIn a few instances of our application we're seeing this table grow obscenely to the point where our monitoring servers get us out of bed to manually vacuum. I like sleep, so I want to fix this =DI've read some recent threads and found a discussion (below) on auto vacuum that mentions auto vacuum will be cancelled when a client requests a lock that auto vacuum is using… My questions:1) Does it look like I'm affected by the same problem as in the below discussion?2) Are there better solutions to this problem than a periodic task that vacuums/truncates-and-rebuilds the table? Perhaps relevant info:# select version(); version -------------------------------------------------------------------------------------------------- PostgreSQL 8.3.12 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)(1 row)Auto vacuum and vacuum parameters are set to the factory defaults.Cheers,--RoyceFrom: Tom Lane <[email protected]>Subject: Re: [GENERAL] Vacuum as \"easily obtained\" locks Date: 4 August 2011 1:52:02 AM AESTTo: Michael Graham <[email protected]>Cc: Pavan Deolasee <[email protected]>, [email protected] Wed, 2011-08-03 at 11:40 -0400, Tom Lane wrote:The other problem is that once autovacuum has gotten the lock, it hasto keep it for long enough to re-scan the truncatable pages (to makesure they're still empty). And it is set up so that any access to thetable will kick autovacuum off the lock. An access pattern like thatwould very likely prevent it from ever truncating, if there are a lotof pages that need to be truncated. (There's been some discussion ofmodifying this behavior, but nothing's been done about it yet.) Michael Graham <[email protected]> writes:Ah! This looks like it is very much the issue. Since I've got around150GB of data that should be truncatable and a select every ~2s.Just to confirm would postgres write:2011-08-03 16:09:55 BST ERROR: canceling autovacuum task2011-08-03 16:09:55 BST CONTEXT: automatic vacuum of table\"traffic.public.logdata5queue\"Under those circumstances?Yup ...If you do a manual VACUUM, it won't allow itself to get kicked off thelock ... but as noted upthread, that will mean your other queries getblocked till it's done. Not sure there's any simple fix for this thatdoesn't involve some downtime. regards, tom lane-- Sent via pgsql-general mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-general",
"msg_date": "Tue, 27 Sep 2011 13:45:33 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ineffective autovacuum"
},
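A minimal sketch of how the bloat on a table like the one described above could be watched from SQL, using the cumulative statistics views that exist in 8.3; the table name datasession is taken from the report, and the size function assumes the table is visible on the current search_path:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze,
       pg_size_pretty(pg_total_relation_size('datasession')) AS total_size
  FROM pg_stat_user_tables
 WHERE relname = 'datasession';

If n_dead_tup keeps climbing while last_autovacuum stays old, autovacuum is not keeping up (or not running at all) on that table.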
{
"msg_contents": "Royce Ausburn <[email protected]> writes:\n> I have a problem with autovacuum apparently not doing the job I need it to do.\n\nHm, I wonder whether you're getting bit by bug #5759, which was fixed\nafter 8.3.12.\n\n> I have a table named datasession that is frequently inserted, updated and deleted from. Typically the table will have a few thousand rows in it. Each row typically survives a few days and is updated every 5 - 10 mins. The application receives unreliable, potentially duplicate data from its source, so this table is heavily used for synchronising application threads as well. A typical access pattern is:\n\n> - tx begin\n> - SELECT FOR UPDATE on a single row\n> - Do some application processing (1 - 100 ms)\n> - Possibly UPDATE the row\n> - tx commit\n\nTransactions of that form would not interfere with autovacuum. You'd\nneed something that wants exclusive lock, like a schema change.\n\n> I've read some recent threads and found a discussion (below) on auto vacuum that mentions auto vacuum will be cancelled when a client requests a lock that auto vacuum is using� My questions:\n> 1) Does it look like I'm affected by the same problem as in the below discussion?\n\nNot unless you're seeing a lot of \"canceling autovacuum task\" messages\nin the postmaster log.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Sep 2011 00:21:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ineffective autovacuum "
},
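One hedged way to check for the log messages Tom mentions, without grepping the postmaster log by hand, is to make autovacuum log every run; log_autovacuum_min_duration exists from 8.3 onwards, the value 0 below is only an example, and changing it needs an edit to postgresql.conf plus a reload:

-- postgresql.conf (reload required): log_autovacuum_min_duration = 0
SELECT name, setting
  FROM pg_settings
 WHERE name IN ('autovacuum', 'log_autovacuum_min_duration', 'autovacuum_naptime');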
{
"msg_contents": "\n\n\nOn 27/09/2011, at 2:21 PM, Tom Lane wrote:\n\n> Royce Ausburn <[email protected]> writes:\n>> I have a problem with autovacuum apparently not doing the job I need it to do.\n> \n> Hm, I wonder whether you're getting bit by bug #5759, which was fixed\n> after 8.3.12.\n\nIf this were the case, would I see lots of auto vacuum worker processes in ps that are essentially doing nothing because they're sleeping all the time? If so, then I think perhaps not.\n\n> \n>> I have a table named datasession that is frequently inserted, updated and deleted from. Typically the table will have a few thousand rows in it. Each row typically survives a few days and is updated every 5 - 10 mins. The application receives unreliable, potentially duplicate data from its source, so this table is heavily used for synchronising application threads as well. A typical access pattern is:\n> \n>> - tx begin\n>> - SELECT FOR UPDATE on a single row\n>> - Do some application processing (1 - 100 ms)\n>> - Possibly UPDATE the row\n>> - tx commit\n> \n> Transactions of that form would not interfere with autovacuum. You'd\n> need something that wants exclusive lock, like a schema change.\n> \n>> I've read some recent threads and found a discussion (below) on auto vacuum that mentions auto vacuum will be cancelled when a client requests a lock that auto vacuum is using∑ My questions:\n>> 1) Does it look like I'm affected by the same problem as in the below discussion?\n> \n> Not unless you're seeing a lot of \"canceling autovacuum task\" messages\n> in the postmaster log.\n\nOkay - This is not the case.\n\nSince sending this first email I've up'd the autovacuum log level and I've noticed that the same tables seem to be auto vacuum'd over and over again… Some of the tables are a bit surprising in that they're updated semi-regularly, but not enough (I'd think) to warrant an autovacuum every few minutes… Is this unusual?\n\n\nPerhaps unrelated: I've done some digging around and happened across a nightly task doing:\n\nselect pg_stat_reset()\n\non each of the databases in the cluster…. I've no idea why we're doing that (and our usual sysadmin / DBA has resigned, so I doubt I'll ever know). There must have been a reason at the time, but I wonder if this might be interfering with things?\n\nAt any rate, I think the logs might glean some more interesting information, I'll let it alone for a few hours and hopefully I'll have some more useful information.\n\n--Royce\n\n",
"msg_date": "Tue, 27 Sep 2011 15:08:26 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ineffective autovacuum "
},
{
"msg_contents": "1. First things first: vacuum cannot delete tuples that are still\nvisible to any old running transactions. You might have some very long\nqueries or transactions that prevent it from cleaning properly:\n\nselect * from pg_stat_activity where xact_start < now()-interval '10 minutes';\n\n2. On 8.3 and earlier servers with large tables, it's critical that\nyour max_fsm_pages and max_fsm_relations are tuned properly. Failing\nthat, autovacuum will permanently leak space that can only be fixed\nwith a VACUUM FULL (which will take an exclusive lock and run for a\nvery long time)\n\nPostgreSQL version 8.4 addressed this problem, but for the\nunfortunate, you have to follow the tuning advice here:\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#autovacuum_max_fsm_pages.2C_max_fsm_relations\n\nOn Tue, Sep 27, 2011 at 08:08, Royce Ausburn <[email protected]> wrote:\n> I've noticed that the same tables seem to be auto vacuum'd over and over again… Some of the tables are a bit surprising in that they're updated semi-regularly, but not enough (I'd think) to warrant an autovacuum every few minutes… Is this unusual?\n\nMaybe they're just auto-analyze processes? Those get triggered on\ninsert-only tables too, when vacuum normally wouldn't run.\n\n> Perhaps unrelated: I've done some digging around and happened across a nightly task doing:\n> select pg_stat_reset()\n\nAFAIK (but I could be wrong), vacuum uses a separate set of statistics\nnot affected by pg_stat_reset.\n\nRegards,\nMarti\n",
"msg_date": "Tue, 27 Sep 2011 13:29:14 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ineffective autovacuum"
},
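A slightly extended version of Marti's pg_stat_activity check, as a sketch for 8.3/9.0 (procpid and current_query were renamed in later releases); it also shows which database each old transaction belongs to:

SELECT datname, procpid, usename, xact_start, current_query
  FROM pg_stat_activity
 WHERE xact_start < now() - interval '10 minutes'
 ORDER BY xact_start;

Sessions showing '<IDLE> in transaction' here are the classic culprit: an application opened a transaction and never finished it, which keeps vacuum from removing dead rows.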
{
"msg_contents": "Royce Ausburn <[email protected]> writes:\n> Since sending this first email I've up'd the autovacuum log level and I've noticed that the same tables seem to be auto vacuum'd over and over again… Some of the tables are a bit surprising in that they're updated semi-regularly, but not enough (I'd think) to warrant an autovacuum every few minutes… Is this unusual?\n\nWell, that proves autovacuum isn't getting blocked anyway. At this\npoint I suspect that Marti has fingered the correct issue: you likely\nneed to increase the FSM settings. You should try running a manual\nVACUUM VERBOSE and see if it suggests that more FSM space is needed\n(there'll be some FSM stats at the end of the verbose printout).\n\n> Perhaps unrelated: I've done some digging around and happened across a nightly task doing:\n> select pg_stat_reset()\n> on each of the databases in the cluster…. I've no idea why we're doing that (and our usual sysadmin / DBA has resigned, so I doubt I'll ever know). There must have been a reason at the time, but I wonder if this might be interfering with things?\n\nHmm, it's not helping any. Anything that needs vacuuming, but less\noften than once a day, would get missed due to the stats getting\nforgotten.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Sep 2011 09:49:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ineffective autovacuum "
},
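A sketch of the checks Tom suggests, assuming the table name from this thread; on pre-8.4 servers the free space map summary is printed at the end of the VACUUM VERBOSE output, and the current limits can be read from pg_settings:

VACUUM VERBOSE datasession;

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('max_fsm_pages', 'max_fsm_relations');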
{
"msg_contents": "On Tue, Sep 27, 2011 at 7:49 AM, Tom Lane <[email protected]> wrote:\n> Royce Ausburn <[email protected]> writes:\n>> Since sending this first email I've up'd the autovacuum log level and I've noticed that the same tables seem to be auto vacuum'd over and over again… Some of the tables are a bit surprising in that they're updated semi-regularly, but not enough (I'd think) to warrant an autovacuum every few minutes… Is this unusual?\n>\n> Well, that proves autovacuum isn't getting blocked anyway. At this\n> point I suspect that Marti has fingered the correct issue: you likely\n> need to increase the FSM settings. You should try running a manual\n> VACUUM VERBOSE and see if it suggests that more FSM space is needed\n> (there'll be some FSM stats at the end of the verbose printout).\n\nThat's the probably the best first step, a good second one might be to\nincrease the aggressiveness of autovac by lowering the delay, and\nincreasing the cost limit.\n\nOP: You need to watch it a little closer during the day. first do as\nsuggested and increase the max_fsm_pages. High settings on it don't\ncost a lot as they're only 6 bytes per page. So 1M max_fsm_pages\ncosts 6M of shared RAM. After that run vacuum verbose every hour or\ntwo to keep an eye on the trend of how many pages it says are needed.\nIf that number doesn't stabilize, but just keeps growing then you're\nnot vacuuming aggressively enough. Up autovacuum_vacuum_cost_limit by\na couple of factors, and lower autovacuum_vacuum_cost_delay to 5ms or\nless. Make sure you don't swamp your IO subsystem. On big machines\nwith lots of spindles it's hard to swamp the IO. On smaller\nworkstation class machines it's pretty easy.\n",
"msg_date": "Tue, 27 Sep 2011 08:00:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ineffective autovacuum"
},
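The knobs Scott refers to can be inspected with the query below; the commented values are only illustrative starting points rather than recommendations for every workload, and they have to be changed in postgresql.conf followed by a reload:

SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('autovacuum_vacuum_cost_delay',
                'autovacuum_vacuum_cost_limit',
                'autovacuum_vacuum_scale_factor',
                'autovacuum_naptime');
-- e.g. autovacuum_vacuum_cost_delay = 5ms
--      autovacuum_vacuum_cost_limit = 1000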
{
"msg_contents": "On 27/09/2011, at 8:29 PM, Marti Raudsepp wrote:\n\n> 1. First things first: vacuum cannot delete tuples that are still\n> visible to any old running transactions. You might have some very long\n> queries or transactions that prevent it from cleaning properly:\n> \n> select * from pg_stat_activity where xact_start < now()-interval '10 minutes';\n\nThanks -- that query is very handy. I suspect this might be the cause of our woes as this query results in a handful of long lived connections, however they're connections to databases other than the one that I'm having trouble with. \n\nI've checked up on the FSM as you suggested, I don't think that's the problem as there're no warnings in the verbose output nor the logs. But another clue:\n\nDETAIL: 93 dead row versions cannot be removed yet.\n\nAfter clearing those stuffed transactions vacuum verbose manages to clear away all the dead rows… That's confirmation enough for me - Now to find the application bugs - Thanks Tom, Marti & Scott for your help!\n\n--Royce\n\n\nOn 27/09/2011, at 8:29 PM, Marti Raudsepp wrote:1. First things first: vacuum cannot delete tuples that are stillvisible to any old running transactions. You might have some very longqueries or transactions that prevent it from cleaning properly:select * from pg_stat_activity where xact_start < now()-interval '10 minutes';Thanks -- that query is very handy. I suspect this might be the cause of our woes as this query results in a handful of long lived connections, however they're connections to databases other than the one that I'm having trouble with. I've checked up on the FSM as you suggested, I don't think that's the problem as there're no warnings in the verbose output nor the logs. But another clue:DETAIL: 93 dead row versions cannot be removed yet.After clearing those stuffed transactions vacuum verbose manages to clear away all the dead rows… That's confirmation enough for me - Now to find the application bugs - Thanks Tom, Marti & Scott for your help!--Royce",
"msg_date": "Wed, 28 Sep 2011 00:27:39 +1000",
"msg_from": "Royce Ausburn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ineffective autovacuum"
}
] |
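A small follow-up sketch to the resolution above: listing the forgotten open transactions in the cluster, together with the database each one is connected to, makes sessions like the ones Royce found easy to spot (column names and the '<IDLE> in transaction' marker are as shown by 8.3/9.0; later releases use pid, state and query instead):

SELECT datname, procpid, xact_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
 ORDER BY xact_start;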
[
{
"msg_contents": "Hello Everyone,\n\nI am preparing a plan to track the tables undergoing Full Table Scans for\nmost number of times.\n\nIf i track seq_scan from the pg_stat_user_tables, will that help\n(considering the latest analyzed ones) ?\n\nPlease help !\n\nThanks\nVB\n\nHello Everyone,I am preparing a plan to track the tables undergoing Full Table Scans for most number of times.If i track seq_scan from the pg_stat_user_tables, will that help (considering the latest analyzed ones) ?\nPlease help !ThanksVB",
"msg_date": "Tue, 27 Sep 2011 18:06:01 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": Tracking Full Table Scans"
},
{
"msg_contents": "Venkat Balaji <[email protected]> wrote:\n \n> I am preparing a plan to track the tables undergoing Full Table\n> Scans for most number of times.\n> \n> If i track seq_scan from the pg_stat_user_tables, will that help\n> (considering the latest analyzed ones) ?\n \nWell, yeah; but be careful not to assume that a sequential scan is\nalways a bad thing. Here's our top ten tables for sequential scans\nin a database which is performing quite well:\n \ncc=> select seq_scan, n_live_tup, relname\ncc-> from pg_stat_user_tables\ncc-> order by seq_scan desc\ncc-> limit 10;\n seq_scan | n_live_tup | relname\n----------+------------+--------------------\n 81264339 | 20 | MaintCode\n 16840299 | 3 | DbTranImageStatus\n 14905181 | 18 | ControlFeature\n 11908114 | 10 | AgingBoundary\n 8789288 | 22 | CtofcTypeCode\n 7786110 | 6 | PrefCounty\n 6303959 | 9 | ProtOrderHistEvent\n 5835430 | 1 | ControlRecord\n 5466806 | 1 | ControlAccounting\n 5202028 | 12 | ProtEventOrderType\n(10 rows)\n \nYou'll notice that they are all very small tables. In all cases the\nentire heap fits in one page, so any form of indexed scan would at\nleast double the number of pages visited, and slow things down.\n \nIf you have queries which are not performing to expectations, your\nbest bet might be to pick one of them and post it here, following\nthe advice on this page:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Tue, 27 Sep 2011 09:32:14 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Tracking Full Table Scans"
},
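To separate harmless small-table scans like the ones Kevin shows from expensive ones, a hedged variation is to rank tables by rows read through sequential scans and look at the average rows per scan next to the live row count:

SELECT relname, seq_scan, seq_tup_read,
       CASE WHEN seq_scan > 0 THEN seq_tup_read / seq_scan END AS avg_rows_per_scan,
       n_live_tup
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read DESC
 LIMIT 10;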
{
"msg_contents": "Thanks a lot Kevin !!\n\nYes. I intended to track full table scans first to ensure that only small\ntables or tables with very less pages are (as you said) getting scanned\nfull.\n\nI am yet to identify slow running queries. Will surely hit back with them in\nfuture.\n\nThanks\nVB\n\n\n\nOn Tue, Sep 27, 2011 at 8:02 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Venkat Balaji <[email protected]> wrote:\n>\n> > I am preparing a plan to track the tables undergoing Full Table\n> > Scans for most number of times.\n> >\n> > If i track seq_scan from the pg_stat_user_tables, will that help\n> > (considering the latest analyzed ones) ?\n>\n> Well, yeah; but be careful not to assume that a sequential scan is\n> always a bad thing. Here's our top ten tables for sequential scans\n> in a database which is performing quite well:\n>\n> cc=> select seq_scan, n_live_tup, relname\n> cc-> from pg_stat_user_tables\n> cc-> order by seq_scan desc\n> cc-> limit 10;\n> seq_scan | n_live_tup | relname\n> ----------+------------+--------------------\n> 81264339 | 20 | MaintCode\n> 16840299 | 3 | DbTranImageStatus\n> 14905181 | 18 | ControlFeature\n> 11908114 | 10 | AgingBoundary\n> 8789288 | 22 | CtofcTypeCode\n> 7786110 | 6 | PrefCounty\n> 6303959 | 9 | ProtOrderHistEvent\n> 5835430 | 1 | ControlRecord\n> 5466806 | 1 | ControlAccounting\n> 5202028 | 12 | ProtEventOrderType\n> (10 rows)\n>\n> You'll notice that they are all very small tables. In all cases the\n> entire heap fits in one page, so any form of indexed scan would at\n> least double the number of pages visited, and slow things down.\n>\n> If you have queries which are not performing to expectations, your\n> best bet might be to pick one of them and post it here, following\n> the advice on this page:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> -Kevin\n>\n\nThanks a lot Kevin !!Yes. I intended to track full table scans first to ensure that only small tables or tables with very less pages are (as you said) getting scanned full.I am yet to identify slow running queries. Will surely hit back with them in future.\nThanksVBOn Tue, Sep 27, 2011 at 8:02 PM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> I am preparing a plan to track the tables undergoing Full Table\n> Scans for most number of times.\n>\n> If i track seq_scan from the pg_stat_user_tables, will that help\n> (considering the latest analyzed ones) ?\n\nWell, yeah; but be careful not to assume that a sequential scan is\nalways a bad thing. Here's our top ten tables for sequential scans\nin a database which is performing quite well:\n\ncc=> select seq_scan, n_live_tup, relname\ncc-> from pg_stat_user_tables\ncc-> order by seq_scan desc\ncc-> limit 10;\n seq_scan | n_live_tup | relname\n----------+------------+--------------------\n 81264339 | 20 | MaintCode\n 16840299 | 3 | DbTranImageStatus\n 14905181 | 18 | ControlFeature\n 11908114 | 10 | AgingBoundary\n 8789288 | 22 | CtofcTypeCode\n 7786110 | 6 | PrefCounty\n 6303959 | 9 | ProtOrderHistEvent\n 5835430 | 1 | ControlRecord\n 5466806 | 1 | ControlAccounting\n 5202028 | 12 | ProtEventOrderType\n(10 rows)\n\nYou'll notice that they are all very small tables. 
In all cases the\nentire heap fits in one page, so any form of indexed scan would at\nleast double the number of pages visited, and slow things down.\n\nIf you have queries which are not performing to expectations, your\nbest bet might be to pick one of them and post it here, following\nthe advice on this page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n-Kevin",
"msg_date": "Tue, 27 Sep 2011 21:56:51 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Tracking Full Table Scans"
},
{
"msg_contents": "I would like to know the difference between \"n_tup_upd\" and \"n_tup_hot_upd\".\n\nThanks\nVB\n\nOn Tue, Sep 27, 2011 at 9:56 PM, Venkat Balaji <[email protected]>wrote:\n\n> Thanks a lot Kevin !!\n>\n> Yes. I intended to track full table scans first to ensure that only small\n> tables or tables with very less pages are (as you said) getting scanned\n> full.\n>\n> I am yet to identify slow running queries. Will surely hit back with them\n> in future.\n>\n> Thanks\n> VB\n>\n>\n>\n> On Tue, Sep 27, 2011 at 8:02 PM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> Venkat Balaji <[email protected]> wrote:\n>>\n>> > I am preparing a plan to track the tables undergoing Full Table\n>> > Scans for most number of times.\n>> >\n>> > If i track seq_scan from the pg_stat_user_tables, will that help\n>> > (considering the latest analyzed ones) ?\n>>\n>> Well, yeah; but be careful not to assume that a sequential scan is\n>> always a bad thing. Here's our top ten tables for sequential scans\n>> in a database which is performing quite well:\n>>\n>> cc=> select seq_scan, n_live_tup, relname\n>> cc-> from pg_stat_user_tables\n>> cc-> order by seq_scan desc\n>> cc-> limit 10;\n>> seq_scan | n_live_tup | relname\n>> ----------+------------+--------------------\n>> 81264339 | 20 | MaintCode\n>> 16840299 | 3 | DbTranImageStatus\n>> 14905181 | 18 | ControlFeature\n>> 11908114 | 10 | AgingBoundary\n>> 8789288 | 22 | CtofcTypeCode\n>> 7786110 | 6 | PrefCounty\n>> 6303959 | 9 | ProtOrderHistEvent\n>> 5835430 | 1 | ControlRecord\n>> 5466806 | 1 | ControlAccounting\n>> 5202028 | 12 | ProtEventOrderType\n>> (10 rows)\n>>\n>> You'll notice that they are all very small tables. In all cases the\n>> entire heap fits in one page, so any form of indexed scan would at\n>> least double the number of pages visited, and slow things down.\n>>\n>> If you have queries which are not performing to expectations, your\n>> best bet might be to pick one of them and post it here, following\n>> the advice on this page:\n>>\n>> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>>\n>> -Kevin\n>>\n>\n>\n\nI would like to know the difference between \"n_tup_upd\" and \"n_tup_hot_upd\".ThanksVBOn Tue, Sep 27, 2011 at 9:56 PM, Venkat Balaji <[email protected]> wrote:\nThanks a lot Kevin !!Yes. I intended to track full table scans first to ensure that only small tables or tables with very less pages are (as you said) getting scanned full.\nI am yet to identify slow running queries. Will surely hit back with them in future.\nThanksVBOn Tue, Sep 27, 2011 at 8:02 PM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> I am preparing a plan to track the tables undergoing Full Table\n> Scans for most number of times.\n>\n> If i track seq_scan from the pg_stat_user_tables, will that help\n> (considering the latest analyzed ones) ?\n\nWell, yeah; but be careful not to assume that a sequential scan is\nalways a bad thing. 
Here's our top ten tables for sequential scans\nin a database which is performing quite well:\n\ncc=> select seq_scan, n_live_tup, relname\ncc-> from pg_stat_user_tables\ncc-> order by seq_scan desc\ncc-> limit 10;\n seq_scan | n_live_tup | relname\n----------+------------+--------------------\n 81264339 | 20 | MaintCode\n 16840299 | 3 | DbTranImageStatus\n 14905181 | 18 | ControlFeature\n 11908114 | 10 | AgingBoundary\n 8789288 | 22 | CtofcTypeCode\n 7786110 | 6 | PrefCounty\n 6303959 | 9 | ProtOrderHistEvent\n 5835430 | 1 | ControlRecord\n 5466806 | 1 | ControlAccounting\n 5202028 | 12 | ProtEventOrderType\n(10 rows)\n\nYou'll notice that they are all very small tables. In all cases the\nentire heap fits in one page, so any form of indexed scan would at\nleast double the number of pages visited, and slow things down.\n\nIf you have queries which are not performing to expectations, your\nbest bet might be to pick one of them and post it here, following\nthe advice on this page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n-Kevin",
"msg_date": "Tue, 27 Sep 2011 22:12:03 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Tracking Full Table Scans"
},
{
"msg_contents": "Venkat Balaji <[email protected]> wrote:\n \n> I would like to know the difference between \"n_tup_upd\" and\n> \"n_tup_hot_upd\".\n \nA HOT update is used when none of the updated columns are used in an\nindex and there is room for the new tuple (version of the row) on\nthe same page as the old tuple. This is faster for a number of\nreasons, and cleanup of the old tuple is a little different.\n \nIf you want the gory implementation details, take a look at this\nfile in the source tree:\n \nsrc/backend/access/heap/README.HOT\n \n-Kevin\n",
"msg_date": "Tue, 27 Sep 2011 12:15:43 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Tracking Full Table Scans"
},
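As a quick illustration of the two counters Kevin explains, the share of updates that were HOT can be computed per table; a low percentage on a heavily updated table hints that the updates touch indexed columns or that pages have no free space left:

SELECT relname, n_tup_upd, n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_update_pct
  FROM pg_stat_user_tables
 WHERE n_tup_upd > 0
 ORDER BY n_tup_upd DESC
 LIMIT 10;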
{
"msg_contents": "On 09/28/2011 12:26 AM, Venkat Balaji wrote:\n> Thanks a lot Kevin !!\n>\n> Yes. I intended to track full table scans first to ensure that only\n> small tables or tables with very less pages are (as you said) getting\n> scanned full.\n\nIt can also be best to do a full table scan of a big table for some \nqueries. If the query needs to touch all the data in a table - for \nexample, for an aggregate - then the query will often complete fastest \nand with less disk use by using a sequential scan.\n\nI guess what you'd really want to know is to find out about queries that \ndo seqscans to match relatively small fractions of the total tuples \nscanned, ie low-selectivity seqscans. I'm not sure whether or not it's \npossible to gather this data with PostgreSQL's current level of stats \ndetail.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 28 Sep 2011 08:55:12 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Tracking Full Table Scans"
},
{
"msg_contents": "Thanks Kevin !!\n\nI will have a look at the source tree.\n\nRegards\nVB\n\nOn Tue, Sep 27, 2011 at 10:45 PM, Kevin Grittner <\[email protected]> wrote:\n\n> Venkat Balaji <[email protected]> wrote:\n>\n> > I would like to know the difference between \"n_tup_upd\" and\n> > \"n_tup_hot_upd\".\n>\n> A HOT update is used when none of the updated columns are used in an\n> index and there is room for the new tuple (version of the row) on\n> the same page as the old tuple. This is faster for a number of\n> reasons, and cleanup of the old tuple is a little different.\n>\n> If you want the gory implementation details, take a look at this\n> file in the source tree:\n>\n> src/backend/access/heap/README.HOT\n>\n> -Kevin\n>\n\nThanks Kevin !!I will have a look at the source tree.RegardsVBOn Tue, Sep 27, 2011 at 10:45 PM, Kevin Grittner <[email protected]> wrote:\nVenkat Balaji <[email protected]> wrote:\n\n> I would like to know the difference between \"n_tup_upd\" and\n> \"n_tup_hot_upd\".\n\nA HOT update is used when none of the updated columns are used in an\nindex and there is room for the new tuple (version of the row) on\nthe same page as the old tuple. This is faster for a number of\nreasons, and cleanup of the old tuple is a little different.\n\nIf you want the gory implementation details, take a look at this\nfile in the source tree:\n\nsrc/backend/access/heap/README.HOT\n\n-Kevin",
"msg_date": "Wed, 28 Sep 2011 11:39:48 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Tracking Full Table Scans"
},
{
"msg_contents": "Yes. I am looking for the justified full table scans.\n\nIf bigger tables are getting scanned, I would like to know %age rows scanned\nagainst %age rows as the output.\n\nIf the query needs 80% of the rows as the output, then a full table scan is\nalways better.\n\nI believe there is a possibility for this in Postgres. I think we can get\nthis using pg_stat_user_table, pg_statio_user_tables and pg_stats.\n\nI will post the calculation once it i get it.\n\nThanks\nVB\n\nOn Wed, Sep 28, 2011 at 6:25 AM, Craig Ringer <[email protected]> wrote:\n\n> On 09/28/2011 12:26 AM, Venkat Balaji wrote:\n>\n>> Thanks a lot Kevin !!\n>>\n>> Yes. I intended to track full table scans first to ensure that only\n>> small tables or tables with very less pages are (as you said) getting\n>> scanned full.\n>>\n>\n> It can also be best to do a full table scan of a big table for some\n> queries. If the query needs to touch all the data in a table - for example,\n> for an aggregate - then the query will often complete fastest and with less\n> disk use by using a sequential scan.\n>\n> I guess what you'd really want to know is to find out about queries that do\n> seqscans to match relatively small fractions of the total tuples scanned, ie\n> low-selectivity seqscans. I'm not sure whether or not it's possible to\n> gather this data with PostgreSQL's current level of stats detail.\n>\n> --\n> Craig Ringer\n>\n\nYes. I am looking for the justified full table scans.If bigger tables are getting scanned, I would like to know %age rows scanned against %age rows as the output.If the query needs 80% of the rows as the output, then a full table scan is always better.\nI believe there is a possibility for this in Postgres. I think we can get this using pg_stat_user_table, pg_statio_user_tables and pg_stats.I will post the calculation once it i get it.\nThanksVBOn Wed, Sep 28, 2011 at 6:25 AM, Craig Ringer <[email protected]> wrote:\nOn 09/28/2011 12:26 AM, Venkat Balaji wrote:\n\nThanks a lot Kevin !!\n\nYes. I intended to track full table scans first to ensure that only\nsmall tables or tables with very less pages are (as you said) getting\nscanned full.\n\n\nIt can also be best to do a full table scan of a big table for some queries. If the query needs to touch all the data in a table - for example, for an aggregate - then the query will often complete fastest and with less disk use by using a sequential scan.\n\nI guess what you'd really want to know is to find out about queries that do seqscans to match relatively small fractions of the total tuples scanned, ie low-selectivity seqscans. I'm not sure whether or not it's possible to gather this data with PostgreSQL's current level of stats detail.\n\n\n--\nCraig Ringer",
"msg_date": "Wed, 28 Sep 2011 11:43:25 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Tracking Full Table Scans"
}
] |
[
{
"msg_contents": "Hello Everyone,\n\nI am implementing a PostgreSQL performance monitoring system (to monitor the\nbelow) which would help us understand the database behavior -\n\n1. Big Full Table Scans\n2. Table with high IOs (hot tables)\n3. Highly used Indexes\n4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n5. Index usage for heap blk hits\n6. Tracking Checkpoints\n7. Tracking CPU, IO and memory usage ( by PG processes ) -- desperately\nneeded\n8. Buffer cache usage\n9. Tables, Indexes and Database growth statistics\n\nand more..\n\nI am struck at building a script or monitoring tool which gets us CPU usage,\nIO metrics and RAM usage of the database server.\n\nCan someone please help me achieve this ?\n\nI need to monitor a 12 processor system with 6 cores. I need to know how\neach CPU is performing.\n\nPlease help me know the availability of any open source monitoring tools or\nscripts for PG-9.0 on RHEL5.\n\nI will hit back with questions regarding monitoring in coming days.\n\nThanks\nVB\n\nHello Everyone,I am implementing a PostgreSQL performance monitoring system (to monitor the below) which would help us understand the database behavior -1. Big Full Table Scans\n2. Table with high IOs (hot tables)3. Highly used Indexes4. Tables undergoing high DMLs with index scans 0 (with unused indexes)5. Index usage for heap blk hits6. Tracking Checkpoints\n7. Tracking CPU, IO and memory usage ( by PG processes ) -- desperately needed8. Buffer cache usage9. Tables, Indexes and Database growth statisticsand more..\nI am struck at building a script or monitoring tool which gets us CPU usage, IO metrics and RAM usage of the database server. Can someone please help me achieve this ?\nI need to monitor a 12 processor system with 6 cores. I need to know how each CPU is performing.Please help me know the availability of any open source monitoring tools or scripts for PG-9.0 on RHEL5.\nI will hit back with questions regarding monitoring in coming days.ThanksVB",
"msg_date": "Tue, 27 Sep 2011 22:05:42 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL-9.0 Monitoring System to improve performance"
},
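For item 9 (growth statistics) a minimal sketch is to snapshot sizes into a history table from a nightly job; the table name size_history is made up for the example and the sampling interval is up to the reader:

CREATE TABLE size_history (
    sampled_at  timestamptz,
    relname     name,
    total_bytes bigint
);

INSERT INTO size_history
SELECT now(), relname, pg_total_relation_size(relid)
  FROM pg_stat_user_tables;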
{
"msg_contents": "Venkat Balaji wrote:\n>\n> 1. Big Full Table Scans\n> 2. Table with high IOs (hot tables)\n> 3. Highly used Indexes\n> 4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n> 5. Index usage for heap blk hits\n> 6. Tracking Checkpoints\n\nThis is fairly easy to collect and analyze. You might take a look at \npgstatspack to see how one program collects snapshots of this sort of \ninformation: http://pgfoundry.org/projects/pgstatspack/\n\n>\n> 8. Buffer cache usage\n\nHigh-level information about this can be collected by things like the \npg_statio* views. If you want to actually look inside the buffer cache \nand get detailed statistics on it, that's a harder problem. I have some \nsample queries for that sort of thing in my book.\n\n> 9. Tables, Indexes and Database growth statistics\n\nThis is valuable information to monitor over time, but I'm not aware of \nany existing tools that track it well. It won't be hard to collect it \non your own though.\n\n> 7. Tracking CPU, IO and memory usage ( by PG processes ) -- \n> desperately needed\n\nI'm not aware of any open-source tool that tracks this information yet. \nPostgreSQL has no idea what CPU, memory, and I/O is being done by the OS \nwhen you execute a query. The operating system knows some of that, but \nhas no idea what the database is doing. You can see a real-time \nsnapshot combining the two pieces of info using the pg_top program: \nhttp://ptop.projects.postgresql.org/ but I suspect what you want is a \nhistorical record of it instead.\n\nWriting something that tracks both at once and logs all the information \nfor later analysis is one of the big missing pieces in PostgreSQL \nmanagement. I have some ideas for how to build such a thing. But I \nexpect it will take a few months of development time to get right, and I \nhaven't come across someone yet who wants to fund that size of project \nfor this purpose yet.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 28 Sep 2011 00:05:30 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
},
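A sketch of the high-level buffer cache view Greg mentions, using only the standard pg_statio view (no extension required); note it only reflects PostgreSQL's shared buffers, not the OS page cache:

SELECT sum(heap_blks_read) AS heap_read,
       sum(heap_blks_hit)  AS heap_hit,
       round(100.0 * sum(heap_blks_hit)
             / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) AS hit_pct
  FROM pg_statio_user_tables;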
{
"msg_contents": "On 28 Září 2011, 9:05, Greg Smith wrote:\n> Venkat Balaji wrote:\n>>\n>> 1. Big Full Table Scans\n>> 2. Table with high IOs (hot tables)\n>> 3. Highly used Indexes\n>> 4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n>> 5. Index usage for heap blk hits\n>> 6. Tracking Checkpoints\n>\n> This is fairly easy to collect and analyze. You might take a look at\n> pgstatspack to see how one program collects snapshots of this sort of\n> information: http://pgfoundry.org/projects/pgstatspack/\n\nIt's definitely fairly easy to collect, and pgstatspack help a lot. But\ninterpreting the collected data is much harder, especially when it comes\nto indexes. For example UNIQUE indexes often have idx_scan=0, because\nchecking the uniqueness is not an index scan. Other indexes may be created\nfor rare queries (e.g. a batch running once a year), so you need a very\nlong interval between the snapshots.\n\n>> 8. Buffer cache usage\n>\n> High-level information about this can be collected by things like the\n> pg_statio* views. If you want to actually look inside the buffer cache\n> and get detailed statistics on it, that's a harder problem. I have some\n> sample queries for that sort of thing in my book.\n\nThere's an extension pg_buffercache for that (the queries are using it IIRC).\n\n>> 9. Tables, Indexes and Database growth statistics\n>\n> This is valuable information to monitor over time, but I'm not aware of\n> any existing tools that track it well. It won't be hard to collect it\n> on your own though.\n\nWhat about check_postgres.pl script?\n\n>> 7. Tracking CPU, IO and memory usage ( by PG processes ) --\n>> desperately needed\n\nWhat about using check_postgres.pl and other plugins? Never used that\nthough, so maybe there are issues I'm not aware of.\n\n> I'm not aware of any open-source tool that tracks this information yet.\n> PostgreSQL has no idea what CPU, memory, and I/O is being done by the OS\n> when you execute a query. The operating system knows some of that, but\n> has no idea what the database is doing. You can see a real-time\n> snapshot combining the two pieces of info using the pg_top program:\n> http://ptop.projects.postgresql.org/ but I suspect what you want is a\n> historical record of it instead.\n>\n> Writing something that tracks both at once and logs all the information\n> for later analysis is one of the big missing pieces in PostgreSQL\n> management. I have some ideas for how to build such a thing. But I\n> expect it will take a few months of development time to get right, and I\n> haven't come across someone yet who wants to fund that size of project\n> for this purpose yet.\n\nA long (long long long) time ago I wrote something like this, it's called\npgmonitor and is available here:\n\n http://sourceforge.net/apps/trac/pgmonitor/\n\nBut the development stalled (not a rare thing for projects developed by a\nsingle person) and I'm not quite sure about the right direction. Maybe\nit's worthless, maybe it would be a good starting point - feel free to\ncomment.\n\nTomas\n\n",
"msg_date": "Wed, 28 Sep 2011 14:14:17 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve\n performance"
},
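Tomas's caveat about unique indexes can be folded into an unused-index check like the sketch below, which skips unique indexes and sorts the candidates by size; indexes used only by rare batch jobs will still show up here and need a human decision:

SELECT s.schemaname, s.relname, s.indexrelname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
  FROM pg_stat_user_indexes s
  JOIN pg_index i ON i.indexrelid = s.indexrelid
 WHERE s.idx_scan = 0
   AND NOT i.indisunique
 ORDER BY pg_relation_size(s.indexrelid) DESC;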
{
"msg_contents": "Thanks Greg !\n\nSorry, I should have put it the other way.\n\nActually, I am looking for any tool (if exists) which gets me the following\ninformation with one installation or so.\n\nPlease see my replies below.\n\nThanks\nVB\n\nOn Wed, Sep 28, 2011 at 12:35 PM, Greg Smith <[email protected]> wrote:\n\n> Venkat Balaji wrote:\n>\n>>\n>> 1. Big Full Table Scans\n>> 2. Table with high IOs (hot tables)\n>> 3. Highly used Indexes\n>> 4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n>> 5. Index usage for heap blk hits\n>> 6. Tracking Checkpoints\n>>\n>\n> This is fairly easy to collect and analyze. You might take a look at\n> pgstatspack to see how one program collects snapshots of this sort of\n> information: http://pgfoundry.org/projects/**pgstatspack/<http://pgfoundry.org/projects/pgstatspack/>\n>\n> I am in the process of installing pgstatspack ( i have used it before ).\n>> We are waiting for the downtime (to load this through shared preloaded\n>> libraries).\n>\n>\n\n>\n>> 8. Buffer cache usage\n>>\n>\n> High-level information about this can be collected by things like the\n> pg_statio* views. If you want to actually look inside the buffer cache and\n> get detailed statistics on it, that's a harder problem. I have some sample\n> queries for that sort of thing in my book.\n>\n> I do have pgstattuple contrib module installed and is collecting the data\n> and loading it into the auditing tables.\n>\n\n\n>\n> 9. Tables, Indexes and Database growth statistics\n>>\n>\n> This is valuable information to monitor over time, but I'm not aware of any\n> existing tools that track it well. It won't be hard to collect it on your\n> own though.\n>\n> We are getting it done on daily basis and we also have metrics of data\n> growth\n>\n> 7. Tracking CPU, IO and memory usage ( by PG processes ) -- desperately\n>> needed\n>>\n>\n> I'm not aware of any open-source tool that tracks this information yet.\n> PostgreSQL has no idea what CPU, memory, and I/O is being done by the OS\n> when you execute a query. The operating system knows some of that, but has\n> no idea what the database is doing. You can see a real-time snapshot\n> combining the two pieces of info using the pg_top program:\n> http://ptop.projects.**postgresql.org/<http://ptop.projects.postgresql.org/>but I suspect what you want is a historical record of it instead.\n>\n> Writing something that tracks both at once and logs all the information for\n> later analysis is one of the big missing pieces in PostgreSQL management. I\n> have some ideas for how to build such a thing. But I expect it will take a\n> few months of development time to get right, and I haven't come across\n> someone yet who wants to fund that size of project for this purpose yet.\n>\n> As of now i am relying on MPSTAT and will be testing NMON analyzer (this\ngets me the graph)\n\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\n>\n\nThanks Greg !Sorry, I should have put it the other way.Actually, I am looking for any tool (if exists) which gets me the following information with one installation or so.\nPlease see my replies below.ThanksVBOn Wed, Sep 28, 2011 at 12:35 PM, Greg Smith <[email protected]> wrote:\nVenkat Balaji wrote:\n\n\n1. Big Full Table Scans\n2. Table with high IOs (hot tables)\n3. Highly used Indexes\n4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n5. Index usage for heap blk hits\n6. 
Tracking Checkpoints\n\n\nThis is fairly easy to collect and analyze. You might take a look at pgstatspack to see how one program collects snapshots of this sort of information: http://pgfoundry.org/projects/pgstatspack/\n\n\nI am in the process of installing pgstatspack ( i have used it before ). We are waiting for the downtime (to load this through shared preloaded libraries).\n \n\n8. Buffer cache usage\n\n\nHigh-level information about this can be collected by things like the pg_statio* views. If you want to actually look inside the buffer cache and get detailed statistics on it, that's a harder problem. I have some sample queries for that sort of thing in my book.\n\nI do have pgstattuple contrib module installed and is collecting the data and loading it into the auditing tables. \n\n\n9. Tables, Indexes and Database growth statistics\n\n\nThis is valuable information to monitor over time, but I'm not aware of any existing tools that track it well. It won't be hard to collect it on your own though.We are getting it done on daily basis and we also have metrics of data growth\n\n\n7. Tracking CPU, IO and memory usage ( by PG processes ) -- desperately needed\n\n\nI'm not aware of any open-source tool that tracks this information yet. PostgreSQL has no idea what CPU, memory, and I/O is being done by the OS when you execute a query. The operating system knows some of that, but has no idea what the database is doing. You can see a real-time snapshot combining the two pieces of info using the pg_top program: http://ptop.projects.postgresql.org/ but I suspect what you want is a historical record of it instead.\n\nWriting something that tracks both at once and logs all the information for later analysis is one of the big missing pieces in PostgreSQL management. I have some ideas for how to build such a thing. But I expect it will take a few months of development time to get right, and I haven't come across someone yet who wants to fund that size of project for this purpose yet.\n\n As of now i am relying on MPSTAT and will be testing NMON analyzer (this gets me the graph) \n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Fri, 30 Sep 2011 16:00:25 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
},
{
"msg_contents": "Hi Tomas,\n\nI will let you know about \"check_postgres.pl\".\n\nWe will explore \"pgmonitor\" as well.\n\nThe other tool we are working on is \"pgwatch\", we found this very useful.\n\nThanks\nVB\n\nOn Wed, Sep 28, 2011 at 5:44 PM, Tomas Vondra <[email protected]> wrote:\n\n> On 28 Září 2011, 9:05, Greg Smith wrote:\n> > Venkat Balaji wrote:\n> >>\n> >> 1. Big Full Table Scans\n> >> 2. Table with high IOs (hot tables)\n> >> 3. Highly used Indexes\n> >> 4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n> >> 5. Index usage for heap blk hits\n> >> 6. Tracking Checkpoints\n> >\n> > This is fairly easy to collect and analyze. You might take a look at\n> > pgstatspack to see how one program collects snapshots of this sort of\n> > information: http://pgfoundry.org/projects/pgstatspack/\n>\n> It's definitely fairly easy to collect, and pgstatspack help a lot. But\n> interpreting the collected data is much harder, especially when it comes\n> to indexes. For example UNIQUE indexes often have idx_scan=0, because\n> checking the uniqueness is not an index scan. Other indexes may be created\n> for rare queries (e.g. a batch running once a year), so you need a very\n> long interval between the snapshots.\n>\n> >> 8. Buffer cache usage\n> >\n> > High-level information about this can be collected by things like the\n> > pg_statio* views. If you want to actually look inside the buffer cache\n> > and get detailed statistics on it, that's a harder problem. I have some\n> > sample queries for that sort of thing in my book.\n>\n> There's an extension pg_buffercache for that (the queries are using it\n> IIRC).\n>\n> >> 9. Tables, Indexes and Database growth statistics\n> >\n> > This is valuable information to monitor over time, but I'm not aware of\n> > any existing tools that track it well. It won't be hard to collect it\n> > on your own though.\n>\n> What about check_postgres.pl script?\n>\n> >> 7. Tracking CPU, IO and memory usage ( by PG processes ) --\n> >> desperately needed\n>\n> What about using check_postgres.pl and other plugins? Never used that\n> though, so maybe there are issues I'm not aware of.\n>\n> > I'm not aware of any open-source tool that tracks this information yet.\n> > PostgreSQL has no idea what CPU, memory, and I/O is being done by the OS\n> > when you execute a query. The operating system knows some of that, but\n> > has no idea what the database is doing. You can see a real-time\n> > snapshot combining the two pieces of info using the pg_top program:\n> > http://ptop.projects.postgresql.org/ but I suspect what you want is a\n> > historical record of it instead.\n> >\n> > Writing something that tracks both at once and logs all the information\n> > for later analysis is one of the big missing pieces in PostgreSQL\n> > management. I have some ideas for how to build such a thing. But I\n> > expect it will take a few months of development time to get right, and I\n> > haven't come across someone yet who wants to fund that size of project\n> > for this purpose yet.\n>\n> A long (long long long) time ago I wrote something like this, it's called\n> pgmonitor and is available here:\n>\n> http://sourceforge.net/apps/trac/pgmonitor/\n>\n> But the development stalled (not a rare thing for projects developed by a\n> single person) and I'm not quite sure about the right direction. 
Maybe\n> it's worthless, maybe it would be a good starting point - feel free to\n> comment.\n>\n> Tomas\n>\n>\n\nHi Tomas,I will let you know about \"check_postgres.pl\".We will explore \"pgmonitor\" as well.\nThe other tool we are working on is \"pgwatch\", we found this very useful.ThanksVBOn Wed, Sep 28, 2011 at 5:44 PM, Tomas Vondra <[email protected]> wrote:\nOn 28 Září 2011, 9:05, Greg Smith wrote:\n> Venkat Balaji wrote:\n>>\n>> 1. Big Full Table Scans\n>> 2. Table with high IOs (hot tables)\n>> 3. Highly used Indexes\n>> 4. Tables undergoing high DMLs with index scans 0 (with unused indexes)\n>> 5. Index usage for heap blk hits\n>> 6. Tracking Checkpoints\n>\n> This is fairly easy to collect and analyze. You might take a look at\n> pgstatspack to see how one program collects snapshots of this sort of\n> information: http://pgfoundry.org/projects/pgstatspack/\n\nIt's definitely fairly easy to collect, and pgstatspack help a lot. But\ninterpreting the collected data is much harder, especially when it comes\nto indexes. For example UNIQUE indexes often have idx_scan=0, because\nchecking the uniqueness is not an index scan. Other indexes may be created\nfor rare queries (e.g. a batch running once a year), so you need a very\nlong interval between the snapshots.\n\n>> 8. Buffer cache usage\n>\n> High-level information about this can be collected by things like the\n> pg_statio* views. If you want to actually look inside the buffer cache\n> and get detailed statistics on it, that's a harder problem. I have some\n> sample queries for that sort of thing in my book.\n\nThere's an extension pg_buffercache for that (the queries are using it IIRC).\n\n>> 9. Tables, Indexes and Database growth statistics\n>\n> This is valuable information to monitor over time, but I'm not aware of\n> any existing tools that track it well. It won't be hard to collect it\n> on your own though.\n\nWhat about check_postgres.pl script?\n\n>> 7. Tracking CPU, IO and memory usage ( by PG processes ) --\n>> desperately needed\n\nWhat about using check_postgres.pl and other plugins? Never used that\nthough, so maybe there are issues I'm not aware of.\n\n> I'm not aware of any open-source tool that tracks this information yet.\n> PostgreSQL has no idea what CPU, memory, and I/O is being done by the OS\n> when you execute a query. The operating system knows some of that, but\n> has no idea what the database is doing. You can see a real-time\n> snapshot combining the two pieces of info using the pg_top program:\n> http://ptop.projects.postgresql.org/ but I suspect what you want is a\n> historical record of it instead.\n>\n> Writing something that tracks both at once and logs all the information\n> for later analysis is one of the big missing pieces in PostgreSQL\n> management. I have some ideas for how to build such a thing. But I\n> expect it will take a few months of development time to get right, and I\n> haven't come across someone yet who wants to fund that size of project\n> for this purpose yet.\n\nA long (long long long) time ago I wrote something like this, it's called\npgmonitor and is available here:\n\n http://sourceforge.net/apps/trac/pgmonitor/\n\nBut the development stalled (not a rare thing for projects developed by a\nsingle person) and I'm not quite sure about the right direction. Maybe\nit's worthless, maybe it would be a good starting point - feel free to\ncomment.\n\nTomas",
"msg_date": "Fri, 30 Sep 2011 16:02:46 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
},
{
"msg_contents": "Looks like this is generally an area that can be targeted by some\nbusinesses. Or an open source enthusiast.\nOne centre that captures all the information and produces a report\nbased on it would be a great thing. Especially in cases like mine,\nwhere I have tens of postgresql installations on different hardware\nand with different use patterns (but schemas and queries are the\nsame).\n",
"msg_date": "Fri, 30 Sep 2011 12:53:54 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
},
{
"msg_contents": "EnterpriseDB now has Postgres Enterprise Manager\n(http://enterprisedb.com/products-services-training/products/postgres-enter\nprise-manager) that has some of the information that is being asked for.\nIt has a hot table analysis report that shows the number of scans, rows\nread, etc. Since much of the tool is using the pgAdmin code base, much of\nthis is also available in pgAdmin but PEM will track the statistics at\ngiven intervals and show you trending graphs over time. It's still a very\nnew tool so I'm sure they are working to add new features and have been\nlooking for enhancement suggestions. Of course, it requires a service\ncontract with them to use the tool, but it doesn't cost extra to add the\ntool if you already have a contract with them. It does have a 45 day\nevaluation if you wanted to check it out.\n\nHope that helps.\nBobby\n\nOn 9/30/11 7:53 AM, \"Gregg Jaskiewicz\" <[email protected]> wrote:\n\n>Looks like this is generally an area that can be targeted by some\n>businesses. Or an open source enthusiast.\n>One centre that captures all the information and produces a report\n>based on it would be a great thing. Especially in cases like mine,\n>where I have tens of postgresql installations on different hardware\n>and with different use patterns (but schemas and queries are the\n>same).\n\n",
"msg_date": "Fri, 30 Sep 2011 21:29:06 +0000",
"msg_from": "Bobby Dewitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
},
{
"msg_contents": "pgwatch might also be worth taking a look at:\nhttp://www.cybertec.at/en/postgresql_products/pgwatch-cybertec-enterprise-postgresql-monitor\n\nFernando.-\n\nOn Fri, Sep 30, 2011 at 18:29, Bobby Dewitt <[email protected]> wrote:\n\n> EnterpriseDB now has Postgres Enterprise Manager\n> (\n> http://enterprisedb.com/products-services-training/products/postgres-enter\n> prise-manager) that has some of the information that is being asked for.\n> It has a hot table analysis report that shows the number of scans, rows\n> read, etc. Since much of the tool is using the pgAdmin code base, much of\n> this is also available in pgAdmin but PEM will track the statistics at\n> given intervals and show you trending graphs over time. It's still a very\n> new tool so I'm sure they are working to add new features and have been\n> looking for enhancement suggestions. Of course, it requires a service\n> contract with them to use the tool, but it doesn't cost extra to add the\n> tool if you already have a contract with them. It does have a 45 day\n> evaluation if you wanted to check it out.\n>\n> Hope that helps.\n> Bobby\n>\n> On 9/30/11 7:53 AM, \"Gregg Jaskiewicz\" <[email protected]> wrote:\n>\n> >Looks like this is generally an area that can be targeted by some\n> >businesses. Or an open source enthusiast.\n> >One centre that captures all the information and produces a report\n> >based on it would be a great thing. Especially in cases like mine,\n> >where I have tens of postgresql installations on different hardware\n> >and with different use patterns (but schemas and queries are the\n> >same).\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\npgwatch might also be worth taking a look at: http://www.cybertec.at/en/postgresql_products/pgwatch-cybertec-enterprise-postgresql-monitor\nFernando.-On Fri, Sep 30, 2011 at 18:29, Bobby Dewitt <[email protected]> wrote:\nEnterpriseDB now has Postgres Enterprise Manager\n(http://enterprisedb.com/products-services-training/products/postgres-enter\nprise-manager) that has some of the information that is being asked for.\nIt has a hot table analysis report that shows the number of scans, rows\nread, etc. Since much of the tool is using the pgAdmin code base, much of\nthis is also available in pgAdmin but PEM will track the statistics at\ngiven intervals and show you trending graphs over time. It's still a very\nnew tool so I'm sure they are working to add new features and have been\nlooking for enhancement suggestions. Of course, it requires a service\ncontract with them to use the tool, but it doesn't cost extra to add the\ntool if you already have a contract with them. It does have a 45 day\nevaluation if you wanted to check it out.\n\nHope that helps.\nBobby\n\nOn 9/30/11 7:53 AM, \"Gregg Jaskiewicz\" <[email protected]> wrote:\n\n>Looks like this is generally an area that can be targeted by some\n>businesses. Or an open source enthusiast.\n>One centre that captures all the information and produces a report\n>based on it would be a great thing. Especially in cases like mine,\n>where I have tens of postgresql installations on different hardware\n>and with different use patterns (but schemas and queries are the\n>same).\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 7 Oct 2011 19:01:36 -0300",
"msg_from": "Fernando Hevia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
},
{
"msg_contents": "doe sit include monitoring replicas ?\n\nCheers,\nBen-\n\n\n\n\nOn Fri, Oct 7, 2011 at 3:01 PM, Fernando Hevia <[email protected]> wrote:\n> pgwatch might also be worth taking a look\n> at: http://www.cybertec.at/en/postgresql_products/pgwatch-cybertec-enterprise-postgresql-monitor\n> Fernando.-\n> On Fri, Sep 30, 2011 at 18:29, Bobby Dewitt <[email protected]> wrote:\n>>\n>> EnterpriseDB now has Postgres Enterprise Manager\n>>\n>> (http://enterprisedb.com/products-services-training/products/postgres-enter\n>> prise-manager) that has some of the information that is being asked for.\n>> It has a hot table analysis report that shows the number of scans, rows\n>> read, etc. Since much of the tool is using the pgAdmin code base, much of\n>> this is also available in pgAdmin but PEM will track the statistics at\n>> given intervals and show you trending graphs over time. It's still a very\n>> new tool so I'm sure they are working to add new features and have been\n>> looking for enhancement suggestions. Of course, it requires a service\n>> contract with them to use the tool, but it doesn't cost extra to add the\n>> tool if you already have a contract with them. It does have a 45 day\n>> evaluation if you wanted to check it out.\n>>\n>> Hope that helps.\n>> Bobby\n>>\n>> On 9/30/11 7:53 AM, \"Gregg Jaskiewicz\" <[email protected]> wrote:\n>>\n>> >Looks like this is generally an area that can be targeted by some\n>> >businesses. Or an open source enthusiast.\n>> >One centre that captures all the information and produces a report\n>> >based on it would be a great thing. Especially in cases like mine,\n>> >where I have tens of postgresql installations on different hardware\n>> >and with different use patterns (but schemas and queries are the\n>> >same).\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n",
"msg_date": "Fri, 7 Oct 2011 15:13:14 -0700",
"msg_from": "Ben Ciceron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-9.0 Monitoring System to improve performance"
}
] |
[
{
"msg_contents": "Hello Everyone,\n\nI am back with an issue (likely).\n\nI am trying to create a table in our production database, and is taking 5\nseconds.\n\nWe have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\nTABLE to be faster after ANALYZE finishes ?\n\nOr is there anything serious ?\n\nPlease share your thoughts.\n\nThanks\nVB\n\nHello Everyone,I am back with an issue (likely).I am trying to create a table in our production database, and is taking 5 seconds.We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE TABLE to be faster after ANALYZE finishes ?\nOr is there anything serious ?Please share your thoughts.ThanksVB",
"msg_date": "Wed, 28 Sep 2011 22:36:55 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": Create table taking time"
},
{
"msg_contents": "\n> We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\n> TABLE to be faster after ANALYZE finishes ?\n\nThe only way CREATE TABLE would be affected by VACUUM ANALYZE running is\nif you're saturating either your CPU or your disk 100%. Which you may\nbe, on a low-end system.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 28 Sep 2011 11:55:34 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Create table taking time"
},
{
"msg_contents": "On Wed, Sep 28, 2011 at 11:06 AM, Venkat Balaji <[email protected]> wrote:\n> Hello Everyone,\n> I am back with an issue (likely).\n> I am trying to create a table in our production database, and is taking 5\n> seconds.\n> We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\n> TABLE to be faster after ANALYZE finishes ?\n> Or is there anything serious ?\n> Please share your thoughts.\n\nAre your system tables heavily bloated?\n",
"msg_date": "Wed, 28 Sep 2011 13:14:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Create table taking time"
},
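A minimal sketch, not from the thread: one way to check the catalog bloat Scott asks about is to look at the on-disk size of the pg_catalog relations. It assumes pg_relation_size() and pg_size_pretty() are available (both exist from 8.1 on); unexpectedly large pg_class, pg_attribute or pg_depend relations would point to bloated system tables.

SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS on_disk_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog'
  AND c.relkind = 'r'
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 10;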
{
"msg_contents": "We had performed VACUUM FULL and ANALYZE on the whole database.\n\nYes, the CPU is ticking at 99-100% when i see the top command.\n\nBut, we have 8 CPUs with 6 cores each.\n\nThanks\nVB\n\n\nOn Thu, Sep 29, 2011 at 12:44 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Sep 28, 2011 at 11:06 AM, Venkat Balaji <[email protected]>\n> wrote:\n> > Hello Everyone,\n> > I am back with an issue (likely).\n> > I am trying to create a table in our production database, and is taking 5\n> > seconds.\n> > We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the\n> CREATE\n> > TABLE to be faster after ANALYZE finishes ?\n> > Or is there anything serious ?\n> > Please share your thoughts.\n>\n> Are your system tables heavily bloated?\n>\n\nWe had performed VACUUM FULL and ANALYZE on the whole database.Yes, the CPU is ticking at 99-100% when i see the top command.But, we have 8 CPUs with 6 cores each.\nThanksVBOn Thu, Sep 29, 2011 at 12:44 AM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Sep 28, 2011 at 11:06 AM, Venkat Balaji <[email protected]> wrote:\n\n> Hello Everyone,\n> I am back with an issue (likely).\n> I am trying to create a table in our production database, and is taking 5\n> seconds.\n> We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\n> TABLE to be faster after ANALYZE finishes ?\n> Or is there anything serious ?\n> Please share your thoughts.\n\nAre your system tables heavily bloated?",
"msg_date": "Thu, 29 Sep 2011 13:32:29 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Create table taking time"
},
{
"msg_contents": "Venkat Balaji <[email protected]> wrote:\n \n> We had performed VACUUM FULL and ANALYZE on the whole database.\n \nSince you don't mention REINDEX, it seems likely that you've bloated\nyour indexes (potentially including the indexes on system tables). \nThat could lead to the symptoms you describe.\n \n> Yes, the CPU is ticking at 99-100% when i see the top command.\n> \n> But, we have 8 CPUs with 6 cores each.\n \nIf you've pegged 48 CPUs, it might be interesting to get a profile\nof where the time is being spent. Are you able to run oprofile or\nsomething similar?\n \n-Kevin\n",
"msg_date": "Thu, 29 Sep 2011 08:48:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Create table taking time"
},
{
"msg_contents": "On Wed, Sep 28, 2011 at 12:06 PM, Venkat Balaji <[email protected]> wrote:\n> Hello Everyone,\n> I am back with an issue (likely).\n> I am trying to create a table in our production database, and is taking 5\n> seconds.\n> We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\n> TABLE to be faster after ANALYZE finishes ?\n> Or is there anything serious ?\n\njust ruling out something obvious -- this is vanilla create table, not\nCREATE TABLE AS SELECT...?\n\nalso, what's i/o wait -- are you sure your not i/o bound and waiting\non transaction commit?\n\nmerlin\n",
"msg_date": "Thu, 29 Sep 2011 11:52:46 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Create table taking time"
},
{
"msg_contents": "I did not calculate the IO behavior of the server.\n\nWhat i noticed for the logs is that, the checkpoints are occurring too\nfrequently each checkpoint is taking up to minimum 80 - 200+ seconds to\ncomplete write and checkpoint sync is taking 80 - 200+ seconds to sync,\nwhich is i believe IO intensive.\n\nThanks\nVB\n\n\n\nOn Thu, Sep 29, 2011 at 10:22 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Wed, Sep 28, 2011 at 12:06 PM, Venkat Balaji <[email protected]>\n> wrote:\n> > Hello Everyone,\n> > I am back with an issue (likely).\n> > I am trying to create a table in our production database, and is taking 5\n> > seconds.\n> > We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the\n> CREATE\n> > TABLE to be faster after ANALYZE finishes ?\n> > Or is there anything serious ?\n>\n> just ruling out something obvious -- this is vanilla create table, not\n> CREATE TABLE AS SELECT...?\n>\n> also, what's i/o wait -- are you sure your not i/o bound and waiting\n> on transaction commit?\n>\n> merlin\n>\n\nI did not calculate the IO behavior of the server.What i noticed for the logs is that, the checkpoints are occurring too frequently each checkpoint is taking up to minimum 80 - 200+ seconds to complete write and checkpoint sync is taking 80 - 200+ seconds to sync, which is i believe IO intensive.\nThanksVBOn Thu, Sep 29, 2011 at 10:22 PM, Merlin Moncure <[email protected]> wrote:\nOn Wed, Sep 28, 2011 at 12:06 PM, Venkat Balaji <[email protected]> wrote:\n\n> Hello Everyone,\n> I am back with an issue (likely).\n> I am trying to create a table in our production database, and is taking 5\n> seconds.\n> We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\n> TABLE to be faster after ANALYZE finishes ?\n> Or is there anything serious ?\n\njust ruling out something obvious -- this is vanilla create table, not\nCREATE TABLE AS SELECT...?\n\nalso, what's i/o wait -- are you sure your not i/o bound and waiting\non transaction commit?\n\nmerlin",
"msg_date": "Fri, 30 Sep 2011 10:52:30 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Create table taking time"
},
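A hedged sketch, not from the thread, of how the checkpoint behaviour described above can be quantified. The pg_stat_bgwriter view is available from 8.3 on; a checkpoints_req count that grows much faster than checkpoints_timed usually means checkpoint_segments is too small for the write load, and checkpoint_completion_target controls how far each checkpoint's writes are spread out.

SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_backend
FROM pg_stat_bgwriter;

SHOW checkpoint_segments;
SHOW checkpoint_timeout;
SHOW checkpoint_completion_target;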
{
"msg_contents": "CPU load was hitting 100% constantly with high IOs.\n\nWe tuned some queries to decrease the CPU usage and everything is normal\nnow.\n\nThanks\nVB\n\nOn Fri, Sep 30, 2011 at 10:52 AM, Venkat Balaji <[email protected]>wrote:\n\n> I did not calculate the IO behavior of the server.\n>\n> What i noticed for the logs is that, the checkpoints are occurring too\n> frequently each checkpoint is taking up to minimum 80 - 200+ seconds to\n> complete write and checkpoint sync is taking 80 - 200+ seconds to sync,\n> which is i believe IO intensive.\n>\n> Thanks\n> VB\n>\n>\n>\n> On Thu, Sep 29, 2011 at 10:22 PM, Merlin Moncure <[email protected]>wrote:\n>\n>> On Wed, Sep 28, 2011 at 12:06 PM, Venkat Balaji <[email protected]>\n>> wrote:\n>> > Hello Everyone,\n>> > I am back with an issue (likely).\n>> > I am trying to create a table in our production database, and is taking\n>> 5\n>> > seconds.\n>> > We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the\n>> CREATE\n>> > TABLE to be faster after ANALYZE finishes ?\n>> > Or is there anything serious ?\n>>\n>> just ruling out something obvious -- this is vanilla create table, not\n>> CREATE TABLE AS SELECT...?\n>>\n>> also, what's i/o wait -- are you sure your not i/o bound and waiting\n>> on transaction commit?\n>>\n>> merlin\n>>\n>\n>\n\nCPU load was hitting 100% constantly with high IOs.We tuned some queries to decrease the CPU usage and everything is normal now.ThanksVB\nOn Fri, Sep 30, 2011 at 10:52 AM, Venkat Balaji <[email protected]> wrote:\nI did not calculate the IO behavior of the server.What i noticed for the logs is that, the checkpoints are occurring too frequently each checkpoint is taking up to minimum 80 - 200+ seconds to complete write and checkpoint sync is taking 80 - 200+ seconds to sync, which is i believe IO intensive.\nThanksVBOn Thu, Sep 29, 2011 at 10:22 PM, Merlin Moncure <[email protected]> wrote:\nOn Wed, Sep 28, 2011 at 12:06 PM, Venkat Balaji <[email protected]> wrote:\n\n\n> Hello Everyone,\n> I am back with an issue (likely).\n> I am trying to create a table in our production database, and is taking 5\n> seconds.\n> We have executed VACUUM FULL and yet to run ANALYZE. Can i expect the CREATE\n> TABLE to be faster after ANALYZE finishes ?\n> Or is there anything serious ?\n\njust ruling out something obvious -- this is vanilla create table, not\nCREATE TABLE AS SELECT...?\n\nalso, what's i/o wait -- are you sure your not i/o bound and waiting\non transaction commit?\n\nmerlin",
"msg_date": "Fri, 30 Sep 2011 16:50:51 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Create table taking time"
}
] |
[
{
"msg_contents": "Hello Everyone,\n\nI have been working on PostgreSQL for quite a while (2 yrs) now.\n\nI have got \"PostgreSQL 9.0 High Performance\" book today and quite excited to\ngo through it.\n\nPlease let me know any source where i can get more books on PG, I am\nespecially looking for books on PG internals, architecture, Backup &\nRecovery and HA.\n\n(This will help me do thorough research and testing on production. I would\nlike to come back with documentation on results, learning's, findings and\nsuggestions).\n\nLooking forward for the information.\n\nRegards,\nVB\n\nHello Everyone,I have been working on PostgreSQL for quite a while (2 yrs) now.I have got \"PostgreSQL 9.0 High Performance\" book today and quite excited to go through it.\nPlease let me know any source where i can get more books on PG, I am especially looking for books on PG internals, architecture, Backup & Recovery and HA.(This will help me do thorough research and testing on production. I would like to come back with documentation on results, learning's, findings and suggestions).\nLooking forward for the information.Regards,VB",
"msg_date": "Wed, 28 Sep 2011 22:47:44 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": Looking for PG Books"
},
{
"msg_contents": "\n> Please let me know any source where i can get more books on PG, I am\n> especially looking for books on PG internals, architecture, Backup &\n> Recovery and HA.\n\nI'm afraid that you have to rely on the primary docs for now. The books\nwhich are out are a bit old (version 8.0 and 8.1) and do not cover\ninternals.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 28 Sep 2011 11:54:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Looking for PG Books"
}
] |
[
{
"msg_contents": "Hola!!\n\nTengo un serio inconveniente, estoy trabajando con postgresql 8.2 y tomcat 5.5.20 en un equipo con Centos 5.3\n\nY se me presenta un problema con una consulta, si la ejecuto le toma alrededores de 2.6 segundos la ejecución.\n\npero en ocaciones se queda pegada esa consulta y luego para cada usuario (desde la interfaz web) que ejecute la misma consulta\nse\n queda en espera pero la primera nunca termina, luego consume casi todo \nel pool de conexiones y el equipo en general se coloca muy lento\nal hacer un top encuentro que el uso de memoria es cerca al 7% y el de CPU del 3%, pero aún así las consultas siguen pegadas.\n\nlo que no sé es si postgresql en la version 8.2 bloquee consultas (SELECT)\n\nAgradezco la colaboración prestada.\n\n\nNumael Vacca Duran\n\n\n\n \t\t \t \t\t \n\n\n\n\n\nHola!!Tengo un serio inconveniente, estoy trabajando con postgresql 8.2 y tomcat 5.5.20 en un equipo con Centos 5.3Y se me presenta un problema con una consulta, si la ejecuto le toma alrededores de 2.6 segundos la ejecución.pero en ocaciones se queda pegada esa consulta y luego para cada usuario (desde la interfaz web) que ejecute la misma consultase\n queda en espera pero la primera nunca termina, luego consume casi todo \nel pool de conexiones y el equipo en general se coloca muy lentoal hacer un top encuentro que el uso de memoria es cerca al 7% y el de CPU del 3%, pero aún así las consultas siguen pegadas.lo que no sé es si postgresql en la version 8.2 bloquee consultas (SELECT)Agradezco la colaboración prestada.Numael Vacca Duran",
"msg_date": "Thu, 29 Sep 2011 12:43:37 +0000",
"msg_from": "Numael Vacca Duran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select se bloquea"
},
{
"msg_contents": "2011/9/29 Numael Vacca Duran <[email protected]>:\n>\n> Hola!!\n>\n> Tengo un serio inconveniente, estoy trabajando con postgresql 8.2 y tomcat\n> 5.5.20 en un equipo con Centos 5.3\n\n1- 8.2 es viejito\n2- Hacen falta muchos más datos. Las consultas en sí, un EXPLAIN y\nEXLAIN ANALYZE de las consultas, vendrían bien para saber de qué\nhablás\n3- También detalles del hardare,\n4- y de la estructura y tamaño de tu base de datos\n",
"msg_date": "Thu, 29 Sep 2011 16:21:40 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select se bloquea"
}
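A sketch of how to tell whether the stuck SELECTs are waiting on locks or simply still executing; the column names procpid, waiting and current_query are the 8.2-era names (they were renamed in later releases), so treat this as an assumption to adapt to the installed version.

SELECT procpid, usename, waiting, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;

SELECT l.pid, l.mode, l.granted, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;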
] |
[
{
"msg_contents": "Hey,\n\n Is there a suggested number of child tables for table partitioning, I ran a stress test on a master\ntable (with 800 thousand rows), trying to create 500,000 child tables for it, each child table has 2\nindexes and 3 constraints (Primary key and foreign key). I wrote a script to do it: after 17 hours,\nonly 7600 child tables are created.\n\n The script is still running, I can see that one child tables is created about every minute. The CPU\nUsage is 100%. The query speed is really slow now (I set constraint_exclusion=on).\n\n This stress test is for the partition plan I'm going to make, since we don't want to add another\nField just for partitioning. So is there something I did wrong? Or postgres cannot handle too many\nChild tables? That way I need to come up with a new partition plan.\n\n The system is 32-bit Linux, dual core, 4G memory. Postgres version is 8.1.21.\n\nThanks,\nJohn\n\n\n\n\n\n\n\n\n\n\n\n\nHey,\n \n Is there a suggested number of child tables for table partitioning, I ran a stress test on a master\n\ntable (with 800 thousand rows), trying to create 500,000 child tables for it, each child table has 2\nindexes and 3 constraints (Primary key and foreign key). I wrote a script to do it: after 17 hours,\n\nonly 7600 child tables are created.\n \n The script is still running, I can see that one child tables is created about every minute. The CPU\nUsage is 100%. The query speed is really slow now (I set constraint_exclusion=on).\n\n \n This stress test is for the partition plan I’m going to make, since we don’t want to add another\nField just for partitioning. So is there something I did wrong? Or postgres cannot handle too many\nChild tables? That way I need to come up with a new partition plan.\n \n The system is 32-bit Linux, dual core, 4G memory. Postgres version is 8.1.21.\n \nThanks,\nJohn",
"msg_date": "Thu, 29 Sep 2011 14:14:35 +0000",
"msg_from": "Jian Shi <[email protected]>",
"msg_from_op": true,
"msg_subject": "the number of child tables --table partitioning"
},
{
"msg_contents": "Jian Shi <[email protected]> wrote:\n \n[moving the last sentence to the top]\n \n> The system is 32-bit Linux, dual core, 4G memory. Postgres version\n> is 8.1.21.\n \nVersion 8.1 is out of support and doesn't perform nearly as well as\nmodern versions.\n \nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n \nThe system you're talking about is the same as what I bought as a\nhome computer four years ago. You don't mention your disk system,\nbut that doesn't sound like server-class hardware to me.\n \n> Is there a suggested number of child tables for table\n> partitioning,\n \nGenerally, don't go over about 100 partitions per table.\n \n> I ran a stress test on a master table (with 800 thousand rows),\n> trying to create 500,000 child tables for it, each child table has\n> 2 indexes and 3 constraints (Primary key and foreign key).\n \nThat probably at least 5 disk files per table, to say nothing of the\nsystem table entries and catalog caching. Some file systems really\nbog down with millions of disk files in a single subdirectory.\n \nThat is never going to work on the hardware you cite, and is a very,\nvery, very bad design on any hardware.\n \n> This stress test is for the partition plan I'm going to make,\n> since we don't want to add another Field just for partitioning.\n \nWhy not?\n \n-Kevin\n",
"msg_date": "Thu, 29 Sep 2011 10:08:53 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the number of child tables --table partitioning"
},
{
"msg_contents": "Hi,\n\nOn 30 September 2011 01:08, Kevin Grittner <[email protected]> wrote:\n>> Is there a suggested number of child tables for table\n>> partitioning,\n>\n> Generally, don't go over about 100 partitions per table.\n\nHaving 365 partitions per table is fine...\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Fri, 30 Sep 2011 07:12:41 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the number of child tables --table partitioning"
},
{
"msg_contents": "2011/9/29 Ondrej Ivanič <[email protected]>:\n> Hi,\n>\n> On 30 September 2011 01:08, Kevin Grittner <[email protected]> wrote:\n>>> Is there a suggested number of child tables for table\n>>> partitioning,\n>>\n>> Generally, don't go over about 100 partitions per table.\n>\n> Having 365 partitions per table is fine...\n\nyeah -- the system was certainly designed to support 'dozens to\nhundreds', but 'hundreds of thousands' is simply not realistic. any\nmeasurable benefit gained from partitioning is going to be var\nexceeded by the database having to track so many tables.\n\nbtw, partitioning for purposes of performance is a dubious strategy\nunless you can leverage non-uniform access patterns of the data or do\nother tricks that allow simplification of structures (like removing\n'company_id' from all tables and indexes because it's implied by the\npartition itself).\n\nmerlin\n",
"msg_date": "Fri, 30 Sep 2011 12:01:27 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the number of child tables --table partitioning"
},
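For reference, a minimal sketch of the inheritance-based partitioning being discussed, following the standard pattern of the era; the table and column names are made up. Each child carries a CHECK constraint so that constraint_exclusion can prune partitions that cannot match the WHERE clause.

CREATE TABLE measurements (id bigint, logdate date, payload text);

CREATE TABLE measurements_2011_09 (
    CHECK (logdate >= DATE '2011-09-01' AND logdate < DATE '2011-10-01')
) INHERITS (measurements);

CREATE TABLE measurements_2011_10 (
    CHECK (logdate >= DATE '2011-10-01' AND logdate < DATE '2011-11-01')
) INHERITS (measurements);

SET constraint_exclusion = on;

-- only measurements_2011_10 (and the empty parent) should be scanned
EXPLAIN SELECT count(*) FROM measurements WHERE logdate >= DATE '2011-10-05';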
{
"msg_contents": "Em 30-09-2011 14:01, Merlin Moncure escreveu:\n> 2011/9/29 Ondrej Ivanič<[email protected]>:\n>> Hi,\n>>\n>> On 30 September 2011 01:08, Kevin Grittner<[email protected]> wrote:\n>>>> Is there a suggested number of child tables for table\n>>>> partitioning,\n>>> Generally, don't go over about 100 partitions per table.\n>> Having 365 partitions per table is fine...\n> yeah -- the system was certainly designed to support 'dozens to\n> hundreds', but 'hundreds of thousands' is simply not realistic. any\n> measurable benefit gained from partitioning is going to be var\n> exceeded by the database having to track so many tables.\n>\n> btw, partitioning for purposes of performance is a dubious strategy\n> unless you can leverage non-uniform access patterns of the data or do\n> other tricks that allow simplification of structures (like removing\n> 'company_id' from all tables and indexes because it's implied by the\n> partition itself).\n>\n> merlin\n>\n\nCan we see the transparent table partitioningimplemented in Postgres 9.2 \nversion before the end of the world in 2012? ;)\nToday, it is very difficult to maintain table partitioning schemes even \nwith small number of partitions.\n\nAnyway, congrats for the superb 9.1 version!\n",
"msg_date": "Fri, 30 Sep 2011 14:13:37 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the number of child tables --table partitioning"
}
] |
[
{
"msg_contents": "All,\n\nHere's a case which it seems like we ought to be able to optimize for:\n\ndatamart-# ORDER BY txn_timestamp DESC\ndatamart-# LIMIT 200\ndatamart-# OFFSET 6000;\n\n QUERY PLAN\n\n---------------------------\n Limit (cost=560529.82..560529.82 rows=1 width=145) (actual\ntime=22419.760..22419.760 rows=0 loops=1)\n -> Sort (cost=560516.17..560529.82 rows=5459 width=145) (actual\ntime=22418.076..22419.144 rows=5828 loops=1)\n Sort Key: lh.txn_timestamp\n Sort Method: quicksort Memory: 1744kB\n -> Nested Loop Left Join (cost=0.00..560177.32 rows=5459\nwidth=145) (actual time=4216.898..22398.658 rows=5828 loops=1)\n -> Nested Loop Left Join (cost=0.00..88186.22 rows=5459\nwidth=135) (actual time=4216.747..19250.891 rows=5828 loops=1)\n -> Nested Loop Left Join (cost=0.00..86657.26\nrows=5459 width=124) (actual time=4216.723..19206.461 rows=5828 loops=1)\n\n... it seems like, if we get as far as the sort and the executors knows\nthat there are less rows than the final offset, it ought to be able to\nskip the final sort.\n\nIs there some non-obvious reason which would make this kind of\noptimization difficult? Doesn't the executor know at that point how\nmany rows it has?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 29 Sep 2011 17:39:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shortcutting too-large offsets?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Here's a case which it seems like we ought to be able to optimize for:\n> [ offset skips all the output of a sort node ]\n> Is there some non-obvious reason which would make this kind of\n> optimization difficult? Doesn't the executor know at that point how\n> many rows it has?\n\nIn principle, yeah, we could make it do that, but it seems like a likely\nsource of maintenance headaches. This example is not exactly compelling\nenough to make me want to do it. Large OFFSETs are always going to be\nproblematic from a performance standpoint, and the fact that we could\nshort-circuit this one corner case isn't really going to make them much\nmore usable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Sep 2011 10:36:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortcutting too-large offsets? "
},
{
"msg_contents": "It may be difficult, i think. When unsorted recordset is stored in\ntemp table, number of records may be saved and used. Otherwise it is\nunknown.\n\n2011/9/30, Josh Berkus <[email protected]>:\n> All,\n>\n> Here's a case which it seems like we ought to be able to optimize for:\n>\n> datamart-# ORDER BY txn_timestamp DESC\n> datamart-# LIMIT 200\n> datamart-# OFFSET 6000;\n>\n> QUERY PLAN\n>\n> ---------------------------\n> Limit (cost=560529.82..560529.82 rows=1 width=145) (actual\n> time=22419.760..22419.760 rows=0 loops=1)\n> -> Sort (cost=560516.17..560529.82 rows=5459 width=145) (actual\n> time=22418.076..22419.144 rows=5828 loops=1)\n> Sort Key: lh.txn_timestamp\n> Sort Method: quicksort Memory: 1744kB\n> -> Nested Loop Left Join (cost=0.00..560177.32 rows=5459\n> width=145) (actual time=4216.898..22398.658 rows=5828 loops=1)\n> -> Nested Loop Left Join (cost=0.00..88186.22 rows=5459\n> width=135) (actual time=4216.747..19250.891 rows=5828 loops=1)\n> -> Nested Loop Left Join (cost=0.00..86657.26\n> rows=5459 width=124) (actual time=4216.723..19206.461 rows=5828 loops=1)\n>\n> ... it seems like, if we get as far as the sort and the executors knows\n> that there are less rows than the final offset, it ought to be able to\n> skip the final sort.\n>\n> Is there some non-obvious reason which would make this kind of\n> optimization difficult? Doesn't the executor know at that point how\n> many rows it has?\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Fri, 30 Sep 2011 17:08:15 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortcutting too-large offsets?"
},
{
"msg_contents": "Tom,\n\n> In principle, yeah, we could make it do that, but it seems like a likely\n> source of maintenance headaches. This example is not exactly compelling\n> enough to make me want to do it. Large OFFSETs are always going to be\n> problematic from a performance standpoint, and the fact that we could\n> short-circuit this one corner case isn't really going to make them much\n> more usable.\n\nIt's not that uncommon of a corner case though; it's one which happens\nall the time in webapps which paginate. People just have to ask for a\npage after the end. It's really a question of how simple the code to\nmake the optimization would be; if it would be a 5-line patch, then it's\nworth it; if it would be a 110-line refactor, no.\n\nLemme see if I can figure it out ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 30 Sep 2011 11:44:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shortcutting too-large offsets?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> In principle, yeah, we could make it do that, but it seems like a likely\n>> source of maintenance headaches. This example is not exactly compelling\n>> enough to make me want to do it. Large OFFSETs are always going to be\n>> problematic from a performance standpoint, and the fact that we could\n>> short-circuit this one corner case isn't really going to make them much\n>> more usable.\n\n> It's not that uncommon of a corner case though; it's one which happens\n> all the time in webapps which paginate. People just have to ask for a\n> page after the end. It's really a question of how simple the code to\n> make the optimization would be; if it would be a 5-line patch, then it's\n> worth it; if it would be a 110-line refactor, no.\n\nNo, it's really a question of whether it's worth any lines at all,\nand I remain unconvinced. If someone asks for the last page, or any\npage near the end, it'll take just about the same amount of time as\nasking for a page past the end. So any app like this is going to have\nsucky performance, and kluging the corner case isn't going to help much.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Sep 2011 14:56:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortcutting too-large offsets? "
},
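A hedged aside on the pagination pattern itself, not something proposed in the thread: the usual way to sidestep a large OFFSET is keyset pagination, where the application remembers the sort key of the last row it displayed and filters on it. The table name below is hypothetical (only the alias lh appears in the plan), and the timestamp literal is a placeholder for the last value already shown.

-- first page
SELECT *
FROM log_history lh
ORDER BY lh.txn_timestamp DESC
LIMIT 200;

-- later pages: filter on the last txn_timestamp already displayed
SELECT *
FROM log_history lh
WHERE lh.txn_timestamp < '2011-09-29 17:39:00'
ORDER BY lh.txn_timestamp DESC
LIMIT 200;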
{
"msg_contents": "On Fri, Sep 30, 2011 at 2:56 PM, Tom Lane <[email protected]> wrote:\n> Josh Berkus <[email protected]> writes:\n>>> In principle, yeah, we could make it do that, but it seems like a likely\n>>> source of maintenance headaches. This example is not exactly compelling\n>>> enough to make me want to do it. Large OFFSETs are always going to be\n>>> problematic from a performance standpoint, and the fact that we could\n>>> short-circuit this one corner case isn't really going to make them much\n>>> more usable.\n>\n>> It's not that uncommon of a corner case though; it's one which happens\n>> all the time in webapps which paginate. People just have to ask for a\n>> page after the end. It's really a question of how simple the code to\n>> make the optimization would be; if it would be a 5-line patch, then it's\n>> worth it; if it would be a 110-line refactor, no.\n>\n> No, it's really a question of whether it's worth any lines at all,\n> and I remain unconvinced. If someone asks for the last page, or any\n> page near the end, it'll take just about the same amount of time as\n> asking for a page past the end. So any app like this is going to have\n> sucky performance, and kluging the corner case isn't going to help much.\n\nIt looks to me like it took 22.3 seconds to do the nested loop and\nthen 22.4 seconds to do the nested loop plus the sort. So the sort\nitself only took 100 ms, which is hardly worth getting excited about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 27 Oct 2011 14:46:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shortcutting too-large offsets?"
}
] |
[
{
"msg_contents": "I recently had need of an \"array_except\" function but couldn't find\nany good/existing examples. Based off the neat \"array_intersect\"\nfunction at http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Intersection_of_arrays,\nI put together an \"array_except\" version to return the array elements\nthat are not found in both arrays.\nCan anyone think of a faster version of this function? Maybe in C?\nThe generate_series example takes about 3.5s on the dev db I'm testing\non, which isn't too bad (for my needs at least).\n\ncreate or replace function array_except(anyarray,anyarray) returns\nanyarray as $$\nselect array_agg(elements)\nfrom (\n (select unnest($1) except select unnest($2))\n union\n (select unnest($2) except select unnest($1))\n ) as r (elements)\n$$ language sql strict immutable;\n\nselect array_except('{this,is,a,test}'::text[],'{also,part,of,a,test,run}'::text[]);\n\nselect array_to_relation(arr)\nfrom array_except( (select array_agg(n) from\ngenerate_series(1,1000000,1) as n),\n (select array_agg(n) from\ngenerate_series(5,1000005,1) as n)\n ) as arr;\n\nI'm testing on 9.0.4\n",
"msg_date": "Thu, 29 Sep 2011 19:32:43 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": true,
"msg_subject": "array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Thursday, September 29, 2011, bricklen <[email protected]> wrote:\n> I recently had need of an \"array_except\" function but couldn't find\n> any good/existing examples. Based off the neat \"array_intersect\"\n> function at\nhttp://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Intersection_of_arrays\n,\n> I put together an \"array_except\" version to return the array elements\n> that are not found in both arrays.\n> Can anyone think of a faster version of this function? Maybe in C?\n> The generate_series example takes about 3.5s on the dev db I'm testing\n> on, which isn't too bad (for my needs at least).\n>\n> create or replace function array_except(anyarray,anyarray) returns\n> anyarray as $$\n> select array_agg(elements)\n> from (\n> (select unnest($1) except select unnest($2))\n> union\n> (select unnest($2) except select unnest($1))\n> ) as r (elements)\n> $$ language sql strict immutable;\n>\n> select\narray_except('{this,is,a,test}'::text[],'{also,part,of,a,test,run}'::text[]);\n>\n> select array_to_relation(arr)\n> from array_except( (select array_agg(n) fro>\ngenerate_series(1,1000000,1) as n),\n> (select array_agg(n) from\n> generate_series(5,1000005,1) as n)\n> ) as arr;\n>\n> I'm testing on 9.0.4\n>r\n> --\n> Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n*) Prefer union all to union\n*) prefer array constructor to array_agg when not grouping.\n*) perhaps consider not reusing 'except' name with different semantic\nmeaning\n\nWell done\nmerlin (on phone & in bed)\n\nOn Thursday, September 29, 2011, bricklen <[email protected]> wrote:> I recently had need of an \"array_except\" function but couldn't find> any good/existing examples. Based off the neat \"array_intersect\"\n> function at http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Intersection_of_arrays,> I put together an \"array_except\" version to return the array elements\n> that are not found in both arrays.> Can anyone think of a faster version of this function? Maybe in C?> The generate_series example takes about 3.5s on the dev db I'm testing> on, which isn't too bad (for my needs at least).\n>> create or replace function array_except(anyarray,anyarray) returns> anyarray as $$> select array_agg(elements)> from (> (select unnest($1) except select unnest($2))> union\n> (select unnest($2) except select unnest($1))> ) as r (elements)> $$ language sql strict immutable;>> select array_except('{this,is,a,test}'::text[],'{also,part,of,a,test,run}'::text[]);\n>> select array_to_relation(arr)> from array_except( (select array_agg(n) fro> generate_series(1,1000000,1) as n),> (select array_agg(n) from> generate_series(5,1000005,1) as n)\n> ) as arr;>> I'm testing on 9.0.4>r> --> Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance*) Prefer union all to union *) prefer array constructor to array_agg when not grouping.\n*) perhaps consider not reusing 'except' name with different semantic meaning Well donemerlin (on phone & in bed)",
"msg_date": "Thu, 29 Sep 2011 22:08:19 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Thu, Sep 29, 2011 at 8:08 PM, Merlin Moncure <[email protected]> wrote:\n> *) Prefer union all to union\n> *) prefer array constructor to array_agg when not grouping.\n> *) perhaps consider not reusing 'except' name with different semantic\n> meaning\n>\n> Well done\n> merlin (on phone & in bed)\n\nHi Merlin,\n\nThanks for the tips. I have implemented suggestion 1 & 2, and that has\nshaved about 1/2 of a second off of the generate_series example below\n(3.52s -> 3.48s)\n\nDo you have a suggestion for a better name? I considered array_unique,\narray_distinct etc, but those don't really describe what is being\nreturned IMO. Something that conjures up the \"return elements that are\nnot common to both arrays\" would be nice.\n\ncreate or replace function array_except(anyarray,anyarray) returns\nanyarray as $$\nselect ARRAY(\n(\nselect r.*\nfrom (\n (select unnest($1) except select unnest($2))\n union all\n (select unnest($2) except select unnest($1))\n ) as r (elements)\n))\n$$ language sql strict immutable;\n\n\nselect array_except('{this,is,a,test}'::text[],'{also,part,of,a,test}'::text[]);\n\nselect array_to_relation(arr)\nfrom array_except( (select array_agg(n) from\ngenerate_series(1,1000000,1) as n) , (select array_agg(n) from\ngenerate_series(5,1000005,1) as n) ) as arr;\n\n\nMore improvement suggestions gladly accepted!\n",
"msg_date": "Thu, 29 Sep 2011 20:27:44 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "Since you are using except and not except all, you are not looking at \narrays with duplicates.\nFor this case next function what the fastest for me:\n\ncreate or replace function array_except2(anyarray,anyarray) returns\nanyarray as $$\nselect ARRAY(\n(\nselect r.elements\nfrom (\n (select 1,unnest($1))\n union all\n (select 2,unnest($2))\n ) as r (arr, elements)\n group by 1\n having min(arr)=max(arr)\n))\n$$ language sql strict immutable;\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Fri, 30 Sep 2011 15:23:11 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Fri, Sep 30, 2011 at 5:23 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> Since you are using except and not except all, you are not looking at arrays\n> with duplicates.\n> For this case next function what the fastest for me:\n>\n> create or replace function array_except2(anyarray,anyarray) returns\n> anyarray as $$\n> select ARRAY(\n> (\n> select r.elements\n> from (\n> (select 1,unnest($1))\n> union all\n> (select 2,unnest($2))\n> ) as r (arr, elements)\n> group by 1\n> having min(arr)=max(arr)\n> ))\n> $$ language sql strict immutable;\n\nNice! Your version shaved almost a full second off, now 2.5s from 3.4s\nit was earlier.\n",
"msg_date": "Fri, 30 Sep 2011 07:28:36 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Fri, Sep 30, 2011 at 5:23 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> Since you are using except and not except all, you are not looking at arrays\n> with duplicates.\n> For this case next function what the fastest for me:\n>\n> create or replace function array_except2(anyarray,anyarray) returns\n> anyarray as $$\n> select ARRAY(\n> (\n> select r.elements\n> from (\n> (select 1,unnest($1))\n> union all\n> (select 2,unnest($2))\n> ) as r (arr, elements)\n> group by 1\n> having min(arr)=max(arr)\n> ))\n> $$ language sql strict immutable;\n>\n\nI've been informed that this type of operation is called \"symmetric\ndifference\"[1], and can be represented by A ∆ B. A couple of\nalternative names were proposed, \"array_symmetric_difference\" and\n\"array_xor\".\nDoes anyone have a preference for the name? I assume that this\nfunction might potentially be used by others now that it is in the pg\nlists, so might as well give it an appropriate name now.\nIs this something that could be written in C to make it faster (I don't know C)\n\n[1] http://en.wikipedia.org/wiki/Symmetric_difference\n",
"msg_date": "Fri, 30 Sep 2011 11:07:54 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "\nOn Sep 30, 2011, at 12:07 PM, bricklen wrote:\n\n> I've been informed that this type of operation is called \"symmetric\n> difference\"[1], and can be represented by A ∆ B. A couple of\n> alternative names were proposed, \"array_symmetric_difference\" and\n> \"array_xor\".\n> Does anyone have a preference for the name? I assume that this\n> function might potentially be used by others now that it is in the pg\n> lists, so might as well give it an appropriate name now.\n> Is this something that could be written in C to make it faster (I don't know C)\n\nFWIW, speaking as somebody who has no need of this function, \"array_xor\" is a pretty clear name that indicates what's going to happen.",
"msg_date": "Fri, 30 Sep 2011 14:15:43 -0600",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Fri, Sep 30, 2011 at 3:15 PM, Ben Chobot <[email protected]> wrote:\n>\n> On Sep 30, 2011, at 12:07 PM, bricklen wrote:\n>\n>> I've been informed that this type of operation is called \"symmetric\n>> difference\"[1], and can be represented by A ∆ B. A couple of\n>> alternative names were proposed, \"array_symmetric_difference\" and\n>> \"array_xor\".\n>> Does anyone have a preference for the name? I assume that this\n>> function might potentially be used by others now that it is in the pg\n>> lists, so might as well give it an appropriate name now.\n>> Is this something that could be written in C to make it faster (I don't know C)\n>\n> FWIW, speaking as somebody who has no need of this function, \"array_xor\" is a pretty clear name that indicates what's going to happen.\n\n+1 on this -- was going to suggest until you beat me to it. I also\nfor the record really think the array_ prefix on array handling\nfunctions is pretty silly since we support overloading -- greatly\nprefer unnest() to array_unnest() etc. So, how about xor()?\n\nmerlin\n",
"msg_date": "Fri, 30 Sep 2011 16:12:07 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Fri, Sep 30, 2011 at 2:12 PM, Merlin Moncure <[email protected]> wrote:\n>> FWIW, speaking as somebody who has no need of this function, \"array_xor\" is a pretty clear name that indicates what's going to happen.\n>\n> +1 on this -- was going to suggest until you beat me to it. I also\n> for the record really think the array_ prefix on array handling\n> functions is pretty silly since we support overloading -- greatly\n> prefer unnest() to array_unnest() etc. So, how about xor()?\n\nMakes sense, in light of your comment about overloading.\n",
"msg_date": "Fri, 30 Sep 2011 16:43:56 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On 01/10/11 01:23, Vitalii Tymchyshyn wrote:\n> Since you are using except and not except all, you are not looking at \n> arrays with duplicates.\n> For this case next function what the fastest for me:\n>\n> create or replace function array_except2(anyarray,anyarray) returns\n> anyarray as $$\n> select ARRAY(\n> (\n> select r.elements\n> from (\n> (select 1,unnest($1))\n> union all\n> (select 2,unnest($2))\n> ) as r (arr, elements)\n> group by 1\n> having min(arr)=max(arr)\n> ))\n> $$ language sql strict immutable;\n>\n> Best regards, Vitalii Tymchyshyn\n>\nVery neat!\n\nI could see that this function could trivially be modified to handle 3 \narrays.\n\nQUESTION: Could this be modified to take an arbitrary number of arrays?\n",
"msg_date": "Tue, 04 Oct 2011 20:16:39 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
},
{
"msg_contents": "On Tue, Oct 4, 2011 at 2:16 AM, Gavin Flower\n<[email protected]> wrote:\n> On 01/10/11 01:23, Vitalii Tymchyshyn wrote:\n>>\n>> Since you are using except and not except all, you are not looking at\n>> arrays with duplicates.\n>> For this case next function what the fastest for me:\n>>\n>> create or replace function array_except2(anyarray,anyarray) returns\n>> anyarray as $$\n>> select ARRAY(\n>> (\n>> select r.elements\n>> from (\n>> (select 1,unnest($1))\n>> union all\n>> (select 2,unnest($2))\n>> ) as r (arr, elements)\n>> group by 1\n>> having min(arr)=max(arr)\n>> ))\n>> $$ language sql strict immutable;\n>>\n>> Best regards, Vitalii Tymchyshyn\n>>\n> Very neat!\n>\n> I could see that this function could trivially be modified to handle 3\n> arrays.\n>\n> QUESTION: Could this be modified to take an arbitrary number of arrays?\n\nhm good question. not in sql aiui, because variadic arguments are\npushed through as arrays, and there is no such thing in postgres as a\n'anyarray[]' (or any array of array for that matter).\n\nin c, you get to do more detail processing of variadic arguments, so\nyou could probably rig something that way -- but the implementation\nwould be completely different.\n\nalternate way to avoid the variadic problem would be to make an xor()\naggregate which chains the arrays down using the 'all sql' method\nposted above -- not as fast maybe, but pretty darn cool if you ask me.\n\nmerlin\n",
"msg_date": "Tue, 4 Oct 2011 11:10:25 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array_except -- Find elements that are not common to both arrays"
}
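A minimal, untested sketch of the aggregate idea mentioned above, folding the two-array function from earlier in the thread over any number of input arrays. Note the semantics: chaining pairwise symmetric difference keeps the elements that appear in an odd number of the inputs, which is the xor behaviour being discussed, not 'elements present in exactly one array'.

CREATE AGGREGATE xor_agg (anyarray) (
    sfunc = array_except2,   -- the two-argument function posted earlier
    stype = anyarray
);

-- usage
SELECT xor_agg(a)
FROM (VALUES ('{1,2,3}'::int[]),
             ('{2,3,4}'::int[]),
             ('{4,5}'::int[])) AS v(a);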
] |
[
{
"msg_contents": "\nI have a 710 (Lyndonville) SSD in a test server. Ultimately we'll run\ncapacity tests using our application (which in turn uses PG), but it'll\ntake a while to get those set up. In the meantime, I'd be happy to\nentertain running whatever tests folks here would like to suggest,\nspare time-permitting.\n\nI've already tried bonnie++, sysbench and a simple WAL emulation\ntest program I wrote more than 10 years ago. The drive tests at\naround 160Mbyte/s on bulk data and 4k tps for commit rate writing\nsmall blocks.\n\n\n",
"msg_date": "Sat, 01 Oct 2011 20:39:27 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and see if it's worth the price premium.\n\n\n________________________________\nFrom: David Boreham <[email protected]>\nTo: PGSQL Performance <[email protected]>\nSent: Saturday, October 1, 2011 10:39 PM\nSubject: [PERFORM] Suggestions for Intel 710 SSD test\n\n\nI have a 710 (Lyndonville) SSD in a test server. Ultimately we'll run\ncapacity tests using our application (which in turn uses PG), but it'll\ntake a while to get those set up. In the meantime, I'd be happy to\nentertain running whatever tests folks here would like to suggest,\nspare time-permitting.\n\nI've already tried bonnie++, sysbench and a simple WAL emulation\ntest program I wrote more than 10 years ago. The drive tests at\naround 160Mbyte/s on bulk data and 4k tps for commit rate writing\nsmall blocks.\n\n\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nDo you have an Intel 320? I'd love to see tests comparing 710 to 320 and see if it's worth the price premium.From: David Boreham <[email protected]>To: PGSQL Performance <[email protected]>Sent: Saturday, October 1, 2011 10:39 PMSubject: [PERFORM] Suggestions for Intel 710 SSD test\nI have a 710 (Lyndonville) SSD in a test server. Ultimately we'll runcapacity tests using our application (which in turn uses PG), but it'lltake a while to get those set up. In the meantime, I'd be happy toentertain running whatever tests folks here would like to suggest,spare time-permitting.I've already tried bonnie++, sysbench and a simple WAL emulationtest program I wrote more than 10 years ago. The drive tests ataround 160Mbyte/s on bulk data and 4k tps for commit rate writingsmall blocks.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 1 Oct 2011 20:22:42 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "How does this same benchmark compare on similar (or same) hardware but with magnetic media?\n\nOn Oct 1, 2011, at 8:22 PM, Andy wrote:\n\n> Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and see if it's worth the price premium.\n> \n> From: David Boreham <[email protected]>\n> To: PGSQL Performance <[email protected]>\n> Sent: Saturday, October 1, 2011 10:39 PM\n> Subject: [PERFORM] Suggestions for Intel 710 SSD test\n> \n> \n> I have a 710 (Lyndonville) SSD in a test server. Ultimately we'll run\n> capacity tests using our application (which in turn uses PG), but it'll\n> take a while to get those set up. In the meantime, I'd be happy to\n> entertain running whatever tests folks here would like to suggest,\n> spare time-permitting.\n> \n> I've already tried bonnie++, sysbench and a simple WAL emulation\n> test program I wrote more than 10 years ago. The drive tests at\n> around 160Mbyte/s on bulk data and 4k tps for commit rate writing\n> small blocks.\n> \n> \n> \n> -- Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\nHow does this same benchmark compare on similar (or same) hardware but with magnetic media?On Oct 1, 2011, at 8:22 PM, Andy wrote:Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and see if it's worth the price premium.From: David Boreham <[email protected]>To: PGSQL Performance <[email protected]>Sent: Saturday, October 1, 2011 10:39 PMSubject: [PERFORM] Suggestions for Intel 710 SSD test\nI have a 710 (Lyndonville) SSD in a test server. Ultimately we'll runcapacity tests using our application (which in turn uses PG), but it'lltake a while to get those set up. In the meantime, I'd be happy toentertain running whatever tests folks here would like to suggest,spare time-permitting.I've already tried bonnie++, sysbench and a simple WAL emulationtest program I wrote more than 10 years ago. The drive tests ataround 160Mbyte/s on bulk data and 4k tps for commit rate writingsmall blocks.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 01 Oct 2011 21:00:12 -0700",
"msg_from": "Gregory Gerard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "Anandtech took the trouble of doing that:\nhttp://www.anandtech.com/show/4902/intel-ssd-710-200gb-review\n\nI think the main advantage of the 710 compared to the 320 is its much \nheavier over-provisioning and better quality MLC-chips. Both the 320 and \n710 use the same controller and offer similar performance. But 320GB of \nraw capacity is sold as a 300GB Intel 320 and as a 200GB Intel 710...\n\nSo if you don't need write-endurance, you can probably assume the 320 \nwill be more capacity and bang for the buck and will be good enough. If \nyou're a worried about write-endurance, you should have a look at the \n710. You can obviously also only provision about 200GB of that 300GB \n320-ssd and thus increase its expected live span, but you'd still miss \nthe higher quality MLC. Given the fact that you can get two 320's for \nthe price of one 710, its probably always a bit difficult to actually \nmake the choice (unless you want a fixed amount of disks and the best \nendurance possible for that).\n\nBest regards,\n\nArjen\n\nOn 2-10-2011 5:22 Andy wrote:\n> Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and\n> see if it's worth the price premium.\n>\n> ------------------------------------------------------------------------\n> *From:* David Boreham <[email protected]>\n> *To:* PGSQL Performance <[email protected]>\n> *Sent:* Saturday, October 1, 2011 10:39 PM\n> *Subject:* [PERFORM] Suggestions for Intel 710 SSD test\n>\n>\n> I have a 710 (Lyndonville) SSD in a test server. Ultimately we'll run\n> capacity tests using our application (which in turn uses PG), but it'll\n> take a while to get those set up. In the meantime, I'd be happy to\n> entertain running whatever tests folks here would like to suggest,\n> spare time-permitting.\n>\n> I've already tried bonnie++, sysbench and a simple WAL emulation\n> test program I wrote more than 10 years ago. The drive tests at\n> around 160Mbyte/s on bulk data and 4k tps for commit rate writing\n> small blocks.\n>\n>\n>\n> -- Sent via pgsql-performance mailing list\n> ([email protected] <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n",
"msg_date": "Sun, 02 Oct 2011 10:33:45 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/1/2011 10:00 PM, Gregory Gerard wrote:\n> How does this same benchmark compare on similar (or same) hardware but \n> with magnetic media?\n\nI don't have that data at present :(\n\nSo far I've been comparing performance with our current production \nmachines, which are older.\nThose machines use 'raptors with no BBU/WBC, so their tps performance is \nvery low (100 per second or so).\nSince our application tends to be limited by the transaction commit \nrate, the SSD-based machine\nis clearly going to be at least an order of magnitude faster. But I \ndon't know its relative performance\nvs. the traditional BBU/WBC controller + string of drives approach (we \nrejected that as an option already\nfor power consumption and space reasons).\n\nI could install a raptor drive in the new server and test that, but it \nwouldn't have a BBU/WBC controller.\nI guess I could enable the drive's write cache which might produce \ncomparable results to a controller's WBC.\n\n\n",
"msg_date": "Sun, 02 Oct 2011 07:49:26 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/1/2011 9:22 PM, Andy wrote:\n> Do you have an Intel 320? I'd love to see tests comparing 710 to 320 \n> and see if it's worth the price premium.\n>\nGood question. I don't have a 320 drive today, but will probably get one \nfor testing soon.\n\nHowever, my conclusion based on the Intel spec documents is that the 710 \nand 320 will have similar performance.\nWe elected to use 710 devices not for performance reasons vs the 320 but \nfor the specified lifetime endurance.\nThe 710 series is specified at around 4k complete drive overwrites where \nthe 320 is specified at only\naround 100. So I see the 710 series as \"like the 320 but much less \nlikely to wear out\". It may also have\nbetter behavior under constant load (the white paper makes vague mention \nof different background GC\nvs the non-enterprise drives).\n\nSo for our uses, the 320 series looks great except for concerns that :\na) the may wear out quite quickly, leading to extra cost to enter data \ncenter and pull drives, etc and the need to maintain a long-term test \nrig to determine if and when they wear out before it happens in production.\nb) the GC may behave badly under constant load (leading for example to \nunexpected periods of relatively very poor performance).\n\nThe value proposition for the 710 vs. 320 for me is not performance but \nthe avoidance of these two worries.\n\n\n",
"msg_date": "Sun, 02 Oct 2011 08:02:26 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/2/2011 2:33 AM, Arjen van der Meijden wrote:\n>\n>\n> Given the fact that you can get two 320's for the price of one 710, \n> its probably always a bit difficult to actually make the choice \n> (unless you want a fixed amount of disks and the best endurance \n> possible for that).\n\nOne thing I'd add to this is that the price/bit is more like 4X ($2k for \nthe 300G 710 vs $540 for the 300G 320).\nThe largest 710 drive is 300G whereas the largest 320 is 600G which may \nimply that the 710's are twice\nas over-provisioned as the 320. It may be that at present we're paying \n2x for the relative over-provisioning\nand another 2x to enjoy the better silicon and firmware. This hopefully \nimplies that prices will fall\nin the future provided a credible competitor emerges (Hitachi??).\n\n\n\n\n\n\n",
"msg_date": "Sun, 02 Oct 2011 17:33:47 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "If I may ask what were your top three candidates before choosing the intel?\n\nAlso why not just plan a graceful switch to a replicated server? At some point you have to detect the drive is about to go (or it just goes without warning). Presumably that point will be in a while and be coordinated with an upgrade like 9.2 in a year.\n\nFinally why not the pci based cards?\n\nOn Oct 2, 2011, at 16:33, David Boreham <[email protected]> wrote:\n\n> On 10/2/2011 2:33 AM, Arjen van der Meijden wrote:\n>> \n>> \n>> Given the fact that you can get two 320's for the price of one 710, its probably always a bit difficult to actually make the choice (unless you want a fixed amount of disks and the best endurance possible for that).\n> \n> One thing I'd add to this is that the price/bit is more like 4X ($2k for the 300G 710 vs $540 for the 300G 320).\n> The largest 710 drive is 300G whereas the largest 320 is 600G which may imply that the 710's are twice\n> as over-provisioned as the 320. It may be that at present we're paying 2x for the relative over-provisioning\n> and another 2x to enjoy the better silicon and firmware. This hopefully implies that prices will fall\n> in the future provided a credible competitor emerges (Hitachi??).\n> \n> \n> \n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 02 Oct 2011 17:26:25 -0700",
"msg_from": "Gregory Gerard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/2/2011 6:26 PM, Gregory Gerard wrote:\n> If I may ask what were your top three candidates before choosing the intel?\nAll the other options considered viable were using traditional \nrotational disks.\nI personally don't have any confidence in the other SSD vendors today,\nexcept perhaps for FusionIO (where a couple of old friends work, and I \ncan certainly vouch for\ntheir competence) but their products are too costly for our application \nat present.\n>\n> Also why not just plan a graceful switch to a replicated server? At some point you have to detect the drive is about to go (or it just goes without warning). Presumably that point will be in a while and be coordinated with an upgrade like 9.2 in a year.\nSure, we have this capability but once you walk through what has to \nhappen if you are burning through\nSSDs every few months, the 710 value proposition is more attractive for \nus. For example our data center\nis 1200 miles from our HQ and it takes a very long road trip or a plane \nflight to get hands-on with the\nboxes. We spent some considerable time planning for the 320 style \ndeployment to be honest --\nfiguring out how to predict when the drive would wear out, building \nreplication mechanisms that\nwould cope gracefully and so on. But given the option of the 710 where \nwear out can essentially\nbe crossed off the list of things to worry about, that's the way we \ndecided to go.\n>\n> Finally why not the pci based cards?\n>\nFew reasons: 1) Today Intel doesn't make them (that will change soon),\n2) a desire to maintain backwards compatibility at least for this \ngeneration, on a system architecture level\nwith traditional disk drives, 3) concerns about mechanical integrity and \nsystem airflow issues with the\nPCI card and connector in 1U enclosures. The SSD fits into the same \nlocation as a traditional disk\nbut can be velcro'ed down rather than bolted for easier field replacement.\n\n\n",
"msg_date": "Sun, 02 Oct 2011 18:44:21 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/01/2011 07:39 PM, David Boreham wrote:\n> I've already tried bonnie++, sysbench and a simple WAL emulation\n> test program I wrote more than 10 years ago. The drive tests at\n> around 160Mbyte/s on bulk data and 4k tps for commit rate writing\n> small blocks.\n\nThat sounds about the same performance as the 320 drive I tested earlier \nthis year then. You might try duplicating some of the benchmarks I ran \non that: \nhttp://archives.postgresql.org/message-id/[email protected]\n\nMake sure to reference the capacity of the drive though. The 320 units \ndo scale their performance based on that, presumably there's some of \nthat with the 710s as well.\n\nI just released a new benchmarking wrapper to measure seek performance \nof drives and graph the result, and that's giving me interesting results \nwhen comparing the 320 vs. traditional disk arrays. I've attached a \nteaser output from it on a few different drive setups I've tested \nrecently. (The 320 test there was seeking against a smaller data set \nthan the regular arrays, but its performance doesn't degrade much based \non that anyway)\n\nThe program is at https://github.com/gregs1104/seek-scaling but it's \nstill quite rough to use.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Sun, 02 Oct 2011 21:35:43 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/2/2011 10:35 PM, Greg Smith wrote:\n>\n> That sounds about the same performance as the 320 drive I tested \n> earlier this year then. You might try duplicating some of the \n> benchmarks I ran on that: \n> http://archives.postgresql.org/message-id/[email protected]\nThanks. Actually I had been looking for that email, which by brain \nremembered but my computer and Google could not find ;)\n>\n> Make sure to reference the capacity of the drive though. The 320 \n> units do scale their performance based on that, presumably there's \n> some of that with the 710s as well.\nThis is a 100G drive. The performance specs vary curiously vs capacity \nfor the 710 series : write tps actually goes down as the size increases, \nbut bulk write data rate is higher for the larger drives.\n\nI ran some pgbench earlier this evening, before reading your old email \nabove, so the parameters are different:\n\nThis is the new server with 100G 710 (AMD 6128 with 64G):\n\nbash-4.1$ /usr/pgsql-9.1/bin/pgbench -T 600 -j 8 -c 64\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 8\nduration: 600 s\nnumber of transactions actually processed: 2182014\ntps = 3636.520405 (including connections establishing)\ntps = 3636.738290 (excluding connections establishing)\n\nThis is the output from iostat while the test runs:\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 4851.00 0.00 14082.00 0.00 73.76 \n10.73 45.72 3.25 0.05 71.60\n\nThis is our current production server type (Dual AMD 2346HE 32G 10K 300G \n'raptor) with disk write cache turned off and with data and wal on the \nsame drive:\n\nbash-3.2$ /usr/pgsql-9.1/bin/pgbench -T 600 -j 8 -c 64\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 8\nduration: 600 s\nnumber of transactions actually processed: 66426\ntps = 108.925653 (including connections establishing)\ntps = 108.941509 (excluding connections establishing)\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 438.00 0.00 201.00 0.00 2.60 \n26.47 55.95 286.09 4.98 100.00\n\nsame server with disk write cache turned on:\n\nbash-3.2$ /usr/pgsql-9.1/bin/pgbench -T 600 -j 8 -c 64\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 8\nduration: 600 s\nnumber of transactions actually processed: 184724\ntps = 305.654008 (including connections establishing)\ntps = 305.694317 (excluding connections establishing)\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 668.00 0.00 530.00 0.00 4.55 \n17.57 142.99 277.28 1.89 100.10\n\nThere are some OS differences between the old and new servers : old is \nrunning CentOS 5.7 while the new is running 6.0.\nOld server has atime enabled while new has relatime mount option \nspecified. Both are running PG 9.1.1 from the yum repo.\n\nOne very nice measurement is the power consumption on the new server : \npeak dissipation is 135W under the pgbench load\n(measured on the ac input to the psu). Idle is draws around 90W.\n\n\n\n\n\n",
"msg_date": "Sun, 02 Oct 2011 22:49:43 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/2/2011 10:49 PM, David Boreham wrote:\n>\n> There are some OS differences between the old and new servers : old is \n> running CentOS 5.7 while the new is running 6.0.\n> Old server has atime enabled while new has relatime mount option \n> specified. Both are running PG 9.1.1 from the yum repo.\n>\n>\nAlso the old server is using ext3 while the new has ext4 (with \ndiscard/trim enabled).\n\n\n\n",
"msg_date": "Sun, 02 Oct 2011 23:24:04 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "Which repo did you get them from?\n\nOn Oct 2, 2011, at 10:24 PM, David Boreham wrote:\n\n> On 10/2/2011 10:49 PM, David Boreham wrote:\n>> \n>> There are some OS differences between the old and new servers : old is running CentOS 5.7 while the new is running 6.0.\n>> Old server has atime enabled while new has relatime mount option specified. Both are running PG 9.1.1 from the yum repo.\n>> \n>> \n> Also the old server is using ext3 while the new has ext4 (with discard/trim enabled).\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sun, 02 Oct 2011 22:55:47 -0700",
"msg_from": "Gregory Gerard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
},
{
"msg_contents": "On 10/2/2011 11:55 PM, Gregory Gerard wrote:\n> Which repo did you get them from?\n>\n>\nhttp://yum.postgresql.org/9.1/redhat/rhel-$releasever-$basearch\n\n\n",
"msg_date": "Sun, 02 Oct 2011 23:57:42 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for Intel 710 SSD test"
}
] |
[
{
"msg_contents": "Since it's my first on this list, I'd like to say \"Hi guys\" :)\n\nHere is definition of my table:\na9-dev=> \\d records;\n Table \"public.records\"\n Column | Type | Modifiers \n--------------------------------------+-----------------------------+-----------\n id | bigint | not null\n checksum | character varying(32) | not null\n data | text | not null\n delete_date | timestamp without time zone | \n last_processing_date | timestamp without time zone | \n object_id | character varying(255) | not null\n processing_path | character varying(255) | not null\n schema_id | character varying(255) | not null\n source_id | character varying(255) | not null\n source_object_last_modification_date | timestamp without time zone | not null\nIndexes:\n \"records_pkey\" PRIMARY KEY, btree (id)\n \"unq_records_0\" UNIQUE, btree (object_id, schema_id, source_id, processing_path)\n \"length_processing_path_id_idx\" btree (length(processing_path::text), id)\n \"length_processing_path_idx\" btree (length(processing_path::text))\n \"object_id_id_idx\" btree (object_id, id)\n \"schema_id_id_idx\" btree (schema_id, id)\n \"schema_id_idx\" btree (schema_id)\n \"source_id_id_idx\" btree (source_id, id)\n \"source_id_idx\" btree (source_id)\n \"source_object_last_modification_date_id_idx\" btree (source_object_last_modification_date, id)\n \"source_object_last_modification_date_idx\" btree (source_object_last_modification_date)\n\nAverage length of value of \"data\" column = 2991.7947061626100466\n\nWhen I perform query such as this: \"select * from records where source_id = 'XXX' order by id limit 200;\" I expect DB to use index source_id_id_idx with XXX as filter. It is true for all but one values of XXX - when I ask for records with most common source_id, records_pkey index is used instead and performance is terrible! 
Explain analyze results below.\n\na9-dev=> explain analyze select * from records where source_id ='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id limit 200; \n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..755.61 rows=200 width=1127) (actual time=75.292..684.582 rows=200 loops=1)\n -> Index Scan using source_id_id_idx on records (cost=0.00..1563542.89 rows=413849 width=1127) (actual time=75.289..684.495 rows=200 loops=1)\n Index Cond: ((source_id)::text = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml'::text)\n Total runtime: 690.358 ms\n(4 rows)\n\na9-dev=> explain analyze select * from records where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id limit 200; \n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..466.22 rows=200 width=1127) (actual time=124093.485..124095.540 rows=200 loops=1)\n -> Index Scan using records_pkey on records (cost=0.00..2333280.84 rows=1000937 width=1127) (actual time=124093.484..124095.501 rows=200 loops=1)\n Filter: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n Total runtime: 124130.247 ms\n(4 rows)\n\n\nSome info about data distrubution:\n\na9-dev=> select min(id) from records;\n min \n--------\n 190830\n(1 row)\n\na9-dev=> select min(id), max(id) from records where source_id='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml';\n min | max \n---------+---------\n 1105217 | 3811326\n(1 row)\na9-dev=> select min(id), max(id) from records where source_id='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml';\n min | max \n---------+---------\n 1544991 | 3811413\n(1 row)\n\na9-dev=> select min(id), max(id) from (select id from records where source_id = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id limit 200) as a;\n min | max \n---------+---------\n1105217 | 1105416\n(1 row)\n\na9-dev=> select min(id), max(id) from (select id from records where source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id limit 200) as a;\n min | max \n---------+---------\n1544991 | 1545190\n(1 row)\n\n\n\na9-dev=> select source_id, count(*) from records where source_id = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' or source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' group by source_id;\n source_id | count \n--------------------------------------------------------+--------\n http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml | 427254\n http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml | 989184\n(2 rows)\n\na9-dev=> select count(*) from records;\n count \n---------\n 3620311\n(1 row)\n\n\nDB settings:\n\na9-dev=> SELECT\na9-dev-> 'version'::text AS \"name\",\na9-dev-> version() AS \"current_setting\"\na9-dev-> UNION ALL\na9-dev-> SELECT\na9-dev-> name,current_setting(name) \na9-dev-> FROM pg_settings \na9-dev-> WHERE NOT source='default' AND NOT name IN\na9-dev-> ('config_file','data_directory','hba_file','ident_file',\na9-dev(> 'log_timezone','DateStyle','lc_messages','lc_monetary',\na9-dev(> 'lc_numeric','lc_time','timezone_abbreviations',\na9-dev(> 'default_text_search_config','application_name',\na9-dev(> 'transaction_deferrable','transaction_isolation',\na9-dev(> 'transaction_read_only');\n name | current_setting 
\n--------------------------+-----------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.0.4 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.6.0 20110530 (Red Hat 4.6.0-9), 64-bit\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_rotation_age | 1d\n log_rotation_size | 0\n log_truncate_on_rotation | on\n logging_collector | on\n max_connections | 100\n max_stack_depth | 2MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 24MB\n TimeZone | Poland\n(14 rows)\n\n\nThis query was always slow. Autovacuum is on, and I ran VACUUM ANALYZE manually few minutes before writing this email. \n\nPlease help me with my problem. I'll be happy to provide any additional information if needed.\nMichal Nowak\n\n",
"msg_date": "Mon, 03 Oct 2011 11:44:21 +0200",
"msg_from": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query with order by and limit is very slow - wrong index used"
},
{
"msg_contents": "How many rows do you have in that table?\n\nI think , that planner thinks that the element you are looking for is\nso common - that it will be to expensive to use index to fetch it.\nPerhaps try increasing default_statistics_target , and revacuuming the table.\n\nYou could also try changing it just for the column:\n\nALTER TABLE records ALTER id SET source_id 1000; vacuum analyze verbose records;\n",
"msg_date": "Mon, 3 Oct 2011 11:02:26 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow - wrong\n index used"
},
{
"msg_contents": "> How many rows do you have in that table?\n\na9-dev=> select count(*) from records;\n count \n---------\n3620311\n(1 row)\n\na9-dev=> select source_id, count(*) from records where source_id = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' or source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' group by source_id;\n source_id | count \n--------------------------------------------------------+--------\nhttp://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml | 427254\nhttp://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml | 989184\n(2 rows)\n\n\n> ALTER TABLE records ALTER id SET source_id 1000; vacuum analyze verbose records;\nDid you mean ALTER TABLE records ALTER id SET STATISTICS 1000;?\n\n\n\n\nWiadomość napisana przez Gregg Jaskiewicz w dniu 3 paź 2011, o godz. 12:02:\n\n> How many rows do you have in that table?\n> \n> I think , that planner thinks that the element you are looking for is\n> so common - that it will be to expensive to use index to fetch it.\n> Perhaps try increasing default_statistics_target , and revacuuming the table.\n> \n> You could also try changing it just for the column:\n> \n> ALTER TABLE records ALTER id SET source_id 1000; vacuum analyze verbose records;\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 03 Oct 2011 12:51:52 +0200",
"msg_from": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with order by and limit is very slow - wrong index\n used"
},
{
"msg_contents": "2011/10/3 Nowak Michał <[email protected]>:\n>> How many rows do you have in that table?\n>\n> a9-dev=> select count(*) from records;\n> count\n> ---------\n> 3620311\n> (1 row)\n\n\n>\n> a9-dev=> select source_id, count(*) from records where source_id = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' or source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' group by source_id;\n> source_id | count\n> --------------------------------------------------------+--------\n> http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml | 427254\n> http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml | 989184\n> (2 rows)\n\nSo the second one is roughly 27% of the table. I don't know the\nactual condition under which planner changes over the seqscan, but\nthat value seems quite common it seems.\nThe other thing planner is going to look at is the correlation, most\ncommon values, most common frequencies.\nIn other words, if the value is 27% of all values, but is evenly\nspread across - I think planner will go for seq scan regardless.\n\nAt the end of the day (afaik), index scan only pics pages for narrowed\ndown seqscan really. So imagine if your index scan returned all the\npages, you would still have to do a seqscan on all of them. Planner is\ntrying to avoid that by weighting the costs of both operations.\nIf it is too slow to run the current queries, you could try\nnormalizing the table by splitting source_id into separate one and\nreferencing it by an id. Very often what you'd find is that doing so\nlowers I/O required, hence saves a lot of time in queries. Downside\nis, that it is bit harder to add/update the tables. But that's where\ntriggers and procedures come handy.\n\n\n>\n>> ALTER TABLE records ALTER id SET source_id 1000; vacuum analyze verbose records;\n> Did you mean ALTER TABLE records ALTER id SET STATISTICS 1000;?\n\nYup, that's what I meant. Sorry.\n\n\n-- \nGJ\n",
"msg_date": "Mon, 3 Oct 2011 12:20:10 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow - wrong\n index used"
},
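A hedged sketch of the normalization Gregg suggests above: move the long source_id URL into its own table and reference it from records by a small integer key. The names sources, source_pk, source_ref and the new index name are illustrative, not taken from the thread:

    -- lookup table holding each distinct source URL exactly once
    CREATE TABLE sources (
        source_pk   serial PRIMARY KEY,
        source_url  varchar(255) NOT NULL UNIQUE
    );
    INSERT INTO sources (source_url)
        SELECT DISTINCT source_id FROM records;
    -- records then carries a 4-byte key instead of a ~54-byte URL
    ALTER TABLE records ADD COLUMN source_ref integer REFERENCES sources (source_pk);
    UPDATE records r
        SET source_ref = s.source_pk
        FROM sources s
        WHERE s.source_url = r.source_id;
    CREATE INDEX records_source_ref_id_idx ON records (source_ref, id);

Smaller index entries mean less I/O per probe, at the cost Gregg mentions of slightly more involved inserts and updates.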
{
"msg_contents": "Setting statistics to 1000 on id and source_id didn't solve my problem:\n\na9-dev=> explain analyze select * from records where source_id ='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id limit 200; \n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..757.51 rows=200 width=1126) (actual time=43.648..564.798 rows=200 loops=1)\n -> Index Scan using source_id_id_idx on records (cost=0.00..1590267.66 rows=419868 width=1126) (actual time=43.631..564.700 rows=200 loops=1)\n Index Cond: ((source_id)::text = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml'::text)\nTotal runtime: 564.895 ms\n(4 rows)\n\na9-dev=> explain analyze select * from records where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id limit 200; \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..489.57 rows=200 width=1126) (actual time=99701.773..99703.858 rows=200 loops=1)\n -> Index Scan using records_pkey on records (cost=0.00..2441698.81 rows=997489 width=1126) (actual time=99684.878..99686.936 rows=200 loops=1)\n Filter: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\nTotal runtime: 99705.916 ms\n(4 rows)\n\n\nAs you can see, query looking for records with most common source_id is still by many magnitudes slower.\n\n> In other words, if the value is 27% of all values, but is evenly\n> spread across - I think planner will go for seq scan regardless.\n\nSeq scan is not used - problem is, that planner chooses records_pkey index and checks every record's source_id until it finds 200 of them. I think that if source_id_id_idx index was used, query would execute as fast as for every other value of source_id.\nI even made an experiment: I created table records2 as copy of records, and added additional column id2 with same values as id. I created same indexes on records2 as on records. 
Difference is, that there is no index on id2.\nHere are the results of problematic queries:\na9-dev=> explain analyze select * from records2 where source_id ='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id2 limit 200; \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..770.15 rows=200 width=1124) (actual time=0.071..0.220 rows=200 loops=1)\n -> Index Scan using source_id2_id2_id2x on records2 (cost=0.00..1585807.44 rows=411820 width=1124) (actual time=0.070..0.196 rows=200 loops=1)\n Index Cond: ((source_id)::text = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml'::text)\nTotal runtime: 0.255 ms\n(4 rows)\n\na9-dev=> explain analyze select * from records2 where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id2 limit 200; \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..770.01 rows=200 width=1124) (actual time=0.076..0.205 rows=200 loops=1)\n -> Index Scan using source_id2_id2_id2x on records2 (cost=0.00..3735751.15 rows=970308 width=1124) (actual time=0.074..0.180 rows=200 loops=1)\n Index Cond: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\nTotal runtime: 0.235 ms\n(4 rows)\n\nAs you can see, even for most common (~27%) values of source_id, planner chooses to use souce_id2_id2_id2x (I replaced id with id2 when creating indexes on records2 :]) index and query executes as fast as for other values.\n\nSo, the question is: why planner chooses records_pkey over source_id_id_idx for the most common value of source_id?\n\n\n\nWiadomość napisana przez Gregg Jaskiewicz w dniu 3 paź 2011, o godz. 13:20:\n\n> 2011/10/3 Nowak Michał <[email protected]>:\n>>> How many rows do you have in that table?\n>> \n>> a9-dev=> select count(*) from records;\n>> count\n>> ---------\n>> 3620311\n>> (1 row)\n> \n> \n>> \n>> a9-dev=> select source_id, count(*) from records where source_id = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' or source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' group by source_id;\n>> source_id | count\n>> --------------------------------------------------------+--------\n>> http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml | 427254\n>> http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml | 989184\n>> (2 rows)\n> \n> So the second one is roughly 27% of the table. I don't know the\n> actual condition under which planner changes over the seqscan, but\n> that value seems quite common it seems.\n> The other thing planner is going to look at is the correlation, most\n> common values, most common frequencies.\n> In other words, if the value is 27% of all values, but is evenly\n> spread across - I think planner will go for seq scan regardless.\n> \n> At the end of the day (afaik), index scan only pics pages for narrowed\n> down seqscan really. So imagine if your index scan returned all the\n> pages, you would still have to do a seqscan on all of them. Planner is\n> trying to avoid that by weighting the costs of both operations.\n> If it is too slow to run the current queries, you could try\n> normalizing the table by splitting source_id into separate one and\n> referencing it by an id. Very often what you'd find is that doing so\n> lowers I/O required, hence saves a lot of time in queries. 
Downside\n> is, that it is bit harder to add/update the tables. But that's where\n> triggers and procedures come handy.\n> \n> \n>> \n>>> ALTER TABLE records ALTER id SET source_id 1000; vacuum analyze verbose records;\n>> Did you mean ALTER TABLE records ALTER id SET STATISTICS 1000;?\n> \n> Yup, that's what I meant. Sorry.\n> \n> \n> -- \n> GJ\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Mon, 03 Oct 2011 15:08:20 +0200",
"msg_from": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Query with order by and limit is very slow - wrong\n index used"
},
{
"msg_contents": "Please compare costs and actual times in those queries:\n\na9-dev=> explain analyze select * from records where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id limit 200;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..489.57 rows=200 width=1126) (actual time=99701.773..99703.858 rows=200 loops=1)\n -> Index Scan using records_pkey on records (cost=0.00..2441698.81 rows=997489 width=1126) (actual time=99684.878..99686.936 rows=200 loops=1)\n Filter: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n Total runtime: 99705.916 ms\n(4 rows)\n\na9-dev=> explain analyze select * from records2 where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id2 limit 200;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..770.01 rows=200 width=1124) (actual time=0.076..0.205 rows=200 loops=1)\n -> Index Scan using source_id2_id2_id2x on records2 (cost=0.00..3735751.15 rows=970308 width=1124) (actual time=0.074..0.180 rows=200 loops=1)\n Index Cond: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n Total runtime: 0.235 ms\n(4 rows)\n\n\nFirst one uses records_pkey, and with estimated cost 2441698.81 runs in over 1,5 minute.\nSecond uses index on (source_id, id) and with estimated cost 3735751.15 runs in 235 miliseconds.\n\nIMHO, problem lies not in records distribution nor normalization, but in planner's wrong cost estimation. I don't know to tell/force him to use proper index.\n\n\nWiadomość napisana przez Gregg Jaskiewicz w dniu 3 paź 2011, o godz. 15:12:\n\n> You forgot to do 'reply all' mate.\n> If what I said about correlation is true (record spread between pages)\n> - then increasing stats won't help you.\n> \n> As a test, try clustering the table by the source_id column. Vacuum it\n> again, and retry. 
Unfortunately even if that helps, it won't actually\n> fix it permanently.\n> you probably need to normalize the table.\n> \n> \n> 2011/10/3 Nowak Michał <[email protected]>:\n>> Setting statistics to 1000 on id and source_id didn't solve my problem:\n>> \n>> a9-dev=> explain analyze select * from records where source_id ='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id limit 200;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..757.51 rows=200 width=1126) (actual time=43.648..564.798 rows=200 loops=1)\n>> -> Index Scan using source_id_id_idx on records (cost=0.00..1590267.66 rows=419868 width=1126) (actual time=43.631..564.700 rows=200 loops=1)\n>> Index Cond: ((source_id)::text = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml'::text)\n>> Total runtime: 564.895 ms\n>> (4 rows)\n>> \n>> a9-dev=> explain analyze select * from records where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id limit 200;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..489.57 rows=200 width=1126) (actual time=99701.773..99703.858 rows=200 loops=1)\n>> -> Index Scan using records_pkey on records (cost=0.00..2441698.81 rows=997489 width=1126) (actual time=99684.878..99686.936 rows=200 loops=1)\n>> Filter: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n>> Total runtime: 99705.916 ms\n>> (4 rows)\n>> \n>> \n>> As you can see, query looking for records with most common source_id is still by many magnitudes slower.\n>> \n>>> In other words, if the value is 27% of all values, but is evenly\n>>> spread across - I think planner will go for seq scan regardless.\n>> \n>> Seq scan is not used - problem is, that planner chooses records_pkey index and checks every record's source_id until it finds 200 of them. I think that if source_id_id_idx index was used, query would execute as fast as for every other value of source_id.\n>> I even made an experiment: I created table records2 as copy of records, and added additional column id2 with same values as id. I created same indexes on records2 as on records. 
Difference is, that there is no index on id2.\n>> Here are the results of problematic queries:\n>> a9-dev=> explain analyze select * from records2 where source_id ='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id2 limit 200;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..770.15 rows=200 width=1124) (actual time=0.071..0.220 rows=200 loops=1)\n>> -> Index Scan using source_id2_id2_id2x on records2 (cost=0.00..1585807.44 rows=411820 width=1124) (actual time=0.070..0.196 rows=200 loops=1)\n>> Index Cond: ((source_id)::text = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml'::text)\n>> Total runtime: 0.255 ms\n>> (4 rows)\n>> \n>> a9-dev=> explain analyze select * from records2 where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id2 limit 200;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..770.01 rows=200 width=1124) (actual time=0.076..0.205 rows=200 loops=1)\n>> -> Index Scan using source_id2_id2_id2x on records2 (cost=0.00..3735751.15 rows=970308 width=1124) (actual time=0.074..0.180 rows=200 loops=1)\n>> Index Cond: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n>> Total runtime: 0.235 ms\n>> (4 rows)\n>> \n>> As you can see, even for most common (~27%) values of source_id, planner chooses to use souce_id2_id2_id2x (I replaced id with id2 when creating indexes on records2 :]) index and query executes as fast as for other values.\n>> \n>> So, the question is: why planner chooses records_pkey over source_id_id_idx for the most common value of source_id?\n>> \n>> \n>> \n>> Wiadomość napisana przez Gregg Jaskiewicz w dniu 3 paź 2011, o godz. 13:20:\n>> \n>>> 2011/10/3 Nowak Michał <[email protected]>:\n>>>>> How many rows do you have in that table?\n>>>> \n>>>> a9-dev=> select count(*) from records;\n>>>> count\n>>>> ---------\n>>>> 3620311\n>>>> (1 row)\n>>> \n>>> \n>>>> \n>>>> a9-dev=> select source_id, count(*) from records where source_id = 'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' or source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' group by source_id;\n>>>> source_id | count\n>>>> --------------------------------------------------------+--------\n>>>> http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml | 427254\n>>>> http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml | 989184\n>>>> (2 rows)\n>>> \n>>> So the second one is roughly 27% of the table. I don't know the\n>>> actual condition under which planner changes over the seqscan, but\n>>> that value seems quite common it seems.\n>>> The other thing planner is going to look at is the correlation, most\n>>> common values, most common frequencies.\n>>> In other words, if the value is 27% of all values, but is evenly\n>>> spread across - I think planner will go for seq scan regardless.\n>>> \n>>> At the end of the day (afaik), index scan only pics pages for narrowed\n>>> down seqscan really. So imagine if your index scan returned all the\n>>> pages, you would still have to do a seqscan on all of them. 
Planner is\n>>> trying to avoid that by weighting the costs of both operations.\n>>> If it is too slow to run the current queries, you could try\n>>> normalizing the table by splitting source_id into separate one and\n>>> referencing it by an id. Very often what you'd find is that doing so\n>>> lowers I/O required, hence saves a lot of time in queries. Downside\n>>> is, that it is bit harder to add/update the tables. But that's where\n>>> triggers and procedures come handy.\n>>> \n>>> \n>>>> \n>>>>> ALTER TABLE records ALTER id SET source_id 1000; vacuum analyze verbose records;\n>>>> Did you mean ALTER TABLE records ALTER id SET STATISTICS 1000;?\n>>> \n>>> Yup, that's what I meant. Sorry.\n>>> \n>>> \n>>> --\n>>> GJ\n>>> \n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n>> \n> \n> \n> \n> -- \n> GJ\n\n",
"msg_date": "Mon, 03 Oct 2011 15:59:19 +0200",
"msg_from": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with order by and limit is very slow - wrong index\n used"
},
{
"msg_contents": "> a9-dev=> explain analyze select * from records where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id limit 200;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..489.57 rows=200 width=1126) (actual time=99701.773..99703.858 rows=200 loops=1)\n> -> Index Scan using records_pkey on records (cost=0.00..2441698.81 rows=997489 width=1126) (actual time=99684.878..99686.936 rows=200 loops=1)\n> Filter: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n> Total runtime: 99705.916 ms\n> (4 rows)\n>\n> a9-dev=> explain analyze select * from records2 where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by id2 limit 200;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..770.01 rows=200 width=1124) (actual time=0.076..0.205 rows=200 loops=1)\n> -> Index Scan using source_id2_id2_id2x on records2 (cost=0.00..3735751.15 rows=970308 width=1124) (actual time=0.074..0.180 rows=200 loops=1)\n> Index Cond: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\n> Total runtime: 0.235 ms\n> (4 rows)\n>\n>\n> First one uses records_pkey, and with estimated cost 2441698.81 runs in over 1,5 minute.\n> Second uses index on (source_id, id) and with estimated cost 3735751.15 runs in 235 miliseconds.\n>\n> IMHO, problem lies not in records distribution nor normalization, but in planner's wrong cost estimation. I don't know to tell/force him to use proper index.\n\nGetting information on your current configuration should help.\nPlease see http://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nYou should take care of the cache effect of your queries between your\ntests, here it is not a problem, but this explain was way longer for\nthis similar query.\n\na9-dev=> explain analyze select * from records where source_id\n='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml' order by id\nlimit 200;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..757.51 rows=200 width=1126) (actual\ntime=43.648..564.798 rows=200 loops=1)\n -> Index Scan using source_id_id_idx on records\n(cost=0.00..1590267.66 rows=419868 width=1126) (actual\ntime=43.631..564.700 rows=200 loops=1)\n Index Cond: ((source_id)::text =\n'http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml'::text)\nTotal runtime: 564.895 ms\n\n\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Mon, 3 Oct 2011 16:28:56 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow - wrong\n index used"
},
{
"msg_contents": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]> writes:\n> When I perform query such as this: \"select * from records where source_id = 'XXX' order by id limit 200;\" I expect DB to use index source_id_id_idx with XXX as filter. It is true for all but one values of XXX - when I ask for records with most common source_id, records_pkey index is used instead and performance is terrible! Explain analyze results below.\n\nI'm thinking it probably sees the pkey index as cheaper because that's\nhighly correlated with the physical order of the table. (It would be\nuseful to see pg_stats.correlation for these columns.) With a\nsufficiently unselective filter, scanning in pkey order looks cheaper\nthan scanning in source_id order.\n\nIf so, what you probably need to do to get the estimates more in line\nwith reality is to reduce random_page_cost. That will reduce the\nassumed penalty for non-physical-order scanning.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 03 Oct 2011 11:12:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow - wrong index used "
},
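Tom's random_page_cost suggestion can be tried per session before touching postgresql.conf; a minimal sketch, with 2.0 only as a starting value to experiment from:

    -- lower the assumed penalty for reading pages out of physical order, this session only
    SET random_page_cost = 2.0;
    EXPLAIN ANALYZE
    SELECT * FROM records
    WHERE source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'
    ORDER BY id LIMIT 200;
    RESET random_page_cost;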
{
"msg_contents": "2011/10/3 Nowak Michał <[email protected]>:\n\n> Some info about data distrubution:\n>\n> a9-dev=> select min(id) from records;\n> min\n> --------\n> 190830\n> (1 row)\n>\n> a9-dev=> select min(id), max(id) from records where source_id='http://ebuw.uw.edu.pl/dlibra/oai-pmh-repository.xml';\n> min | max\n> ---------+---------\n> 1105217 | 3811326\n> (1 row)\n> a9-dev=> select min(id), max(id) from records where source_id='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml';\n> min | max\n> ---------+---------\n> 1544991 | 3811413\n> (1 row)\n\nPG assumes that the \"wbc.poznan.pl\" rows are all over the range of\nids, which seems not to be the case. There is no sense of cross-column\ncorrelation in the planner currently.\n\nYou are going to have to resort to some more or less cute hacks, like\nmaking an index on (source_id, id - 1) and doing \"... order by\nsource_id, id - 1\" .\n\nGreetings\nMarcin Mańk\n",
"msg_date": "Tue, 4 Oct 2011 00:45:47 +0200",
"msg_from": "=?UTF-8?B?TWFyY2luIE1hxYRr?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow - wrong\n index used"
},
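Spelled out, the hack Marcin describes looks like the sketch below. Because the ORDER BY is on the expression id - 1 rather than on the plain id column, records_pkey no longer matches the requested ordering, so the planner is pushed toward the expression index, which also matches the filter:

    CREATE INDEX records_source_id_idm1_idx ON records (source_id, (id - 1));
    SELECT *
    FROM records
    WHERE source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'
    ORDER BY source_id, id - 1
    LIMIT 200;

The index name here is made up; the follow-up later in the thread runs essentially this experiment against the real table.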
{
"msg_contents": "Wiadomość napisana przez Tom Lane w dniu 3 paź 2011, o godz. 17:12:\n\n> I'm thinking it probably sees the pkey index as cheaper because that's\n> highly correlated with the physical order of the table. (It would be\n> useful to see pg_stats.correlation for these columns.) With a\n> sufficiently unselective filter, scanning in pkey order looks cheaper\n> than scanning in source_id order.\n\na9-dev=> select attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename = 'records';\n attname | null_frac | avg_width | n_distinct | correlation \n--------------------------------------+-----------+-----------+------------+-------------\n id | 0 | 8 | -1 | 0.932887\n last_processing_date | 0.886093 | 8 | 38085 | 0.427959\n object_id | 0 | 27 | -0.174273 | 0.227186\n processing_path | 0 | 14 | 14 | 0.970166\n schema_id | 0 | 17 | 68 | 0.166175\n delete_date | 0.999897 | 8 | 29 | 0.63629\n data | 0 | 949 | -0.267811 | 0.158279\n checksum | 0 | 33 | -0.267495 | 0.0269071\n source_id | 0 | 54 | 69 | 0.303059\n source_object_last_modification_date | 0 | 8 | 205183 | 0.137143\n(10 rows)\n\n\n> If so, what you probably need to do to get the estimates more in line\n> with reality is to reduce random_page_cost. That will reduce the\n> assumed penalty for non-physical-order scanning.\n\nI'll try that.\n\nRegards,\nMichal Nowak",
"msg_date": "Tue, 04 Oct 2011 09:10:27 +0200",
"msg_from": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with order by and limit is very slow - wrong index\n used"
},
{
"msg_contents": "2011/10/4 Nowak Michał <[email protected]>:\n>\n> a9-dev=> select attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename = 'records';\n> attname | null_frac | avg_width | n_distinct | correlation\n> --------------------------------------+-----------+-----------+------------+-------------\n> source_id | 0 | 54 | 69 | 0.303059\n\nhttp://www.postgresql.org/docs/9.0/interactive/view-pg-stats.html\n\n\"Statistical correlation between physical row ordering and logical\nordering of the column values. This ranges from -1 to +1. When the\nvalue is near -1 or +1, an index scan on the column will be estimated\nto be cheaper than when it is near zero, due to reduction of random\naccess to the disk. (This column is null if the column data type does\nnot have a < operator.)\"\n\nKind of like I and Tom said, 0.3 correlation there sounds like the cause.\nSeriously, try normalisation as well, before discarding it.\n\n\n-- \nGJ\n",
"msg_date": "Tue, 4 Oct 2011 09:08:13 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow - wrong\n index used"
},
{
"msg_contents": "Lowering random_page_cost didn't help -- I've tried values 2.0 and 1.5.\n\nThen I tried \"order by id -1\" hack Marcin Mańk proposed...\n\na9-dev=> create index foo on records(source_id, (id - 1));\nCREATE INDEX\na9-dev=> explain analyze select * from records where source_id ='http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by (id -1) limit 200;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..379.42 rows=200 width=1124) (actual time=0.137..255.283 rows=200 loops=1)\n -> Index Scan using foo on records (cost=0.00..1864617.14 rows=982887 width=1124) (actual time=0.137..255.237 rows=200 loops=1)\n Index Cond: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)\nTotal runtime: 255.347 ms\n(4 rows)\n\nSignificant improvement :)\n\nAs we can see, it is possible to query records fast without changing table structure. Question is: can I do it without \"hacks\"? \n\nMichal Nowak\n\n\n",
"msg_date": "Tue, 04 Oct 2011 10:55:18 +0200",
"msg_from": "=?iso-8859-2?Q?Nowak_Micha=B3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with order by and limit is very slow - wrong index\n used"
},
{
"msg_contents": "Nowak Micha*<[email protected]> wrote:\n \n> Lowering random_page_cost didn't help -- I've tried values 2.0 and\n> 1.5.\n \nFirst off, I don't remember you saying how much RAM is on the\nsystem, but be sure to set effective_cache_size to the sum of your\nshared_buffers and OS cache. I've often found that the optimizer\nundervalues cpu_tuple_cost; try setting that to 0.05. Then,\ndepending on how well cached the active portion of your database is,\nyou may want to drop your random_page_cost down close to or equal to\nseq_page_cost. If your cache hit rate is high enough, you might\nwant to drop *both* seq_page_cost and random_page_cost to something\nas low as 0.1 or even 0.05.\n \nThe objective is to model the actual costs of your workload against\nyour data on your hardware. Sometimes that takes a bit of\ntinkering.\n \n-Kevin\n",
"msg_date": "Tue, 04 Oct 2011 18:33:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with order by and limit is very slow -\n\t wrong index used"
}
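The knobs Kevin lists can be tested together in one session; every number below is illustrative (in particular, effective_cache_size has to be derived from the actual shared_buffers plus OS cache on the production box, which the thread does not state):

    -- model a mostly-cached database for this session, then re-check the plan
    SET effective_cache_size = '48GB';   -- assumption: shared_buffers + observed OS cache
    SET cpu_tuple_cost = 0.05;
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;          -- equal page costs: pages assumed to come from cache
    EXPLAIN ANALYZE
    SELECT * FROM records
    WHERE source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'
    ORDER BY id LIMIT 200;

If the source_id_id_idx plan wins under these costs and the timings hold up under the real workload, the same values can be made permanent in postgresql.conf.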
] |
[
{
"msg_contents": "How can i get record by data block not by sql?\r\n \r\n I want to read and write lots of data by data blocks and write record to a appointed data block and read it.\r\n so i can form a disk-resident tree by recording the block address. But i don't know how to implement in postgresql. \r\n Is there system function can do this? \r\n Can someone help me?? Thank you very very much!!!!1\nHow can i get record by data block not by sql?\n \nI want to read and write lots of data by data blocks and write record to a appointed data block and read it.\nso i can form a disk-resident tree by recording the block address. But i don't know how to implement in postgresql. \nIs there system function can do this? \nCan someone help me?? Thank you very very much!!!!1",
"msg_date": "Mon, 3 Oct 2011 21:52:07 +0800",
"msg_from": "\"=?gbk?B?varNtw==?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can i get record by data block not by sql?"
},
{
"msg_contents": "On Oct 3, 2011, at 6:52 AM, 姜头 wrote:\n\n> How can i get record by data block not by sql?\n> \n> I want to read and write lots of data by data blocks and write record to a appointed data block and read it.\n> so i can form a disk-resident tree by recording the block address. But i don't know how to implement in postgresql.\n> Is there system function can do this? \n> Can someone help me?? Thank you very very much!!!!1\n\nIt sounds like you should look into the COPY command, or, if you're adventurous, the pg_bulkload project. They might get you the speed you're after, if not quite the implementation. But if what you're really after is to manipulate the table files directly - and I'm not sure why that would be a goal in itself - then perhaps SQL isn't for you.",
"msg_date": "Mon, 3 Oct 2011 07:29:16 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can i get record by data block not by sql?"
}
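To make Ben's pointers a little more concrete: bulk loads normally go through COPY rather than row-at-a-time SQL, and the closest thing PostgreSQL exposes to block-level addressing from SQL is the ctid system column (block number, line pointer) plus the pageinspect contrib module. The table name and file path below are placeholders:

    -- bulk-load rows from a flat file in one pass
    COPY records_load FROM '/tmp/records.csv' WITH (FORMAT csv);

    -- every row has a physical address of the form (block number, line pointer)
    SELECT ctid, * FROM records_load WHERE ctid = '(0,1)';

    -- low-level view of one heap block (on 9.1: CREATE EXTENSION pageinspect first)
    SELECT * FROM heap_page_items(get_raw_page('records_load', 0));

Note that ctid values are not stable across updates or VACUUM FULL, so they work as short-lived physical addresses, not as the persistent block pointers a disk-resident tree would need.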
] |
[
{
"msg_contents": "Hi, \n\n \n\nI need help to understand the issue on a productive database for a select\nthat takes more time than expected.\n\n \n\n1- On a development database I ran the query (select) and I can see on\nExplain Analyze pgAdmin use all the indexes and primary keys defined. Dev db\nhas almost 10% of productive data.\n\n2- On productive database the same query on Explain Analyze from\npgAdmin shows a diferent planning, not using a pkey index and instead uses\nthe progressive scan on a million of rows. I can see Primary key is defined\nfor the table on pgAdmin.\n\n \n\nWhat could be the issue on a productive for not use the pkey index for the\nSELECT?\n\n \n\nCristian C. Bittel\n\n \n\n\nHi, I need help to understand the issue on a productive database for a select that takes more time than expected. 1- On a development database I ran the query (select) and I can see on Explain Analyze pgAdmin use all the indexes and primary keys defined. Dev db has almost 10% of productive data.2- On productive database the same query on Explain Analyze from pgAdmin shows a diferent planning, not using a pkey index and instead uses the progressive scan on a million of rows. I can see Primary key is defined for the table on pgAdmin. What could be the issue on a productive for not use the pkey index for the SELECT? Cristian C. Bittel",
"msg_date": "Mon, 3 Oct 2011 14:48:10 -0300",
"msg_from": "\"Soporte @ TEKSOL S.A.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pkey is not used on productive database"
},
{
"msg_contents": "On Mon, Oct 03, 2011 at 02:48:10PM -0300, Soporte @ TEKSOL S.A. wrote:\n> Hi, \n> \n> \n> \n> I need help to understand the issue on a productive database for a select\n> that takes more time than expected.\n> \n> \n> \n> 1- On a development database I ran the query (select) and I can see on\n> Explain Analyze pgAdmin use all the indexes and primary keys defined. Dev db\n> has almost 10% of productive data.\n> \n> 2- On productive database the same query on Explain Analyze from\n> pgAdmin shows a diferent planning, not using a pkey index and instead uses\n> the progressive scan on a million of rows. I can see Primary key is defined\n> for the table on pgAdmin.\n> \n> \n> \n> What could be the issue on a productive for not use the pkey index for the\n> SELECT?\n\ngeneral answer:\nhttp://www.depesz.com/index.php/2010/09/09/why-is-my-index-not-being-used/\n\nbut for more details, we'd need to see the query and \\d of the table.\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n",
"msg_date": "Tue, 4 Oct 2011 14:55:36 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pkey is not used on productive database"
},
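A sketch of the information being asked for here, runnable in psql against the production database; the table name and query are placeholders for the poster's real ones:

    \d your_table
    EXPLAIN ANALYZE SELECT * FROM your_table WHERE some_column = 'some value';
    -- also worth a quick look: are the production planner statistics current?
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'your_table';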
{
"msg_contents": "On Mon, Oct 3, 2011 at 11:48 AM, Soporte @ TEKSOL S.A.\n<[email protected]> wrote:\n> Hi,\n>\n> I need help to understand the issue on a productive database for a select\n> that takes more time than expected.\n\n> 1- On a development database I ran the query (select) and I can see on\n> Explain Analyze pgAdmin use all the indexes and primary keys defined. Dev db\n> has almost 10% of productive data.\n>\n> 2- On productive database the same query on Explain Analyze from\n> pgAdmin shows a diferent planning, not using a pkey index and instead uses\n> the progressive scan on a million of rows. I can see Primary key is defined\n> for the table on pgAdmin.\n>\n> What could be the issue on a productive for not use the pkey index for the\n> SELECT?\n\nPlease post the schema, query and explain analyze output of the runs\non both machines.\n",
"msg_date": "Tue, 4 Oct 2011 07:01:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pkey is not used on productive database"
}
] |
[
{
"msg_contents": "I have the following setup:\n\ncreate table test(id integer, seq integer);\ninsert into test select generate_series(0, 100), generate_series(0, 1000);\ncreate unique index test_idx on test(id, seq);\nanalyze test;\n\nNow I try to fetch the latest 5 values per id, ordered by seq from the \ntable:\n\nselect * from (\nselect id, seq, row_number() over (partition by id order by seq)\n from test\n where id in (1, 2, 3)\n) where row_number() <= 5;\n\nThis does not use the index on id, seq for sorting the data. It uses a \nbitmap index scan, and sequential scan when issued SET enable_bitmapscan \nto false. Tested both on git head and 8.4.8. See end of post for plans. \nIt seems it would be possible to fetch the first values per id using the \nindex, or at least skip the sorting.\n\nIf I emulate the behavior I want by using:\n (select id, seq from test where id = 1 order by seq limit 5)\nunion\n (select id, seq from test where id = 2 order by seq limit 5)\nunion\n (select id, seq from test where id = 2 order by seq limit 5);\nI get two orders of magnitude faster execution.\n\nIs there some other way to run the query so that it would use the index? \nIs there plans to support the index usage for the above query assuming \nthat it is possible to use the index for that query?\n\nThe real world use case would be to fetch latest 5 threads per \ndiscussion forum in one query. Or fetch 3 latest comments for all \npatches in given commit fest in single query.\n\n - Anssi Kääriäinen\n\nNormal plan (by the way, note the wildly inaccurate topmost row estimate):\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\nSubquery Scan on tmp (cost=711.65..805.05 rows=958 width=16) (actual \ntime=10.543..27.469 rows=15 loops=1)\n Filter: (tmp.row_number <= 5)\n -> WindowAgg (cost=711.65..769.13 rows=2874 width=8) (actual \ntime=10.537..23.551 rows=3003 loops=1)\n -> Sort (cost=711.65..718.83 rows=2874 width=8) (actual \ntime=10.528..13.798 rows=3003 loops=1)\n Sort Key: test.id, test.seq\n Sort Method: quicksort Memory: 182kB\n -> Bitmap Heap Scan on test (cost=59.04..546.55 \nrows=2874 width=8) (actual time=0.580..4.750 rows=3003 loops=1)\n Recheck Cond: (id = ANY ('{1,2,3}'::integer[]))\n -> Bitmap Index Scan on test_idx \n(cost=0.00..58.32 rows=2874 width=0) (actual time=0.490..0.490 rows=3003 \nloops=1)\n Index Cond: (id = ANY ('{1,2,3}'::integer[]))\n Total runtime: 27.531 ms\n(11 rows)\n\nPlan with set enable_bitmapscan set to off:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on tmp (cost=2003.23..2096.64 rows=958 width=16) \n(actual time=32.907..47.279 rows=15 loops=1)\n Filter: (tmp.row_number <= 5)\n -> WindowAgg (cost=2003.23..2060.71 rows=2874 width=8) (actual \ntime=32.898..44.053 rows=3003 loops=1)\n -> Sort (cost=2003.23..2010.42 rows=2874 width=8) (actual \ntime=32.883..36.067 rows=3003 loops=1)\n Sort Key: test.id, test.seq\n Sort Method: quicksort Memory: 182kB\n -> Seq Scan on test (cost=0.00..1838.14 rows=2874 \nwidth=8) (actual time=0.017..26.733 rows=3003 loops=1)\n Filter: (id = ANY ('{1,2,3}'::integer[]))\n Total runtime: 47.334 ms\n(9 rows)\n\nThe UNION approach:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=28.47..28.62 rows=15 width=8) (actual 
\ntime=0.176..0.194 rows=15 loops=1)\n -> Append (cost=0.00..28.40 rows=15 width=8) (actual \ntime=0.026..0.149 rows=15 loops=1)\n -> Limit (cost=0.00..9.42 rows=5 width=8) (actual \ntime=0.024..0.045 rows=5 loops=1)\n -> Index Scan using test_idx on test \n(cost=0.00..1820.87 rows=967 width=8) (actual time=0.022..0.034 rows=5 \nloops=1)\n Index Cond: (id = 1)\n -> Limit (cost=0.00..9.42 rows=5 width=8) (actual \ntime=0.017..0.034 rows=5 loops=1)\n -> Index Scan using test_idx on test \n(cost=0.00..1820.87 rows=967 width=8) (actual time=0.015..0.023 rows=5 \nloops=1)\n Index Cond: (id = 2)\n -> Limit (cost=0.00..9.42 rows=5 width=8) (actual \ntime=0.021..0.037 rows=5 loops=1)\n -> Index Scan using test_idx on test \n(cost=0.00..1820.87 rows=967 width=8) (actual time=0.019..0.026 rows=5 \nloops=1)\n Index Cond: (id = 3)\n Total runtime: 0.258 ms\n(12 rows)\n\n\n\n\n",
"msg_date": "Tue, 4 Oct 2011 12:39:59 +0300",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Window functions and index usage"
},
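On releases newer than the 8.4/9.1 discussed in this thread (PostgreSQL 9.3 and later), a LATERAL subquery can express the same per-id "order by ... limit 5" access path that the hand-written UNION achieves, one ordered index scan per id. A sketch against the test table above; alias names are illustrative:

    SELECT ids.id, t.seq
    FROM (VALUES (1), (2), (3)) AS ids(id)
    CROSS JOIN LATERAL (
        SELECT seq
        FROM test
        WHERE test.id = ids.id   -- each branch can use the (id, seq) index with LIMIT 5
        ORDER BY seq
        LIMIT 5
    ) AS t;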
{
"msg_contents": "On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen\n<[email protected]> wrote:\n> I have the following setup:\n>\n> create table test(id integer, seq integer);\n> insert into test select generate_series(0, 100), generate_series(0, 1000);\n> create unique index test_idx on test(id, seq);\n> analyze test;\n>\n> Now I try to fetch the latest 5 values per id, ordered by seq from the\n> table:\n>\n> select * from (\n> select id, seq, row_number() over (partition by id order by seq)\n> from test\n> where id in (1, 2, 3)\n> ) where row_number() <= 5;\n\nIt seems this fetches the *first* 5 values per id - and not the\nlatest. For that you would need to \"order by seq desc\" in the window\nand probably also in the index.\n\n> This does not use the index on id, seq for sorting the data. It uses a\n> bitmap index scan, and sequential scan when issued SET enable_bitmapscan to\n> false. Tested both on git head and 8.4.8. See end of post for plans. It\n> seems it would be possible to fetch the first values per id using the index,\n> or at least skip the sorting.\n\nJust guessing: since row_number is an analytic function and it can be\ncombined with any type of windowing only in \"rare\" cases do the\nordering of index columns and the ordering in the window align. So\nwhile your particular use case could benefit from this optimization\nthe overall judgement might be that it's not worthwhile to make the\noptimizer more complex to cover this case - or I fail to see the more\ngeneral pattern here. :-)\n\n> Is there some other way to run the query so that it would use the index? Is\n> there plans to support the index usage for the above query assuming that it\n> is possible to use the index for that query?\n>\n> The real world use case would be to fetch latest 5 threads per discussion\n> forum in one query. Or fetch 3 latest comments for all patches in given\n> commit fest in single query.\n\nIs it really that realistic that someone wants the latest n entries\nfor *all* threads / patches? It seems since this can result in a very\nlarge data set this is probably not the type of query which is done\noften.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 4 Oct 2011 15:27:49 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Window functions and index usage"
},
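For the "latest N" variant Robert describes, the window ordering becomes "order by seq desc"; note that for a single id the existing (id, seq) index can already be scanned backwards, so an explicitly descending index is optional. If one is wanted, a sketch (index name is illustrative; DESC index keys require 8.3 or later):

    CREATE UNIQUE INDEX test_id_seq_desc_idx ON test (id, seq DESC);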
{
"msg_contents": "On 10/04/2011 04:27 PM, Robert Klemme wrote:\n> On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen\n> <[email protected]> wrote:\n>> I have the following setup:\n>>\n>> create table test(id integer, seq integer);\n>> insert into test select generate_series(0, 100), generate_series(0, 1000);\n>> create unique index test_idx on test(id, seq);\n>> analyze test;\n>>\n>> Now I try to fetch the latest 5 values per id, ordered by seq from the\n>> table:\n>>\n>> select * from (\n>> select id, seq, row_number() over (partition by id order by seq)\n>> from test\n>> where id in (1, 2, 3)\n>> ) where row_number()<= 5;\n> It seems this fetches the *first* 5 values per id - and not the\n> latest. For that you would need to \"order by seq desc\" in the window\n> and probably also in the index.\n>\nYeah. Sorry, wrong order. And the last line is wrong, it should be \") \ntmp where row_number <= 5;\".\n>> This does not use the index on id, seq for sorting the data. It uses a\n>> bitmap index scan, and sequential scan when issued SET enable_bitmapscan to\n>> false. Tested both on git head and 8.4.8. See end of post for plans. It\n>> seems it would be possible to fetch the first values per id using the index,\n>> or at least skip the sorting.\n> Just guessing: since row_number is an analytic function and it can be\n> combined with any type of windowing only in \"rare\" cases do the\n> ordering of index columns and the ordering in the window align. So\n> while your particular use case could benefit from this optimization\n> the overall judgement might be that it's not worthwhile to make the\n> optimizer more complex to cover this case - or I fail to see the more\n> general pattern here. :-)\n>\nI think there are common use cases for this - see end of message for an \nexample.\n>> Is there some other way to run the query so that it would use the index? Is\n>> there plans to support the index usage for the above query assuming that it\n>> is possible to use the index for that query?\n>>\n>> The real world use case would be to fetch latest 5 threads per discussion\n>> forum in one query. Or fetch 3 latest comments for all patches in given\n>> commit fest in single query.\n> Is it really that realistic that someone wants the latest n entries\n> for *all* threads / patches? It seems since this can result in a very\n> large data set this is probably not the type of query which is done\n> often.\nThe idea is that the dataset isn't that large. And you don't have to \nfetch them for all threads / patches. You might fetch them only for \npatches in currently viewed commit fest. See \nhttps://commitfest.postgresql.org/action/commitfest_view?id=12 for one \nsuch use. What I have in mind is fetching first all the patches in the \ncommit fest in one go. Then issue query which would look something like:\n select * from\n (select comment_data, row_number() over (partition by patch_id \norder by comment_date desc)\n from patch_comments\n where patch_id in (list of patch_ids fetched in first query)\n ) tmp where row_number <= 3;\n\nNow you have all the data needed for the first column in the above \nmentioned page.\n\nI guess the commit fest application is fetching all the comments for the \npatches in the commit fest in one query, and then doing the limit in \napplication code. This is fine because there aren't that many comments \nper patch. 
But if you have dozens of forums and thousands of threads per \nforum you can't do that.\n\nThis is useful in any situation where you want to show n latest entries \ninstead of the last entry in a list view. Latest modifications to an \nobject, latest commits to a file, latest messages to a discussion thread \nor latest payments per project. Or 5 most popular videos per category, \n10 highest paid employees per department and so on.\n\n - Anssi Kääriäinen\n",
"msg_date": "Tue, 4 Oct 2011 17:06:45 +0300",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Window functions and index usage"
},
{
"msg_contents": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]> writes:\n> I have the following setup:\n> create table test(id integer, seq integer);\n> insert into test select generate_series(0, 100), generate_series(0, 1000);\n> create unique index test_idx on test(id, seq);\n> analyze test;\n\n> Now I try to fetch the latest 5 values per id, ordered by seq from the \n> table:\n\n> select * from (\n> select id, seq, row_number() over (partition by id order by seq)\n> from test\n> where id in (1, 2, 3)\n> ) where row_number() <= 5;\n\n> This does not use the index on id, seq for sorting the data. It uses a \n> bitmap index scan, and sequential scan when issued SET enable_bitmapscan \n> to false.\n\nThe cost estimates I get are 806 for bitmap scan and sort, 2097 for\nseqscan and sort, 4890 for indexscan without sort. It *can* use the\nindex for that query ... it just doesn't think it's a good idea. It's\nprobably right, too. At least, the actual runtimes go in the same order\non my machine. Seqscan-and-sort very often beats an indexscan for\nsorting a table, unless the table is clustered on the index or nearly so.\n\nNote that it cannot use the index for both ordering and satisfying the\nIN condition. If it used the =ANY clause as an index condition, what\nthat would imply is three separate index searches and so the results\nwouldn't necessarily be correctly ordered. This is why the plain\nindexscan costs out so expensive: it's a full-table scan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 04 Oct 2011 10:36:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Window functions and index usage "
},
{
"msg_contents": "On Tue, Oct 4, 2011 at 4:06 PM, Anssi Kääriäinen\n<[email protected]> wrote:\n> On 10/04/2011 04:27 PM, Robert Klemme wrote:\n>>\n>> On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen\n>> <[email protected]> wrote:\n>>>\n>>> I have the following setup:\n>>>\n>>> create table test(id integer, seq integer);\n>>> insert into test select generate_series(0, 100), generate_series(0,\n>>> 1000);\n>>> create unique index test_idx on test(id, seq);\n>>> analyze test;\n>>>\n>>> Now I try to fetch the latest 5 values per id, ordered by seq from the\n>>> table:\n>>>\n>>> select * from (\n>>> select id, seq, row_number() over (partition by id order by seq)\n>>> from test\n>>> where id in (1, 2, 3)\n>>> ) where row_number()<= 5;\n>>\n>> It seems this fetches the *first* 5 values per id - and not the\n>> latest. For that you would need to \"order by seq desc\" in the window\n>> and probably also in the index.\n>>\n> Yeah. Sorry, wrong order. And the last line is wrong, it should be \") tmp\n> where row_number <= 5;\".\n>>>\n>>> This does not use the index on id, seq for sorting the data. It uses a\n>>> bitmap index scan, and sequential scan when issued SET enable_bitmapscan\n>>> to\n>>> false. Tested both on git head and 8.4.8. See end of post for plans. It\n>>> seems it would be possible to fetch the first values per id using the\n>>> index,\n>>> or at least skip the sorting.\n>>\n>> Just guessing: since row_number is an analytic function and it can be\n>> combined with any type of windowing only in \"rare\" cases do the\n>> ordering of index columns and the ordering in the window align. So\n>> while your particular use case could benefit from this optimization\n>> the overall judgement might be that it's not worthwhile to make the\n>> optimizer more complex to cover this case - or I fail to see the more\n>> general pattern here. :-)\n>>\n> I think there are common use cases for this - see end of message for an\n> example.\n>>>\n>>> Is there some other way to run the query so that it would use the index?\n>>> Is\n>>> there plans to support the index usage for the above query assuming that\n>>> it\n>>> is possible to use the index for that query?\n>>>\n>>> The real world use case would be to fetch latest 5 threads per discussion\n>>> forum in one query. Or fetch 3 latest comments for all patches in given\n>>> commit fest in single query.\n>>\n>> Is it really that realistic that someone wants the latest n entries\n>> for *all* threads / patches? It seems since this can result in a very\n>> large data set this is probably not the type of query which is done\n>> often.\n>\n> The idea is that the dataset isn't that large.\n\nBut then why do require using the second index column in the first\nplace? If the data set is small then the query is likely fast if the\nselection via id can use any index.\n\n> And you don't have to fetch\n> them for all threads / patches. You might fetch them only for patches in\n> currently viewed commit fest. See\n> https://commitfest.postgresql.org/action/commitfest_view?id=12 for one such\n> use. What I have in mind is fetching first all the patches in the commit\n> fest in one go. 
Then issue query which would look something like:\n> select * from\n> (select comment_data, row_number() over (partition by patch_id order by\n> comment_date desc)\n> from patch_comments\n> where patch_id in (list of patch_ids fetched in first query)\n> ) tmp where row_number <= 3;\n\nInteresting: I notice that I the query cannot successfully be simplified on 8.4:\n\nrklemme=> select *,\nrow_number() over (partition by id order by seq desc) as rn\nfrom test\nwhere id in (1,2,3)\nand rn <= 3\n;\nERROR: column \"rn\" does not exist\nLINE 5: and rn <= 3\n ^\nrklemme=> select *,\nrow_number() over (partition by id order by seq desc) as rn\nfrom test\nwhere id in (1,2,3)\nand row_number() <= 3;\nERROR: window function call requires an OVER clause\nLINE 5: and row_number() <= 3;\n ^\nrklemme=> select *,\nrow_number() over (partition by id order by seq desc) as rn\nfrom test\nwhere id in (1,2,3)\nand row_number() over (partition by id order by seq desc) <= 3;\nERROR: window functions not allowed in WHERE clause\nLINE 5: and row_number() over (partition by id order by seq desc) <=...\n ^\nrklemme=>\n\nI think I need to switch to 9.1 soon. :-)\n\n> Now you have all the data needed for the first column in the above mentioned\n> page.\n>\n> I guess the commit fest application is fetching all the comments for the\n> patches in the commit fest in one query, and then doing the limit in\n> application code. This is fine because there aren't that many comments per\n> patch. But if you have dozens of forums and thousands of threads per forum\n> you can't do that.\n>\n> This is useful in any situation where you want to show n latest entries\n> instead of the last entry in a list view. Latest modifications to an object,\n> latest commits to a file, latest messages to a discussion thread or latest\n> payments per project. Or 5 most popular videos per category, 10 highest paid\n> employees per department and so on.\n\nAgain, what is easy for you as a human will likely be quite complex\nfor the optimizer (knowing that the order by and the row_number output\nalign).\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 4 Oct 2011 16:52:20 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Window functions and index usage"
},
{
"msg_contents": "On 10/04/2011 05:36 PM, Tom Lane wrote:\n> The cost estimates I get are 806 for bitmap scan and sort, 2097 for\n> seqscan and sort, 4890 for indexscan without sort. It *can* use the\n> index for that query ... it just doesn't think it's a good idea. It's\n> probably right, too. At least, the actual runtimes go in the same order\n> on my machine. Seqscan-and-sort very often beats an indexscan for\n> sorting a table, unless the table is clustered on the index or nearly so.\nI tested it and yes, it can use the index scan. But not in the way I \nthough it would be used.\n> Note that it cannot use the index for both ordering and satisfying the\n> IN condition. If it used the =ANY clause as an index condition, what\n> that would imply is three separate index searches and so the results\n> wouldn't necessarily be correctly ordered. This is why the plain\n> indexscan costs out so expensive: it's a full-table scan.\nThis I don't understand. I would imagine it would be possible to execute \nthis query as get 5 first values for id 1, get 5 first values for id 2, \nget 5 first values for id 3. At least if I do this by hand using UNION I \nget two orders of magnitude faster execution time. I am not saying it \nwould be easy to do that, but to me it seems it would be possible to use \nthe index more efficiently for the example query. Or is the following \nUNION query not equivalent to the window function query, assuming I am \nnot interested in the row_number column itself?\n\n(select id, seq from test where id = 1 order by seq limit 5)\nunion\n(select id, seq from test where id = 2 order by seq limit 5)\nunion\n(select id, seq from test where id = 3 order by seq limit 5);\n\nThe results are in different order, but there is no order by in the \noriginal query except in the OVER clause, so it should not matter.\n\n - Anssi Kääriäinen\n",
"msg_date": "Tue, 4 Oct 2011 18:00:55 +0300",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Window functions and index usage"
},
{
"msg_contents": "On 10/04/2011 05:52 PM, Robert Klemme wrote:\n> But then why do require using the second index column in the first\n> place? If the data set is small then the query is likely fast if the\n> selection via id can use any index.\nI mean the fetched dataset is not large, I didn't mean the dataset in \ntotal isn't large. Imagine the commit fest application, but with 10000 \ncomments per patch. You want to fetch the 100 patches in the current \ncommit fest, and 3 latest comments per patch.\n>> And you don't have to fetch\n>> them for all threads / patches. You might fetch them only for patches in\n>> currently viewed commit fest. See\n>> https://commitfest.postgresql.org/action/commitfest_view?id=12 for one such\n>> use. What I have in mind is fetching first all the patches in the commit\n>> fest in one go. Then issue query which would look something like:\n>> select * from\n>> (select comment_data, row_number() over (partition by patch_id order by\n>> comment_date desc)\n>> from patch_comments\n>> where patch_id in (list of patch_ids fetched in first query)\n>> ) tmp where row_number<= 3;\n> Interesting: I notice that I the query cannot successfully be simplified on 8.4:\n>\n> rklemme=> select *,\n> row_number() over (partition by id order by seq desc) as rn\n> from test\n> where id in (1,2,3)\n> and rn<= 3\n> ;\nThat can't be done, where conditions targeting window functions must be \ndone using subquery. There is no difference in 9.1 as far as I know.\n\n> Again, what is easy for you as a human will likely be quite complex\n> for the optimizer (knowing that the order by and the row_number output\n> align).\nI am not trying to say it is easy.\n\n - Anssi\n",
"msg_date": "Tue, 4 Oct 2011 18:05:10 +0300",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Window functions and index usage"
}
] |
[
{
"msg_contents": "Hello Everyone,\n\nGenerally when it comes to query performance, I check how the vacuuming and\nstatistics collection is performed on Tables and Indexes hit by the query.\n\nApart from the above i check the code logic ( for any bad joins ) and column\nstatistics as well.\n\nI got hold of two catalog tables \"pg_stats\" and \"pg_class\".\n\nColumn \"avg_width\" and \"distinct\" in pg_stats gets me lot of sensible\ninformation regarding, column values and size of the column.\n\nCan someone help me know when the values in these columns are bound to\nchange ? Is it only when ANALYZE runs ?\n\nI am about to calculate lot of metrics depending on above values. Please\nhelp !\n\nThanks\nVenkat\n\nHello Everyone,Generally when it comes to query performance, I check how the vacuuming and statistics collection is performed on Tables and Indexes hit by the query.Apart from the above i check the code logic ( for any bad joins ) and column statistics as well.\nI got hold of two catalog tables \"pg_stats\" and \"pg_class\".Column \"avg_width\" and \"distinct\" in pg_stats gets me lot of sensible information regarding, column values and size of the column.\nCan someone help me know when the values in these columns are bound to change ? Is it only when ANALYZE runs ?I am about to calculate lot of metrics depending on above values. Please help !\nThanksVenkat",
"msg_date": "Tue, 4 Oct 2011 15:55:17 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": Column Performance in a query"
},
{
"msg_contents": "Hi,\n\nOn 4 October 2011 21:25, Venkat Balaji <[email protected]> wrote:\n> I got hold of two catalog tables \"pg_stats\" and \"pg_class\".\n> Column \"avg_width\" and \"distinct\" in pg_stats gets me lot of sensible\n> information regarding, column values and size of the column.\n> Can someone help me know when the values in these columns are bound to\n> change ? Is it only when ANALYZE runs ?\n\nyes, ANALYZE updates underlaying pg_statistic table.\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Wed, 5 Oct 2011 09:26:08 +1100",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Column Performance in a query"
}
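A sketch of how to read those statistics and check when they were last refreshed; the table name here is illustrative, and note that the pg_stats column is n_distinct. avg_width and n_distinct change only when ANALYZE runs, whether started manually or by autovacuum:

    SELECT attname, avg_width, n_distinct
    FROM pg_stats
    WHERE schemaname = 'public' AND tablename = 'my_table';

    -- when did the last (auto)ANALYZE run on the table?
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'my_table';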
] |
[
{
"msg_contents": "Hello,\n\nSorry. I should have put some more details in the email.\n\nI have got a situation where in i see the production system is loaded with\nthe checkpoints and at-least 1000+ buffers are being written for every\ncheckpoint.\n\nCheckpoint occurs every 3 to 4 minutes and every checkpoint takes 150\nseconds minimum to write off the buffers and 150+ seconds for checkpoint\nsyncing. A warning messages can be seen in the dbserver logs \"checkpoint\noccuring too frequently\".\n\nI had a look at the pg_stat_bgwriter as well. Below is what i see.\n\n select * from pg_stat_bgwriter;\n\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\nmaxwritten_clean | buffers_backend | buffers_alloc\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------------------------------------------------------\n 9785 | 36649 | 493002109 |\n282600872 | 1276056 | 382124461 | 7417638175\n(1 row)\n\nI am thinking of increasing the checkpoint_segments.\n\nBelow are our current settings -\n\ncheckpoint_segments = 8\ncheckpoint_timeout = 5 mins\ncheckpoint_completion_target = 0.5\nbgwriter_delay = 200ms\nbgwriter_lru_maxpages = 100\nbgwriter_lru_multiplier = 2\n\nLooking forward for suggestions.\n\nThanks\nVB\n\n\n\n\nOn Thu, Sep 29, 2011 at 12:40 PM, Venkat Balaji <[email protected]>wrote:\n\n> Hello Everyone,\n>\n> We are experience a huge drop in performance for one of our production\n> servers.\n>\n> I suspect this is because of high IO due to frequent Checkpoints. Attached\n> is the excel sheet with checkpoint information we tracked.\n>\n> Below is the configuration we have\n>\n> checkpoint_segments = default\n> checkpoint_timeout = default\n>\n> I suspect archive data generation to be around 250 MB.\n>\n> Please share your thoughts !\n>\n> Thanks\n> VB\n>\n>\n>\n>\n\nHello,Sorry. I should have put some more details in the email.I have got a situation where in i see the production system is loaded with the checkpoints and at-least 1000+ buffers are being written for every checkpoint.\nCheckpoint occurs every 3 to 4 minutes and every checkpoint takes 150 seconds minimum to write off the buffers and 150+ seconds for checkpoint syncing. A warning messages can be seen in the dbserver logs \"checkpoint occuring too frequently\".\nI had a look at the pg_stat_bgwriter as well. Below is what i see. select * from pg_stat_bgwriter; checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+--------------------------------------------------------------- 9785 | 36649 | 493002109 | 282600872 | 1276056 | 382124461 | 7417638175\n(1 row)I am thinking of increasing the checkpoint_segments.Below are our current settings -checkpoint_segments = 8checkpoint_timeout = 5 mins\ncheckpoint_completion_target = 0.5bgwriter_delay = 200msbgwriter_lru_maxpages = 100bgwriter_lru_multiplier = 2Looking forward for suggestions.\nThanksVBOn Thu, Sep 29, 2011 at 12:40 PM, Venkat Balaji <[email protected]> wrote:\nHello Everyone,We are experience a huge drop in performance for one of our production servers.\nI suspect this is because of high IO due to frequent Checkpoints. 
Attached is the excel sheet with checkpoint information we tracked.\nBelow is the configuration we havecheckpoint_segments = defaultcheckpoint_timeout = defaultI suspect archive data generation to be around 250 MB.\nPlease share your thoughts !ThanksVB",
"msg_date": "Tue, 4 Oct 2011 16:20:34 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "On 04.10.2011 13:50, Venkat Balaji wrote:\n> I have got a situation where in i see the production system is loaded with\n> the checkpoints and at-least 1000+ buffers are being written for every\n> checkpoint.\n\n1000 buffers isn't very much, that's only 8 MB, so that's not alarming \nitself.\n\n> I am thinking of increasing the checkpoint_segments.\n>\n> Below are our current settings -\n>\n> checkpoint_segments = 8\n> checkpoint_timeout = 5 mins\n> checkpoint_completion_target = 0.5\n> bgwriter_delay = 200ms\n> bgwriter_lru_maxpages = 100\n> bgwriter_lru_multiplier = 2\n>\n> Looking forward for suggestions.\n\nYep, increase checkpoint_segments. And you probably want to raise \ncheckpoint_timeout too.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 04 Oct 2011 14:08:57 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "Thanks Heikki !\n\nRegards,\nVB\n\nOn Tue, Oct 4, 2011 at 4:38 PM, Heikki Linnakangas <\[email protected]> wrote:\n\n> On 04.10.2011 13:50, Venkat Balaji wrote:\n>\n>> I have got a situation where in i see the production system is loaded with\n>> the checkpoints and at-least 1000+ buffers are being written for every\n>> checkpoint.\n>>\n>\n> 1000 buffers isn't very much, that's only 8 MB, so that's not alarming\n> itself.\n>\n>\n> I am thinking of increasing the checkpoint_segments.\n>>\n>> Below are our current settings -\n>>\n>> checkpoint_segments = 8\n>> checkpoint_timeout = 5 mins\n>> checkpoint_completion_target = 0.5\n>> bgwriter_delay = 200ms\n>> bgwriter_lru_maxpages = 100\n>> bgwriter_lru_multiplier = 2\n>>\n>> Looking forward for suggestions.\n>>\n>\n> Yep, increase checkpoint_segments. And you probably want to raise\n> checkpoint_timeout too.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nThanks Heikki !Regards,VBOn Tue, Oct 4, 2011 at 4:38 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 04.10.2011 13:50, Venkat Balaji wrote:\n\nI have got a situation where in i see the production system is loaded with\nthe checkpoints and at-least 1000+ buffers are being written for every\ncheckpoint.\n\n\n1000 buffers isn't very much, that's only 8 MB, so that's not alarming itself.\n\n\nI am thinking of increasing the checkpoint_segments.\n\nBelow are our current settings -\n\ncheckpoint_segments = 8\ncheckpoint_timeout = 5 mins\ncheckpoint_completion_target = 0.5\nbgwriter_delay = 200ms\nbgwriter_lru_maxpages = 100\nbgwriter_lru_multiplier = 2\n\nLooking forward for suggestions.\n\n\nYep, increase checkpoint_segments. And you probably want to raise checkpoint_timeout too.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com",
"msg_date": "Tue, 4 Oct 2011 16:53:10 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "8 checkpoint segments is very small, try 50\n\n2011/10/4, Venkat Balaji <[email protected]>:\n> Hello,\n>\n> Sorry. I should have put some more details in the email.\n>\n> I have got a situation where in i see the production system is loaded with\n> the checkpoints and at-least 1000+ buffers are being written for every\n> checkpoint.\n>\n> Checkpoint occurs every 3 to 4 minutes and every checkpoint takes 150\n> seconds minimum to write off the buffers and 150+ seconds for checkpoint\n> syncing. A warning messages can be seen in the dbserver logs \"checkpoint\n> occuring too frequently\".\n>\n> I had a look at the pg_stat_bgwriter as well. Below is what i see.\n>\n> select * from pg_stat_bgwriter;\n>\n> checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\n> maxwritten_clean | buffers_backend | buffers_alloc\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------------------------------------------------------\n> 9785 | 36649 | 493002109 |\n> 282600872 | 1276056 | 382124461 | 7417638175\n> (1 row)\n>\n> I am thinking of increasing the checkpoint_segments.\n>\n> Below are our current settings -\n>\n> checkpoint_segments = 8\n> checkpoint_timeout = 5 mins\n> checkpoint_completion_target = 0.5\n> bgwriter_delay = 200ms\n> bgwriter_lru_maxpages = 100\n> bgwriter_lru_multiplier = 2\n>\n> Looking forward for suggestions.\n>\n> Thanks\n> VB\n>\n>\n>\n>\n> On Thu, Sep 29, 2011 at 12:40 PM, Venkat Balaji\n> <[email protected]>wrote:\n>\n>> Hello Everyone,\n>>\n>> We are experience a huge drop in performance for one of our production\n>> servers.\n>>\n>> I suspect this is because of high IO due to frequent Checkpoints. Attached\n>> is the excel sheet with checkpoint information we tracked.\n>>\n>> Below is the configuration we have\n>>\n>> checkpoint_segments = default\n>> checkpoint_timeout = default\n>>\n>> I suspect archive data generation to be around 250 MB.\n>>\n>> Please share your thoughts !\n>>\n>> Thanks\n>> VB\n>>\n>>\n>>\n>>\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Tue, 4 Oct 2011 16:15:58 +0200",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "On 10/04/2011 03:50 AM, Venkat Balaji wrote:\n> I had a look at the pg_stat_bgwriter as well.\n\nTry saving it like this instead:\n\nselect now(),* from pg_stat_bgwriter;\n\nAnd collect two data points, space a day or more apart. That gives a \nlot more information about the rate at which things are actually \nhappening. The long-term totals are less interesting than that.\n\nGenerally the first round of tuning work here is to increase \ncheckpoint_segments until most checkpoints appear in checkpoints_timed \nrather than checkpoints_req. After that, increasing checkpoint_timeout \nmight also be useful.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 04 Oct 2011 15:32:00 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
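A minimal sketch of the sampling approach suggested above; the snapshot table name is illustrative, and the delta assumes the statistics were not reset between the two samples:

    CREATE TABLE bgwriter_snap AS
        SELECT now() AS snap_time, * FROM pg_stat_bgwriter;

    -- run again a day or more later
    INSERT INTO bgwriter_snap
        SELECT now(), * FROM pg_stat_bgwriter;

    -- difference between the two most recent samples
    SELECT max(snap_time) - min(snap_time)                   AS sample_interval,
           max(checkpoints_timed)  - min(checkpoints_timed)  AS timed,
           max(checkpoints_req)    - min(checkpoints_req)    AS requested,
           max(buffers_checkpoint) - min(buffers_checkpoint) AS checkpoint_buffers
    FROM (SELECT * FROM bgwriter_snap ORDER BY snap_time DESC LIMIT 2) s;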
{
"msg_contents": "On Tue, Oct 4, 2011 at 4:32 PM, Greg Smith <[email protected]> wrote:\n> On 10/04/2011 03:50 AM, Venkat Balaji wrote:\n>>\n>> I had a look at the pg_stat_bgwriter as well.\n>\n> Try saving it like this instead:\n>\n> select now(),* from pg_stat_bgwriter;\n>\n> And collect two data points, space a day or more apart. That gives a lot\n> more information about the rate at which things are actually happening. The\n> long-term totals are less interesting than that.\n>\n> Generally the first round of tuning work here is to increase\n> checkpoint_segments until most checkpoints appear in checkpoints_timed\n> rather than checkpoints_req. After that, increasing checkpoint_timeout\n> might also be useful.\n\nThat last paragraph should be printed out and posted on every pgsql\nadmin's cubicle wall.\n",
"msg_date": "Tue, 4 Oct 2011 16:47:10 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "I was thinking to increase checkpoint_segments to around 16 or 20.\n\nI think 50 is a bit higher.\n\nGreg,\n\nSure. I would collect the info from pg_stat_bgwriter on regular intervals.\n\nAs we have too many transactions going on I am thinking to collect the info\nevery 6 or 8 hrs.\n\nThanks\nVB\n\nOn Wed, Oct 5, 2011 at 4:02 AM, Greg Smith <[email protected]> wrote:\n\n> On 10/04/2011 03:50 AM, Venkat Balaji wrote:\n>\n>> I had a look at the pg_stat_bgwriter as well.\n>>\n>\n> Try saving it like this instead:\n>\n> select now(),* from pg_stat_bgwriter;\n>\n> And collect two data points, space a day or more apart. That gives a lot\n> more information about the rate at which things are actually happening. The\n> long-term totals are less interesting than that.\n>\n> Generally the first round of tuning work here is to increase\n> checkpoint_segments until most checkpoints appear in checkpoints_timed\n> rather than checkpoints_req. After that, increasing checkpoint_timeout\n> might also be useful.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nI was thinking to increase checkpoint_segments to around 16 or 20.I think 50 is a bit higher.Greg,Sure. I would collect the info from pg_stat_bgwriter on regular intervals. \nAs we have too many transactions going on I am thinking to collect the info every 6 or 8 hrs.ThanksVBOn Wed, Oct 5, 2011 at 4:02 AM, Greg Smith <[email protected]> wrote:\nOn 10/04/2011 03:50 AM, Venkat Balaji wrote:\n\nI had a look at the pg_stat_bgwriter as well.\n\n\nTry saving it like this instead:\n\nselect now(),* from pg_stat_bgwriter;\n\nAnd collect two data points, space a day or more apart. That gives a lot more information about the rate at which things are actually happening. The long-term totals are less interesting than that.\n\nGenerally the first round of tuning work here is to increase checkpoint_segments until most checkpoints appear in checkpoints_timed rather than checkpoints_req. After that, increasing checkpoint_timeout might also be useful.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 5 Oct 2011 08:20:18 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "On 10/04/2011 07:50 PM, Venkat Balaji wrote:\n> I was thinking to increase checkpoint_segments to around 16 or 20.\n>\n> I think 50 is a bit higher.\n>\n\nDon't be afraid to increase that a lot. You could set it to 1000 and \nthat would be probably turn out fine; checkpoints will still happen \nevery 5 minutes.\n\nCheckpoints represent a lot of the I/O in a PostgreSQL database. The \nmain downside to making them less frequent is that recovery after a \ncrash will take longer; a secondary one is that WAL files in pg_xlog \nwill take up more space. Most places don't care much about either of \nthose things. The advantage to making them happen less often is that \nyou get less total writes. People need to be careful about going a long \n*time* between checkpoints. But there's very few cases where you need \nto worry about the segment count going too high before another one is \ntriggered.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 04 Oct 2011 22:32:09 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "Thanks Greg !\n\nSorry for delayed response.\n\nWe are actually waiting to change the checkpoint_segments in our production\nsystems (waiting for the downtime).\n\nThanks\nVB\n\nOn Wed, Oct 5, 2011 at 11:02 AM, Greg Smith <[email protected]> wrote:\n\n> On 10/04/2011 07:50 PM, Venkat Balaji wrote:\n>\n>> I was thinking to increase checkpoint_segments to around 16 or 20.\n>>\n>> I think 50 is a bit higher.\n>>\n>>\n> Don't be afraid to increase that a lot. You could set it to 1000 and that\n> would be probably turn out fine; checkpoints will still happen every 5\n> minutes.\n>\n> Checkpoints represent a lot of the I/O in a PostgreSQL database. The main\n> downside to making them less frequent is that recovery after a crash will\n> take longer; a secondary one is that WAL files in pg_xlog will take up more\n> space. Most places don't care much about either of those things. The\n> advantage to making them happen less often is that you get less total\n> writes. People need to be careful about going a long *time* between\n> checkpoints. But there's very few cases where you need to worry about the\n> segment count going too high before another one is triggered.\n>\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\n>\n\nThanks Greg !Sorry for delayed response.We are actually waiting to change the checkpoint_segments in our production systems (waiting for the downtime).\nThanksVBOn Wed, Oct 5, 2011 at 11:02 AM, Greg Smith <[email protected]> wrote:\nOn 10/04/2011 07:50 PM, Venkat Balaji wrote:\n\nI was thinking to increase checkpoint_segments to around 16 or 20.\n\nI think 50 is a bit higher.\n\n\n\nDon't be afraid to increase that a lot. You could set it to 1000 and that would be probably turn out fine; checkpoints will still happen every 5 minutes.\n\nCheckpoints represent a lot of the I/O in a PostgreSQL database. The main downside to making them less frequent is that recovery after a crash will take longer; a secondary one is that WAL files in pg_xlog will take up more space. Most places don't care much about either of those things. The advantage to making them happen less often is that you get less total writes. People need to be careful about going a long *time* between checkpoints. But there's very few cases where you need to worry about the segment count going too high before another one is triggered.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Mon, 24 Oct 2011 17:46:41 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
{
"msg_contents": "On Oct 24, 2011, at 8:16 AM, Venkat Balaji <[email protected]> wrote:\n> Thanks Greg !\n> \n> Sorry for delayed response.\n> \n> We are actually waiting to change the checkpoint_segments in our production systems (waiting for the downtime).\n\nThat setting can be changed without downtime.\n\n...Robert",
"msg_date": "Mon, 24 Oct 2011 22:17:39 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
},
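For reference, checkpoint_segments (as configured in the 9.0-era setups discussed here) only needs a configuration reload, not a restart: edit postgresql.conf, then for example:

    -- after changing checkpoint_segments in postgresql.conf
    SELECT pg_reload_conf();
    SHOW checkpoint_segments;   -- confirm the new value is active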
{
"msg_contents": "Oh yes.\n\nThanks a lot Robert !\n\nRegards\nVB\n\nOn Tue, Oct 25, 2011 at 7:47 AM, Robert Haas <[email protected]> wrote:\n\n> On Oct 24, 2011, at 8:16 AM, Venkat Balaji <[email protected]> wrote:\n> > Thanks Greg !\n> >\n> > Sorry for delayed response.\n> >\n> > We are actually waiting to change the checkpoint_segments in our\n> production systems (waiting for the downtime).\n>\n> That setting can be changed without downtime.\n>\n> ...Robert\n\nOh yes.Thanks a lot Robert !RegardsVBOn Tue, Oct 25, 2011 at 7:47 AM, Robert Haas <[email protected]> wrote:\nOn Oct 24, 2011, at 8:16 AM, Venkat Balaji <[email protected]> wrote:\n\n> Thanks Greg !\n>\n> Sorry for delayed response.\n>\n> We are actually waiting to change the checkpoint_segments in our production systems (waiting for the downtime).\n\nThat setting can be changed without downtime.\n\n...Robert",
"msg_date": "Tue, 25 Oct 2011 10:30:10 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PG9.0 - Checkpoint tuning and pg_stat_bgwriter"
}
] |
[
{
"msg_contents": "\nI ran a test using Intel's timed workload wear indication feature on a \n100G 710 series SSD.\n\nThe test works like this : you reset the wear indication counters, then \nstart running\nsome workload (in my case pgbench at scale 100 for 4 hours). During the \ntest run\na wear indication attribute can be read via smartctl. This contains the \ndrive's estimate\nfor the total drive wear lifetime used during the test run time. \nProvided we trust\nthe drive's estimate, this in turn provides an answer to the interesting \nquestion\n\"how long before my drive wears out in production?\". That is, provided \nthe test\nworkload can be reasonably accurately related to production workload.\n\nExecutive summary : the drive says that we used 0.025% of its wear life\nduring this 4h test. The test performed 47 million transactions on a\nroughly 17G database.\n\nWe think that this pgbench workload performs transactions at roughly\n10X the rate we expect from our application in production under heavy load.\nSo the drive says you could run this pgbench test for about two years\nbefore wearing out the flash devices. Or 20 years under our expected \nproduction\nworkload.\n\nSmart attributes read just after test completion:\n\n[root@server1 9.1]# smartctl -A /dev/sda\nsmartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.32-71.29.1.el6.x86_64] \n(local build)\nCopyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net\n\n=== START OF READ SMART DATA SECTION ===\nSMART Attributes Data Structure revision number: 5\nVendor Specific SMART Attributes with Thresholds:\nID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE \nUPDATED WHEN_FAILED RAW_VALUE\n 3 Spin_Up_Time 0x0020 100 100 000 Old_age \nOffline - 0\n 4 Start_Stop_Count 0x0030 100 100 000 Old_age \nOffline - 0\n 5 Reallocated_Sector_Ct 0x0032 100 100 000 Old_age \nAlways - 0\n 9 Power_On_Hours 0x0032 100 100 000 Old_age \nAlways - 90\n 12 Power_Cycle_Count 0x0032 100 100 000 Old_age \nAlways - 8\n170 Reserve_Block_Count 0x0033 100 100 010 Pre-fail \nAlways - 0\n171 Program_Fail_Count 0x0032 100 100 000 Old_age \nAlways - 0\n172 Erase_Fail_Count 0x0032 100 100 000 Old_age \nAlways - 0\n174 Unknown_Attribute 0x0032 100 100 000 Old_age \nAlways - 4\n183 SATA_Downshift_Count 0x0030 100 100 000 Old_age \nOffline - 0\n184 End-to-End_Error 0x0032 100 100 090 Old_age \nAlways - 0\n187 Reported_Uncorrect 0x0032 100 100 000 Old_age \nAlways - 0\n190 Airflow_Temperature_Cel 0x0032 073 070 000 Old_age \nAlways - 27 (Min/Max 17/30)\n192 Unsafe_Shutdown_Count 0x0032 100 100 000 Old_age \nAlways - 4\n194 Temperature_Celsius 0x0032 100 100 000 Old_age \nAlways - 32\n199 UDMA_CRC_Error_Count 0x0030 100 100 000 Old_age \nOffline - 0\n225 Host_Writes_32MiB 0x0032 100 100 000 Old_age \nAlways - 51362\n226 Workld_Media_Wear_Indic 0x0032 100 100 000 Old_age \nAlways - 26\n227 Workld_Host_Reads_Perc 0x0032 100 100 000 Old_age \nAlways - 1\n228 Workload_Minutes 0x0032 100 100 000 Old_age \nAlways - 242\n232 Available_Reservd_Space 0x0033 100 100 010 Pre-fail \nAlways - 0\n233 Media_Wearout_Indicator 0x0032 100 100 000 Old_age \nAlways - 0\n241 Host_Writes_32MiB 0x0032 100 100 000 Old_age \nAlways - 51362\n242 Host_Reads_32MiB 0x0032 100 100 000 Old_age \nAlways - 822\n\npgbench output from the test run:\n\nbash-4.1$ /usr/pgsql-9.1/bin/pgbench -T 14400 -j 8 -c 64\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1000\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 8\nduration: 14400 s\nnumber of transactions 
actually processed: 47205109\ntps = 3278.127690 (including connections establishing)\ntps = 3278.135396 (excluding connections establishing)\n\n\n",
"msg_date": "Wed, 05 Oct 2011 13:15:15 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intel 710 Endurance Test Results"
}
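A rough cross-check of the drive's figure, assuming the reported 0.025% is accurate and wear scales linearly with this workload:

    0.025% used in 4 h   =>  lifetime ~ 4 h / 0.00025 = 16,000 h
    16,000 h ~ 667 days  ~  1.8 years of continuous pgbench at ~3,278 tps
    at roughly 1/10th that rate in production: on the order of 18 years

which is consistent with the "about two years" and "20 years" estimates above.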
] |
[
{
"msg_contents": "Hello,\n\nHas any performance or evaluation done for pg9.x streaming replication\nover WAN ?\nHow adequate is the protocol to push WALs over long distance ?\nAny best practice tuning wal_* for WAN ?\n\nCheers,\nBen-\n",
"msg_date": "Wed, 5 Oct 2011 14:53:15 -0700",
"msg_from": "Ben Ciceron <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg9 replication over WAN ?"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Ben Ciceron\n> Sent: Wednesday, October 05, 2011 3:53 PM\n> To: [email protected]\n> Subject: [PERFORM] pg9 replication over WAN ?\n> \n> Hello,\n> \n> Has any performance or evaluation done for pg9.x streaming replication\n> over WAN ?\n> How adequate is the protocol to push WALs over long distance ?\n> Any best practice tuning wal_* for WAN ?\n> \n> Cheers,\n> Ben-\n\n\nWorks for us between the middle of the US and the UK. This on some databases\nthat do a few gig of log files per day. \n\nI keep a large number of wal_keep_segments because I have the disk capacity\nfor it and the initial rsync takes a while at ~2-4MB/s. \n\n\nYou may find you need to tune the tcp-keep alive settings, but I am not sure\nif that was really a change in the setting or a fix to our load balancers\nthat fixed an issue I was seeing. \n\nOverall I am extremely pleased with streaming replication + read only hot\nstandby. \n\n\n\n\n-Mark\n\n\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 6 Oct 2011 21:22:12 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg9 replication over WAN ?"
}
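For reference, a sketch of the settings involved in a 9.0/9.1 streaming standby over a WAN; the host, user, and values are illustrative only, and wal_keep_segments should be sized to cover the initial rsync plus expected link outages:

    # primary, postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 1024          # ~16 GB of WAL retained

    # standby, postgresql.conf
    hot_standby = on

    # standby, recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com user=replicator keepalives_idle=60 keepalives_interval=10 keepalives_count=5'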
] |
[
{
"msg_contents": "Hi,\n\nI have a problem with my postgres 8.2.\n\nI Have an application that write ojbect (file, folder, ecc.) and another\ntable that have account. This to tables are likend eith another\ntablenthat have a permissions foreach objects + accounts.\n\nMy structure is:\n\nTABLE WITH USERS\n# \\d auth_accounts\n Table \"public.auth_accounts\"\n Column | Type | \nModifiers \n------------+---------+----------------------------------------------------------------------\n id | integer | not null default\nnextval(('\"auth_accounts_id_seq\"'::text)::regclass)\n login | text | not null\n password | text | not null\n first_name | text |\n last_name | text |\n email | text |\n phone | text |\nIndexes:\n \"auth_accounts_pkey\" PRIMARY KEY, btree (id)\n \"auth_accounts_id_key\" UNIQUE, btree (id)\n\n \nTABLE WITH OBJECTS:\n\\d dm_object\n Table \"public.dm_object\"\n Column | Type | \nModifiers \n--------------+-----------------------------+------------------------------------------------------------------\n id | integer | not null default\nnextval(('\"dm_object_id_seq\"'::text)::regclass)\n name | text | not null\n summary | text |\n object_type | text |\n create_date | timestamp without time zone |\n object_owner | integer |\n status | smallint | not null\n status_date | timestamp without time zone |\n status_owner | integer |\n version | integer | not null default 1\n reindex | smallint | default 0\n filesize | numeric |\n token | text |\n delete_date | date |\nIndexes:\n \"dm_object_id_key\" UNIQUE, btree (id)\n \"delete_date_index\" btree (delete_date)\n \"dm_object_object_type_idx\" btree (object_type)\n \"dm_object_search_key\" btree (name, summary)\n \"filesize_index\" btree (filesize)\n \"id_index\" btree (id)\n \"name_index\" btree (name)\n \"object_type_index\" btree (object_type)\n \"summary_index\" btree (summary)\n\n\nTABLE WITH PERMISSIONS:\ndocmgr=# \\d dm_object_perm\n Table \"public.dm_object_perm\"\n Column | Type | Modifiers\n------------+----------+-----------\n object_id | integer | not null\n account_id | integer |\n group_id | integer |\n bitset | smallint |\nIndexes:\n \"account_id_index\" btree (account_id)\n \"bitset_index\" btree (bitset)\n \"dm_object_perm_group_id\" btree (group_id)\n \"dm_object_perm_id_key\" btree (object_id)\n \"idx_dm_object_perm_nulls\" btree (bitset) WHERE bitset IS NULL\n \"object_id_index\" btree (object_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (object_id) REFERENCES dm_object(id)\n\n\nIf i count the records foreach tables i have:\nselect count(*) from dm_object;\n count\n-------\n 9778\n(1 row)\n\nselect count(*) from auth_accounts;\n count\n-------\n 4334\n\nselect count(*) from dm_object_perm;\n count \n----------\n 38928077\n(1 row)\n\nThe dm_object_perm have 38928077 of record.\n\nIf i run the \"EXPLAIN ANALYZE\" of \"select *\" in auth_accounts and\ndm_object i have good time:\ndocmgr=# explain analyze select * from auth_accounts;\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------\n Seq Scan on auth_accounts (cost=0.00..131.33 rows=4333 width=196)\n(actual time=20.000..200.000 rows=4334 loops=1)\n Total runtime: 200.000 ms\n(2 rows)\n\ndocmgr=# explain analyze select * from dm_object;\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------\n Seq Scan on dm_object (cost=0.00..615.78 rows=9778 width=411) (actual\ntime=0.000..10.000 rows=9778 loops=1)\n Total runtime: 
10.000 ms\n(2 rows)\n\n\nIf i run \"explain analyze select * from dm_object_perm;\" it goes on for\nmany hours.\n\nIf i try to execute a left join: \"SELECT dm_object.id FROM dm_object\nLEFT JOIN dm_object_perm ON dm_object.id = dm_object_perm.object_id;\" my\ndb is unusable.\n\nhow can I fix this?\n\nThanks\n-- \n\n*Giovanni Mancuso*\nSystem Architect\nBabel S.r.l. - http://www.babel.it <http://www.babel.it/>\n*T:* 06.9826.9600 *M:* 3406580739 *F:* 06.9826.9680\nP.zza S.Benedetto da Norcia, 33 - 00040 Pomezia (Roma)\n------------------------------------------------------------------------\nCONFIDENZIALE: Questo messaggio ed i suoi allegati sono di carattere\nconfidenziale per i destinatari in indirizzo.\nE' vietato l'inoltro non autorizzato a destinatari diversi da quelli\nindicati nel messaggio originale.\nSe ricevuto per errore, l'uso del contenuto e' proibito; si prega di\ncomunicarlo al mittente e cancellarlo immediatamente.",
"msg_date": "Fri, 07 Oct 2011 12:04:39 +0200",
"msg_from": "Giovanni Mancuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problem with a table with 38928077 record"
},
{
"msg_contents": "Giovanni Mancuso <gmancuso 'at' babel.it> writes:\n\n> select count(*) from dm_object_perm;\n> � count��\n> ----------\n> �38'928'077\n> (1 row)\n\n[...]\n\n> If i run \"explain analyze select * from dm_object_perm;\" it goes on for many\n> hours.\n\nAlmost 39 million records is not small, especially if you run on\npoor hardware[1], poor configuration[2], poor database optimization[3],\nbloat[4], or a combination of these.\n\n[1] you could tell what hardware you use\n[2] you could report if your DB configuration is tuned/good\n[3] you could report if the DB is regularly analyzed/vacuumed\n[4] you could try a VACUUM FULL or CLUSTER and/or REINDEX on your\n large table(s) if you suspect answer to [3] is \"no\" -\n warning, these block some/all DB operations while running,\n and they will probably run for long in your situation\n\n> If i try to execute a left join: \"SELECT dm_object.id FROM dm_object LEFT JOIN\n> dm_object_perm ON dm_object.id = dm_object_perm.object_id;\" my db is unusable.\n\nEXPLAIN on this query would probably tell you PG has quite some\nwork to do to produce the result.\n\n\n> how can I fix this?\n\nI'm wondering if your DB design (storing almost all \"object x\naccount\" combinations in object_perm) is optimal.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "Fri, 07 Oct 2011 12:24:17 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
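One quick check before reaching for VACUUM FULL, CLUSTER or REINDEX is to see how large the permission table and its indexes actually are; these size functions are available in 8.2:

    SELECT pg_size_pretty(pg_relation_size('dm_object_perm'))       AS heap,
           pg_size_pretty(pg_total_relation_size('dm_object_perm')) AS heap_plus_indexes;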
{
"msg_contents": "Do you need left join ?\nCan you further normalize the tables? (to lower the I/O)\nCan you upgrade to at least 8.3 ? It has huuge performance\nimprovements over 8.3.\n",
"msg_date": "Fri, 7 Oct 2011 11:29:22 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
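The \d output in the first message also shows exact duplicate indexes, which only add write overhead on a 39-million-row table; a sketch of removing them (names taken from that output):

    DROP INDEX object_id_index;    -- duplicates dm_object_perm_id_key (object_id)
    DROP INDEX id_index;           -- dm_object_id_key is already a unique index on id
    DROP INDEX object_type_index;  -- duplicates dm_object_object_type_idx
    -- name_index is likewise covered by the leading column of dm_object_search_key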
{
"msg_contents": "2011/10/7 Giovanni Mancuso <[email protected]>\n\n> Hi,\n>\n> I have a problem with my postgres 8.2.\n>\n> I Have an application that write ojbect (file, folder, ecc.) and another\n> table that have account. This to tables are likend eith another tablenthat\n> have a permissions foreach objects + accounts.\n>\n> My structure is:\n>\n> TABLE WITH USERS\n> # \\d auth_accounts\n> Table \"public.auth_accounts\"\n> Column | Type |\n> Modifiers\n>\n> ------------+---------+----------------------------------------------------------------------\n> id | integer | not null default\n> nextval(('\"auth_accounts_id_seq\"'::text)::regclass)\n> login | text | not null\n> password | text | not null\n> first_name | text |\n> last_name | text |\n> email | text |\n> phone | text |\n> Indexes:\n> \"auth_accounts_pkey\" PRIMARY KEY, btree (id)\n> \"auth_accounts_id_key\" UNIQUE, btree (id)\n>\n>\n> TABLE WITH OBJECTS:\n> \\d dm_object\n> Table \"public.dm_object\"\n> Column | Type |\n> Modifiers\n>\n> --------------+-----------------------------+------------------------------------------------------------------\n> id | integer | not null default\n> nextval(('\"dm_object_id_seq\"'::text)::regclass)\n> name | text | not null\n> summary | text |\n> object_type | text |\n> create_date | timestamp without time zone |\n> object_owner | integer |\n> status | smallint | not null\n> status_date | timestamp without time zone |\n> status_owner | integer |\n> version | integer | not null default 1\n> reindex | smallint | default 0\n> filesize | numeric |\n> token | text |\n> delete_date | date |\n> Indexes:\n> \"dm_object_id_key\" UNIQUE, btree (id)\n> \"delete_date_index\" btree (delete_date)\n> \"dm_object_object_type_idx\" btree (object_type)\n> \"dm_object_search_key\" btree (name, summary)\n> \"filesize_index\" btree (filesize)\n> \"id_index\" btree (id)\n> \"name_index\" btree (name)\n> \"object_type_index\" btree (object_type)\n> \"summary_index\" btree (summary)\n>\n>\n> TABLE WITH PERMISSIONS:\n> docmgr=# \\d dm_object_perm\n> Table \"public.dm_object_perm\"\n> Column | Type | Modifiers\n> ------------+----------+-----------\n> object_id | integer | not null\n> account_id | integer |\n> group_id | integer |\n> bitset | smallint |\n> Indexes:\n> \"account_id_index\" btree (account_id)\n> \"bitset_index\" btree (bitset)\n> \"dm_object_perm_group_id\" btree (group_id)\n> \"dm_object_perm_id_key\" btree (object_id)\n> \"idx_dm_object_perm_nulls\" btree (bitset) WHERE bitset IS NULL\n> \"object_id_index\" btree (object_id)\n> Foreign-key constraints:\n> \"$1\" FOREIGN KEY (object_id) REFERENCES dm_object(id)\n>\n>\n> If i count the records foreach tables i have:\n> select count(*) from dm_object;\n> count\n> -------\n> 9778\n> (1 row)\n>\n> select count(*) from auth_accounts;\n> count\n> -------\n> 4334\n>\n> select count(*) from dm_object_perm;\n> count\n> ----------\n> 38928077\n> (1 row)\n>\n> The dm_object_perm have 38928077 of record.\n>\n> If i run the \"EXPLAIN ANALYZE\" of \"select *\" in auth_accounts and dm_object\n> i have good time:\n> docmgr=# explain analyze select * from auth_accounts;\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------\n> Seq Scan on auth_accounts (cost=0.00..131.33 rows=4333 width=196) (actual\n> time=20.000..200.000 rows=4334 loops=1)\n> Total runtime: 200.000 ms\n> (2 rows)\n>\n> docmgr=# explain analyze select * from dm_object;\n> QUERY\n> PLAN\n>\n> 
--------------------------------------------------------------------------------------------------------------\n> Seq Scan on dm_object (cost=0.00..615.78 rows=9778 width=411) (actual\n> time=0.000..10.000 rows=9778 loops=1)\n> Total runtime: 10.000 ms\n> (2 rows)\n>\n>\n> If i run \"explain analyze select * from dm_object_perm;\" it goes on for\n> many hours.\n>\n> If i try to execute a left join: \"SELECT dm_object.id FROM dm_object LEFT\n> JOIN dm_object_perm ON dm_object.id = dm_object_perm.object_id;\" my db is\n> unusable.\n>\n> how can I fix this?\n>\n\nonce you've provided more informations as required by other people it should\nbe easier to help you. What's duration do you expect your hardware to take\nto read 1GB ? (or 10GB ?)\n\nEven without this 'slow' (really?) query Your must review your indexes\nusages: duplicate indexes are useless and reduce overall performance.\nThe first task here is to remove the duplicates.\n\n\n\n\n\n>\n> Thanks\n> --\n>\n> *Giovanni Mancuso*\n> System Architect\n> Babel S.r.l. - http://www.babel.it\n> *T:* 06.9826.9600 *M:* 3406580739 *F:* 06.9826.9680\n> P.zza S.Benedetto da Norcia, 33 - 00040 Pomezia (Roma)\n> ------------------------------\n> CONFIDENZIALE: Questo messaggio ed i suoi allegati sono di carattere\n> confidenziale per i destinatari in indirizzo.\n> E' vietato l'inoltro non autorizzato a destinatari diversi da quelli\n> indicati nel messaggio originale.\n> Se ricevuto per errore, l'uso del contenuto e' proibito; si prega di\n> comunicarlo al mittente e cancellarlo immediatamente.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Fri, 7 Oct 2011 14:59:26 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
{
"msg_contents": "Il 07/10/2011 12:24, Guillaume Cottenceau ha scritto:\n> Giovanni Mancuso <gmancuso 'at' babel.it> writes:\n>\n>> select count(*) from dm_object_perm;\n>> count \n>> ----------\n>> 38'928'077\n>> (1 row)\n> [...]\n>\n>> If i run \"explain analyze select * from dm_object_perm;\" it goes on for many\n>> hours.\n> Almost 39 million records is not small, especially if you run on\n> poor hardware[1], poor configuration[2], poor database optimization[3],\n> bloat[4], or a combination of these.\n>\n> [1] you could tell what hardware you use\nMy Memory:\n# cat /proc/meminfo\n total: used: free: shared: buffers: cached:\nMem: 4022861824 2201972736 1820889088 0 8044544 1983741952\nSwap: 8589926400 199303168 8390623232\nMemTotal: 3928576 kB\nMemFree: 1778212 kB\nMemShared: 0 kB\nBuffers: 7856 kB\nCached: 1897356 kB\nSwapCached: 39892 kB\nActive: 1330076 kB\nActiveAnon: 554472 kB\nActiveCache: 775604 kB\nInact_dirty: 539124 kB\nInact_laundry: 55348 kB\nInact_clean: 36504 kB\nInact_target: 392208 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 3928576 kB\nLowFree: 1778212 kB\nSwapTotal: 8388600 kB\nSwapFree: 8193968 kB\nCommitLimit: 10352888 kB\nCommitted_AS: 1713308 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugepagesize: 2048 kB\n\nMy CPU:\n# egrep 'processor|model name|cpu MHz|cache size|flags' /proc/cpuinfo\nprocessor : 0\nmodel name : Dual Core AMD Opteron(tm) Processor 275\ncpu MHz : 2193.798\ncache size : 1024 KB\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n3dnowext 3dnow\nprocessor : 1\nmodel name : Dual Core AMD Opteron(tm) Processor 275\ncpu MHz : 2193.798\ncache size : 1024 KB\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n3dnowext 3dnow\nprocessor : 2\nmodel name : Dual Core AMD Opteron(tm) Processor 275\ncpu MHz : 2193.798\ncache size : 1024 KB\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n3dnowext 3dnow\nprocessor : 3\nmodel name : Dual Core AMD Opteron(tm) Processor 275\ncpu MHz : 2193.798\ncache size : 1024 KB\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n3dnowext 3dnow\n\n\n> [2] you could report if your DB configuration is tuned/good\nmax_connections = 50\nshared_buffers = 512MB\ntemp_buffers = 128MB\nmax_prepared_transactions = 55\nmax_fsm_pages = 153600\nvacuum_cost_delay = 0\nvacuum_cost_page_hit = 1\nvacuum_cost_page_miss = 10\nvacuum_cost_page_dirty = 20\nvacuum_cost_limit = 200\neffective_cache_size = 256MB\nautovacuum = on\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 500\nautovacuum_analyze_threshold = 250\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_freeze_max_age = 200000000\nautovacuum_vacuum_cost_delay = -1\nautovacuum_vacuum_cost_limit = -1\nvacuum_freeze_min_age = 100000000\n\n> [3] you could report if the DB is regularly analyzed/vacuumed\n> [4] you could try a VACUUM FULL or CLUSTER and/or REINDEX on your\n> large table(s) if you suspect answer to [3] is \"no\" -\n> warning, these block some/all DB operations while running,\n> and they will probably run for long in your situation\nI run VACUUM yesterday.\n>\n>> If i try to execute a left join: \"SELECT dm_object.id FROM dm_object LEFT JOIN\n>> dm_object_perm ON dm_object.id = dm_object_perm.object_id;\" my 
db is unusable.\n> EXPLAIN on this query would probably tell you PG has quite some\n> work to do to produce the result.\n>\n>\n>> how can I fix this?\n> I'm wondering if your DB design (storing almost all \"object x\n> account\" combinations in object_perm) is optimal.\n>\n\n\n-- \n\n*Giovanni Mancuso*\nSystem Architect\nBabel S.r.l. - http://www.babel.it <http://www.babel.it/>\n*T:* 06.9826.9600 *M:* 3406580739 *F:* 06.9826.9680\nP.zza S.Benedetto da Norcia, 33 - 00040 Pomezia (Roma)",
"msg_date": "Fri, 07 Oct 2011 17:12:06 +0200",
"msg_from": "Giovanni Mancuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
{
"msg_contents": "I clean all unused data, run VACUUM FULL and run REINDEX d,_object_perm.\n\nUn my table, now i have:\n----------\n 24089952\n\nBut the problem is the same.\n\nThanks\n\nIl 07/10/2011 17:12, Giovanni Mancuso ha scritto:\n> Il 07/10/2011 12:24, Guillaume Cottenceau ha scritto:\n>> Giovanni Mancuso <gmancuso 'at' babel.it> writes:\n>>\n>>> select count(*) from dm_object_perm;\n>>> count \n>>> ----------\n>>> 38'928'077\n>>> (1 row)\n>> [...]\n>>\n>>> If i run \"explain analyze select * from dm_object_perm;\" it goes on for many\n>>> hours.\n>> Almost 39 million records is not small, especially if you run on\n>> poor hardware[1], poor configuration[2], poor database optimization[3],\n>> bloat[4], or a combination of these.\n>>\n>> [1] you could tell what hardware you use\n> My Memory:\n> # cat /proc/meminfo\n> total: used: free: shared: buffers: cached:\n> Mem: 4022861824 2201972736 1820889088 0 8044544 1983741952\n> Swap: 8589926400 199303168 8390623232\n> MemTotal: 3928576 kB\n> MemFree: 1778212 kB\n> MemShared: 0 kB\n> Buffers: 7856 kB\n> Cached: 1897356 kB\n> SwapCached: 39892 kB\n> Active: 1330076 kB\n> ActiveAnon: 554472 kB\n> ActiveCache: 775604 kB\n> Inact_dirty: 539124 kB\n> Inact_laundry: 55348 kB\n> Inact_clean: 36504 kB\n> Inact_target: 392208 kB\n> HighTotal: 0 kB\n> HighFree: 0 kB\n> LowTotal: 3928576 kB\n> LowFree: 1778212 kB\n> SwapTotal: 8388600 kB\n> SwapFree: 8193968 kB\n> CommitLimit: 10352888 kB\n> Committed_AS: 1713308 kB\n> HugePages_Total: 0\n> HugePages_Free: 0\n> Hugepagesize: 2048 kB\n>\n> My CPU:\n> # egrep 'processor|model name|cpu MHz|cache size|flags' /proc/cpuinfo\n> processor : 0\n> model name : Dual Core AMD Opteron(tm) Processor 275\n> cpu MHz : 2193.798\n> cache size : 1024 KB\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\n> mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n> 3dnowext 3dnow\n> processor : 1\n> model name : Dual Core AMD Opteron(tm) Processor 275\n> cpu MHz : 2193.798\n> cache size : 1024 KB\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\n> mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n> 3dnowext 3dnow\n> processor : 2\n> model name : Dual Core AMD Opteron(tm) Processor 275\n> cpu MHz : 2193.798\n> cache size : 1024 KB\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\n> mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n> 3dnowext 3dnow\n> processor : 3\n> model name : Dual Core AMD Opteron(tm) Processor 275\n> cpu MHz : 2193.798\n> cache size : 1024 KB\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\n> mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm\n> 3dnowext 3dnow\n>\n>\n>> [2] you could report if your DB configuration is tuned/good\n> max_connections = 50\n> shared_buffers = 512MB\n> temp_buffers = 128MB\n> max_prepared_transactions = 55\n> max_fsm_pages = 153600\n> vacuum_cost_delay = 0\n> vacuum_cost_page_hit = 1\n> vacuum_cost_page_miss = 10\n> vacuum_cost_page_dirty = 20\n> vacuum_cost_limit = 200\n> effective_cache_size = 256MB\n> autovacuum = on\n> autovacuum_naptime = 1min\n> autovacuum_vacuum_threshold = 500\n> autovacuum_analyze_threshold = 250\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_freeze_max_age = 200000000\n> autovacuum_vacuum_cost_delay = -1\n> autovacuum_vacuum_cost_limit = -1\n> vacuum_freeze_min_age = 100000000\n>\n>> [3] you could report if the DB is regularly analyzed/vacuumed\n>> [4] you could try a 
VACUUM FULL or CLUSTER and/or REINDEX on your\n>> large table(s) if you suspect answer to [3] is \"no\" -\n>> warning, these block some/all DB operations while running,\n>> and they will probably run for long in your situation\n> I run VACUUM yesterday.\n>>> If i try to execute a left join: \"SELECT dm_object.id FROM dm_object LEFT JOIN\n>>> dm_object_perm ON dm_object.id = dm_object_perm.object_id;\" my db is unusable.\n>> EXPLAIN on this query would probably tell you PG has quite some\n>> work to do to produce the result.\n>>\n>>\n>>> how can I fix this?\n>> I'm wondering if your DB design (storing almost all \"object x\n>> account\" combinations in object_perm) is optimal.\n>>\n>\n>\n> -- \n>\n> *Giovanni Mancuso*\n> System Architect\n> Babel S.r.l. - http://www.babel.it <http://www.babel.it/>\n> *T:* 06.9826.9600 *M:* 3406580739 *F:* 06.9826.9680\n> P.zza S.Benedetto da Norcia, 33 - 00040 Pomezia (Roma)\n\n\n-- \n\n*Giovanni Mancuso*\nSystem Architect\nBabel S.r.l. - http://www.babel.it <http://www.babel.it/>\n*T:* 06.9826.9600 *M:* 3406580739 *F:* 06.9826.9680\nP.zza S.Benedetto da Norcia, 33 - 00040 Pomezia (Roma)",
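As a rough, hypothetical check (not taken from the thread itself) of whether the VACUUM FULL and REINDEX actually shrank dm_object_perm on disk, the size functions available in 8.2 can be used:

    SELECT pg_size_pretty(pg_relation_size('dm_object_perm'))       AS heap_size,
           pg_size_pretty(pg_total_relation_size('dm_object_perm')) AS heap_plus_indexes;

Comparing the output before and after maintenance shows how much bloat, if any, was reclaimed.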
"msg_date": "Sat, 08 Oct 2011 17:04:46 +0200",
"msg_from": "Giovanni Mancuso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
{
"msg_contents": "What is a bit strange about this is that you can do this:\n\nOn Fri, Oct 7, 2011 at 6:04 AM, Giovanni Mancuso <[email protected]> wrote:\n\n> select count(*) from dm_object_perm;\n> count\n> ----------\n> 38928077\n> (1 row)\n>\n\nBut not this:\n\nIf i run \"explain analyze select * from dm_object_perm;\" it goes on for many\n> hours.\n>\n\nIf I had to guess, I'd bet that the second one is trying to spool the\nresultset in memory someplace and that's driving the machine into swap. But\nthat's just a shot in the dark. You might want to use tools like top,\nvmstat, iostat, free, etc. to see what the system is actually doing while\nthis is running. I'd start the query up, let it run for 10 minutes or so,\nand then see whether the machine is CPU-bound or I/O-bound, and whether the\namount of swap in use is growing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nWhat is a bit strange about this is that you can do this:On Fri, Oct 7, 2011 at 6:04 AM, Giovanni Mancuso <[email protected]> wrote:\n\n\n select count(*) from dm_object_perm;\n count \n ----------\n 38928077\n (1 row)But not this: \n\n If i run \"explain analyze select * from dm_object_perm;\" it goes on\n for many hours.If I had to guess, I'd bet that the second one is trying to spool the resultset in memory someplace and that's driving the machine into swap. But that's just a shot in the dark. You might want to use tools like top, vmstat, iostat, free, etc. to see what the system is actually doing while this is running. I'd start the query up, let it run for 10 minutes or so, and then see whether the machine is CPU-bound or I/O-bound, and whether the amount of swap in use is growing.\n-- Robert HaasEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 27 Oct 2011 15:13:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
{
"msg_contents": "On 10/27/2011 02:13 PM, Robert Haas wrote:\n\n> If I had to guess, I'd bet that the second one is trying to spool the\n> resultset in memory someplace and that's driving the machine into\n> swap.\n\nThat would be my guess too. SELECT * on a 40-million row table is a \n*lot* different than getting the count, which throws away the rows once \nit verifies they're valid.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 27 Oct 2011 15:31:00 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with a table with 38928077 record"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 10/27/2011 02:13 PM, Robert Haas wrote:\n>> If I had to guess, I'd bet that the second one is trying to spool the\n>> resultset in memory someplace and that's driving the machine into\n>> swap.\n\n> That would be my guess too. SELECT * on a 40-million row table is a \n> *lot* different than getting the count, which throws away the rows once \n> it verifies they're valid.\n\nBut EXPLAIN ANALYZE throws away the rows too. There's something odd\ngoing on there, but the information provided is insufficient to tell\nwhat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Oct 2011 18:19:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with a table with 38928077 record "
}
] |
[
{
"msg_contents": "Hi,\n\nYesterday, a customer increased the server memory from 16GB to 48GB.\n\nToday, the load of the server hit 40 ~ 50 points.\nWith 16 GB, the load not surpasses 5 ~ 8 points.\n\nThe only parameter that I changed is effective_cache_size (from 14 GB to \n40GB) and shared_buffers (from 1 GB to 5 GB). Setting the values back \ndoes not take any effect.\n\nThis server use CentOS 5.5 (2.6.18-194.3.1.el5.centos.plus - X86_64).\nShould I change some vm parameters to this specific kernel ?\n\nThanks for any help.\n",
"msg_date": "Mon, 10 Oct 2011 10:26:44 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding more memory = hugh cpu load"
},
{
"msg_contents": "alexandre - aldeia digital <[email protected]> wrote:\n \n> Yesterday, a customer increased the server memory from 16GB to\n> 48GB.\n \nThat's usually for the better, but be aware that on some hardware\nadding RAM beyond a certain point causes slower RAM access. Without\nknowing more details, it's impossible to say whether that's the case\nhere.\n \n> Today, the load of the server hit 40 ~ 50 points.\n> With 16 GB, the load not surpasses 5 ~ 8 points.\n \nAre you talking about \"load average\", CPU usage, or something else?\n \n> The only parameter that I changed is effective_cache_size (from 14\n> GB to 40GB) and shared_buffers (from 1 GB to 5 GB). Setting the\n> values back does not take any effect.\n \nWhat version of PostgreSQL is this? What settings are in effect? \nHow many user connections are active at one time? How many cores\nare there, of what type? What's the storage system? What kind of\nload is this?\n \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n",
"msg_date": "Mon, 10 Oct 2011 08:51:08 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/10/2011 08:26 AM, alexandre - aldeia digital wrote:\n\n> Yesterday, a customer increased the server memory from 16GB to 48GB.\n>\n> Today, the load of the server hit 40 ~ 50 points.\n> With 16 GB, the load not surpasses 5 ~ 8 points.\n\nThat's not entirely surprising. The problem with having lots of memory \nis... that you have lots of memory. The operating system likes to cache, \nand this includes writes. Normally this isn't a problem, but with 48GB \nof RAM, the defaults (for CentOS 5.5 in particular) are to use up to 40% \nof that to cache writes.\n\nThe settings you're looking for are in:\n\n/proc/sys/vm/dirty_background_ratio\n/proc/sys/vm/dirty_ratio\n\nYou can set these by putting lines in your /etc/sysctl.conf file:\n\nvm.dirty_background_ratio = 1\nvm.dirty_ratio = 10\n\nAnd then calling:\n\nsudo sysctl -p\n\nThe first number, the background ratio, tells the memory manager to \nstart writing to disk as soon as 1% of memory is used. The second is \nlike a maximum of memory that can be held for caching. If the number of \npending writes exceeds this, the system goes into synchronous write \nmode, and blocks all other write activity until it can flush everything \nout to disk. You really, really want to avoid this.\n\nThe defaults in older Linux systems were this high mostly to optimize \nfor desktop performance. For CentOS 5.5, the defaults are 10% and 40%, \nwhich doesn't seem like a lot. But for servers with tons of ram, 10% of \n48GB is almost 5GB. That's way bigger than all but the largest RAID or \ncontroller cache, which means IO waits, and thus high load. Those high \nIO waits cause a kind of cascade that slowly cause writes to back up, \nmaking it more likely you'll reach the hard 40% limit which causes a \nsystem flush, and then you're in trouble.\n\nYou can actually monitor this by checking /proc/meminfo:\n\ngrep -A1 Dirty /proc/meminfo\n\nThe 'Dirty' line tells you how much memory *could* be written to disk, \nand the 'Writeback' line tells you how much the system is trying to \nwrite. You want that second line to be 0 or close to it, as much as \nhumanly possible. It's also good to keep Dirty low, because it can be an \nindicator that the system is about to start uncontrollably flushing if \nit gets too high.\n\nGenerally it's good practice to keep dirty_ratio lower than the size of \nyour disk controller cache, but even high-end systems only give 256MB to \n1GB of controller cache. Newer kernels have introduced dirty_bytes and \ndirty_background_bytes, which lets you set a hard byte-specified limit \ninstead of relying on some vague integer percentage of system memory. \nThis is better for systems with vast amounts of memory that could cause \nthese kinds of IO spikes. Of course, in order to use those settings, \nyour client will have to either install a custom kernel, or upgrade to \nCentOS 6. Try the 1% first, and it may work out.\n\nSome kernels have a hard 5% limit on dirty_background_ratio, but the one \nincluded in CentOS 5.5 does not. You can even set it to 0, but your IO \nthroughput will take a nosedive, because at that point, it's always \nwriting to disk without any effective caching at all. The reason we set \ndirty_ratio to 10%, is because we want to reduce the total amount of \ntime a synchronous IO block lasts. You can probably take that as low as \n5%, but be careful and test to find your best equilibrium point. 
You \nwant it at a point it rarely blocks, but if it does, it's over quickly.\n\nThere's more info here:\n\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\n(I only went on about this because we had the same problem when we \nincreased from 32GB to 72GB. It was a completely unexpected reaction, \nbut a manageable one.)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 10 Oct 2011 09:04:55 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "\n\n> That's not entirely surprising. The problem with having lots of memory is... \n> that you have lots of memory. The operating system likes to cache, and this \n> includes writes. Normally this isn't a problem, but with 48GB of RAM, the \n> defaults (for CentOS 5.5 in particular) are to use up to 40% of that to cache \n> writes.\n\n\nI don't understand: don't you want postgresql to issue the fsync calls when\nit \"makes sense\" (and configure them), rather than having the OS decide\nwhen it's best to flush to disk? That is: don't you want all the memory to\nbe used for caching, unless postgresql says otherwise (calling fsync), instead\nof \"as soon as 1% of memory is used\"?\n\n",
"msg_date": "Mon, 10 Oct 2011 16:14:35 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/10/2011 10:14 AM, Leonardo Francalanci wrote:\n\n> I don't understand: don't you want postgresql to issue the fsync\n> calls when it \"makes sense\" (and configure them), rather than having\n> the OS decide when it's best to flush to disk? That is: don't you\n> want all the memory to be used for caching, unless postgresql says\n> otherwise (calling fsync), instead of \"as soon as 1% of memory is\n> used\"?\n\nYou'd think that, which is why this bites so many people. That's not \nquite how it works in practice, though. OS cache is a lot more than \naltered database and WAL files, which do get fsync'd frequently. Beyond \nthat, you need to worry about what happens *between* fsync calls.\n\nOn a highly loaded database, or even just a database experiencing heavy \nwrite volume due to some kind of ETL process, your amount of dirty \nmemory may increase much more quickly than you expect. For example, say \nyour checkpoint_timeout setting is the default of five minutes. An ETL \nprocess runs that loads 2GB of data into a table, and you're archiving \ntransaction logs. So you now have three possible write vectors, not \nincluding temp tables and what not. And that's only for that connection; \nthis gets more complicated if you have other OLTP connections on the \nsame DB.\n\nSo your memory is now flooded with 2-6GB of data, and that's easy for \nmemory to handle, and it can do so quickly. With 48GB of RAM, that's \nwell within caching range, so the OS never writes anything until the \nfsync call. Then the database makes the fsync call, and suddenly the OS \nwants to flush 2-6GB of data straight to disk. Without that background \ntrickle, you now have a flood that only the highest-end disk controller \nor a backing-store full of SSDs or PCIe NVRAM could ever hope to absorb.\n\nThat write flood massively degrades your read IOPS, and degrades future \nwrites until it's done flushing, so all of your disk IO is saturated, \nfurther worsening the situation. Now you're getting closer and closer to \nyour dirty_ratio setting, at which point the OS will effectively stop \nresponding to anything, so it can finally finish flushing everything to \ndisk. This can take a couple minutes, but it's not uncommon for these IO \nstorms to last over half an hour depending on the quality of the \ndisks/controller in question. During this time, system load is climbing \nprecipitously, and clients are getting query timeouts.\n\nAdding more memory can actually make your system performance worse if \nyou don't equally increase the capability of your RAID/SAN/whatever to \ncompensate for increased size of write chunks.\n\nThis is counter-intuitive, but completely borne out by tests. The kernel \ndevelopers agree, or we wouldn't have dirty_bytes, or \ndirty_background_bytes, and they wouldn't have changed the defaults to \n5% and 10% instead of 10% and 40%. It's just one of those things nobody \nexpected until machines with vast amounts of RAM started becoming common.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 10 Oct 2011 10:39:08 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/10/2011 10:04 AM, Shaun Thomas wrote:\n> The problem with having lots of memory is... that you have lots of \n> memory. The operating system likes to cache, and this includes writes. \n> Normally this isn't a problem, but with 48GB of RAM, the defaults (for \n> CentOS 5.5 in particular) are to use up to 40% of that to cache writes.\n\nI make the same sort of tuning changes Shaun suggested on every CentOS 5 \nsystem I come across. That said, you should turn on log_checkpoints in \nyour postgresql.conf and see whether the \"sync=\" numbers are high. That \nwill help prove or disprove that the slowdown you're seeing is from too \nmuch write caching. You also may be able to improve that by adjusting \ncheckpoint_segments/checkpoint_timeout, or *decreasing* shared_buffers. \nMore about this at \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nThere are some other possibilities, too, like that memory addition can \nactually causing average memory speed to drop as Kevin mentioned. I \nalways benchmark with stream-scaling: \nhttps://github.com/gregs1104/stream-scaling before and after a RAM size \nchange, to see whether things are still as fast or not. It's hard to do \nthat in the position you're in now though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 10 Oct 2011 12:07:34 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "> Then the \n\n> database makes the fsync call, and suddenly the OS wants to flush 2-6GB of data \n> straight to disk. Without that background trickle, you now have a flood that \n> only the highest-end disk controller or a backing-store full of SSDs or PCIe \n> NVRAM could ever hope to absorb.\n\n\nIsn't checkpoint_completion_target supposed to deal exactly with that problem?\n\nPlus: if 2-6GB is too much, why not decrease checkpoint_segments? Or\ncheckpoint_timeout?\n\n> The kernel \n> developers agree, or we wouldn't have dirty_bytes, or \n> dirty_background_bytes, and they wouldn't have changed the defaults to 5% \n> and 10% instead of 10% and 40%. \n\n\nI'm not saying that those kernel parameters are \"useless\"; I'm saying they are used\nin the same way as the checkpoint_segments, checkpoint_timeout and\ncheckpoint_completion_target are used by postgresql; and on a postgresql-only system\nI would rather have postgresql look after the fsync calls, not the OS.\n",
"msg_date": "Mon, 10 Oct 2011 17:14:37 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "Em 10-10-2011 11:04, Shaun Thomas wrote:\n> That's not entirely surprising. The problem with having lots of memory\n> is... that you have lots of memory. The operating system likes to cache,\n> and this includes writes. Normally this isn't a problem, but with 48GB\n> of RAM, the defaults (for CentOS 5.5 in particular) are to use up to 40%\n> of that to cache writes.\n\nHi Shawn and all,\n\nAfter change the parameters in sysctl.conf, during some time I see that \nload average downs. But the system loads grow again.\n\nDirty memory in meminfo is about 150MB and Whriteback is mostly 0 kB.\n\nI drop checkpoint_timeout to 1min and turn on log_checkpoint:\n\n<2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 buffers \n(1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled; \nwrite=29.862 s, sync=28.466 s, total=58.651 s\n<2011-10-10 14:18:50 BRT >LOG: checkpoint starting: time\n<2011-10-10 14:19:40 BRT >LOG: checkpoint complete: wrote 6415 buffers \n(1.0%); 0 transaction log file(s) added, 0 removed, 1 recycled; \nwrite=29.981 s, sync=19.960 s, total=50.111 s\n<2011-10-10 14:19:50 BRT >LOG: checkpoint starting: time\n<2011-10-10 14:20:45 BRT >LOG: checkpoint complete: wrote 6903 buffers \n(1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled; \nwrite=29.653 s, sync=25.504 s, total=55.477 s\n<2011-10-10 14:20:50 BRT >LOG: checkpoint starting: time\n<2011-10-10 14:21:45 BRT >LOG: checkpoint complete: wrote 7231 buffers \n(1.1%); 0 transaction log file(s) added, 0 removed, 2 recycled; \nwrite=29.911 s, sync=24.899 s, total=55.037 s\n<2011-10-10 14:21:50 BRT >LOG: checkpoint starting: time\n<2011-10-10 14:22:45 BRT >LOG: checkpoint complete: wrote 6569 buffers \n(1.0%); 0 transaction log file(s) added, 0 removed, 1 recycled; \nwrite=29.947 s, sync=25.303 s, total=55.342 s\n<2011-10-10 14:22:50 BRT >LOG: checkpoint starting: time\n<2011-10-10 14:23:44 BRT >LOG: checkpoint complete: wrote 5711 buffers \n(0.9%); 0 transaction log file(s) added, 0 removed, 1 recycled; \nwrite=30.036 s, sync=24.299 s, total=54.507 s\n<2011-10-10 14:23:50 BRT >LOG: checkpoint starting: time\n<2011-10-10 14:24:50 BRT >LOG: checkpoint complete: wrote 6744 buffers \n(1.0%); 0 transaction log file(s) added, 0 removed, 2 recycled; \nwrite=29.946 s, sync=29.792 s, total=60.223 s\n<2011-10-10 14:24:50 BRT >LOG: checkpoint starting: time\n\n[root@servernew data]# vmstat 1 30 -w\nprocs -------------------memory------------------ ---swap-- -----io---- \n--system-- -----cpu-------\n r b swpd free buff cache si so bi bo \n in cs us sy id wa st\n22 0 2696 8290280 117852 38431540 0 0 328 59 \n 9 17 17 3 79 1 0\n34 0 2696 8289288 117852 38432268 0 0 8 2757 \n2502 4148 80 20 0 0 0\n39 1 2696 8286128 117852 38432348 0 0 24 622 \n2449 4008 80 20 0 0 0\n41 0 2696 8291100 117852 38433792 0 0 64 553 \n2487 3419 83 17 0 0 0\n42 1 2696 8293596 117852 38434556 0 0 232 776 \n2372 2779 83 17 0 0 0\n44 1 2696 8291984 117852 38435252 0 0 56 408 \n2388 3012 82 18 0 0 0\n26 0 2696 8289884 117856 38435924 0 0 64 698 \n2486 3283 83 17 0 0 0\n31 0 2696 8286788 117856 38437052 0 0 88 664 \n2452 3385 82 18 0 0 0\n42 0 2696 8284500 117868 38437516 0 0 176 804 \n2492 3876 83 17 0 0 0\n44 0 2696 8281392 117868 38438860 0 0 24 504 \n2338 2916 80 20 0 0 0\n44 0 2696 8278540 117868 38439152 0 0 32 568 \n2337 2937 83 17 0 0 0\n45 0 2696 8280280 117868 38440348 0 0 72 402 \n2492 3635 84 16 0 0 0\n35 2 2696 8279928 117868 38440388 0 0 184 600 \n2492 3835 84 16 0 0 0\n41 0 2696 8275948 117872 38441712 0 0 136 620 \n2624 
4187 79 21 0 0 0\n37 0 2696 8274392 117872 38442372 0 0 24 640 \n2492 3824 84 16 0 0 0\n40 0 2696 8268548 117872 38443120 0 0 0 624 \n2421 3584 81 19 0 0 0\n32 0 2696 8268308 117872 38443652 0 0 16 328 \n2384 3767 81 19 0 0 0\n38 0 2696 8281820 117872 38427472 0 0 72 344 \n2505 3810 81 19 0 0 0\n41 0 2696 8279776 117872 38427976 0 0 16 220 \n2496 3428 84 16 0 0 0\n27 0 2696 8283252 117872 38428508 0 0 112 312 \n2563 4279 81 19 0 0 0\n36 0 2696 8280332 117872 38429288 0 0 48 544 \n2626 4406 80 20 0 0 0\n30 0 2696 8274372 117872 38429372 0 0 24 472 \n2442 3646 80 19 0 0 0\n38 0 2696 8272144 117872 38429956 0 0 152 256 \n2465 4039 83 16 0 0 0\n41 2 2696 8266496 117872 38430324 0 0 56 304 \n2414 3206 82 18 0 0 0\n32 0 2696 8267188 117872 38431068 0 0 64 248 \n2540 4211 78 22 0 0 0\n37 0 2696 8278876 117872 38431324 0 0 56 264 \n2547 4523 81 19 0 0 0\n43 1 2696 8277460 117872 38431588 0 0 40 8627 \n2695 4143 82 18 0 0 0\n41 0 2696 8272556 117872 38431716 0 0 40 216 \n2495 3744 79 21 0 0 0\n40 1 2696 8267292 117876 38433204 0 0 192 544 \n2586 4437 77 23 0 0 0\n34 1 2696 8263204 117876 38433628 0 0 320 929 \n2841 5166 78 22 0 0 0\n\nNotice that we have no idle % in cpu column.\n\n[root@servernew data]# uptime\n 14:26:47 up 2 days, 3:26, 4 users, load average: 48.61, 46.12, 40.47\n\nMy client wants to remove the extra memory... :/\n\nBest regards.\n",
"msg_date": "Mon, 10 Oct 2011 14:31:40 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "alexandre - aldeia digital <[email protected]> wrote:\n \n> Notice that we have no idle % in cpu column.\n \nSo they're making full use of all the CPUs they paid for. That in\nitself isn't a problem. Unfortunately you haven't given us nearly\nenough information to know whether there is indeed a problem, or if\nso, what. What was throughput before? What is it now? How has\nlatency been affected? And all those unanswered questions from my\nfirst email....\n \nThe problem *might* be something along the lines of most of the\ndiscussion on the thread. It might not be. I just don't know yet,\nmyself.\n \n> 14:26:47 up 2 days, 3:26, 4 users, load average: 48.61,\n> 46.12, 40.47\n \nThis has me wondering again about your core count and your user\nconnections.\n \n> My client wants to remove the extra memory... :/\n \nMaybe we should identify the problem. It might be that a connection\npooler is the solution. On the other hand, if critical production\napplications are suffering, it might make sense to take this out of\nproduction and figure out a safer place to test things and sort this\nout.\n \n-Kevin\n",
"msg_date": "Mon, 10 Oct 2011 12:46:44 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "Em 10-10-2011 14:46, Kevin Grittner escreveu:\n> alexandre - aldeia digital<[email protected]> wrote:\n>\n>> Notice that we have no idle % in cpu column.\n>\n> So they're making full use of all the CPUs they paid for. That in\n> itself isn't a problem. Unfortunately you haven't given us nearly\n> enough information to know whether there is indeed a problem, or if\n> so, what. What was throughput before? What is it now? How has\n> latency been affected? And all those unanswered questions from my\n> first email....\n>\n> The problem *might* be something along the lines of most of the\n> discussion on the thread. It might not be. I just don't know yet,\n> myself.\n\n From the point of view of the client, the question is simple: until the \nlast friday (with 16 GB of RAM), the load average of server rarely \nsurpasses 4. Nothing change in normal database use.\n\nTonight, we will remove the extra memory. :/\n\nBest regards.\n\n\n\n",
"msg_date": "Mon, 10 Oct 2011 15:54:03 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "alexandre - aldeia digital <[email protected]> wrote:\n \n> From the point of view of the client, the question is simple:\n> until the last friday (with 16 GB of RAM), the load average of\n> server rarely surpasses 4. Nothing change in normal database use.\n \nReally? The application still performs as well or better, and it's\nthe load average they care about? How odd.\n \nIf they were happy with performance before the RAM was added, why\ndid they add it? If they weren't happy with performance, what led\nthem to believe that adding more RAM would help? If there's a\nperformance problem, there's generally one bottleneck which is the\nlimit, with one set of symptoms. When you remove that bottleneck\nand things get faster, you may well have a new bottleneck with\ndifferent symptoms. (These symptoms might include high load average\nor CPU usage, for example.) You then figure out what is causing\n*that* bottleneck, and you can make things yet faster.\n \nIn this whole thread you have yet to give enough information to know\nfor sure whether there was or is any performance problem, or what\nthe actual bottleneck is. I think you'll find that people happy to\nhelp identify the problem and suggest solutions if you provide that\ninformation.\n \n-Kevin\n",
"msg_date": "Mon, 10 Oct 2011 14:39:28 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/10/2011 12:31 PM, alexandre - aldeia digital wrote:\n\n> <2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 buffers\n> (1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled;\n> write=29.862 s, sync=28.466 s, total=58.651 s\n\n28.466s sync time?! That's horrifying. At this point, I want to say the \nincrease in effective_cache_size or shared_buffers triggered the planner \nto change one of your plans significantly enough it's doing a ton more \ndisk IO and starving out your writes. Except you said you changed it \nback and it's still misbehaving.\n\nThis also reminds me somewhat of an issue Greg mentioned a while back \nwith xlog storms in 9.0 databases. I can't recall how he usually \"fixed\" \nthese, though.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 10 Oct 2011 15:25:54 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "Em 10-10-2011 16:39, Kevin Grittner escreveu:\n> alexandre - aldeia digital<[email protected]> wrote:\n>\n>> From the point of view of the client, the question is simple:\n>> until the last friday (with 16 GB of RAM), the load average of\n>> server rarely surpasses 4. Nothing change in normal database use.\n>\n> Really? The application still performs as well or better, and it's\n> the load average they care about? How odd.\n>\n> If they were happy with performance before the RAM was added, why\n> did they add it? If they weren't happy with performance, what led\n> them to believe that adding more RAM would help? If there's a\n> performance problem, there's generally one bottleneck which is the\n> limit, with one set of symptoms. When you remove that bottleneck\n> and things get faster, you may well have a new bottleneck with\n> different symptoms. (These symptoms might include high load average\n> or CPU usage, for example.) You then figure out what is causing\n> *that* bottleneck, and you can make things yet faster.\n\nCalm down: if the client plans to add , for example, another database\nin his server in a couple of weeks, he must only upgrade when this new \ndatabase come to life and add another point of doubt ??? IMHO, the \nreasons to add MEMORY does not matters in this case. I came to the list \nto see if anyone else has experienced the same problem, that not \nnecessarily is related with Postgres. Shaun and Greg apparently had the \nsame the same problems in CentOS and the information provided by they \nhelped too much...\n\n\n\n",
"msg_date": "Mon, 10 Oct 2011 17:34:45 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "alexandre - aldeia digital <[email protected]> wrote:\n \n> I came to the list to see if anyone else has experienced the same\n> problem\n \nA high load average or low idle CPU isn't a problem, it's a\npotentially useful bit of information in diagnosing a problem. I\nwas hoping to hear what the actual problem was, since I've had a few\nproblems in high RAM situations, but the solutions depend on what\nthe actual problems are. I don't suppose you saw periods where\nqueries which normally run very quickly (say in a millisecond or\nless) were suddenly taking tens of seconds to run -- \"stalling\" and\nthen returning to normal? Because if I knew you were having a\nproblem like *that* I might have been able to help. Same for other\nset of symptoms; it's just the suggestions would have been\ndifferent. And the suggestions would have depended on what your\nsystem looked like besides the RAM.\n \nIf you're satisfied with how things are running with less RAM,\nthough, there's no need.\n \n-Kevin\n",
"msg_date": "Mon, 10 Oct 2011 15:52:05 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On Mon, Oct 10, 2011 at 1:52 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> alexandre - aldeia digital <[email protected]> wrote:\n>\n> > I came to the list to see if anyone else has experienced the same\n> > problem\n>\n> A high load average or low idle CPU isn't a problem, it's a\n> potentially useful bit of information in diagnosing a problem. I\n> was hoping to hear what the actual problem was, since I've had a few\n> problems in high RAM situations, but the solutions depend on what\n> the actual problems are. I don't suppose you saw periods where\n> queries which normally run very quickly (say in a millisecond or\n> less) were suddenly taking tens of seconds to run -- \"stalling\" and\n> then returning to normal? Because if I knew you were having a\n> problem like *that* I might have been able to help. Same for other\n> set of symptoms; it's just the suggestions would have been\n> different. And the suggestions would have depended on what your\n> system looked like besides the RAM.\n>\n> If you're satisfied with how things are running with less RAM,\n> though, there's no need.\n>\n\nThe original question doesn't actually say that performance has gone down,\nonly that cpu utilization has gone up. Presumably, with lots more RAM, it is\nblocking on I/O a lot less, so it isn't necessarily surprising that CPU\nutilization has gone up. The only problem would be if db performance has\ngotten worse. Maybe I missed a message where that was covered? I don't see\nit in the original query to the list.\n\nOn Mon, Oct 10, 2011 at 1:52 PM, Kevin Grittner <[email protected]> wrote:\nalexandre - aldeia digital <[email protected]> wrote:\n\n> I came to the list to see if anyone else has experienced the same\n> problem\n\nA high load average or low idle CPU isn't a problem, it's a\npotentially useful bit of information in diagnosing a problem. I\nwas hoping to hear what the actual problem was, since I've had a few\nproblems in high RAM situations, but the solutions depend on what\nthe actual problems are. I don't suppose you saw periods where\nqueries which normally run very quickly (say in a millisecond or\nless) were suddenly taking tens of seconds to run -- \"stalling\" and\nthen returning to normal? Because if I knew you were having a\nproblem like *that* I might have been able to help. Same for other\nset of symptoms; it's just the suggestions would have been\ndifferent. And the suggestions would have depended on what your\nsystem looked like besides the RAM.\n\nIf you're satisfied with how things are running with less RAM,\nthough, there's no need.The original question doesn't actually say that performance has gone down, only that cpu utilization has gone up. Presumably, with lots more RAM, it is blocking on I/O a lot less, so it isn't necessarily surprising that CPU utilization has gone up. The only problem would be if db performance has gotten worse. Maybe I missed a message where that was covered? I don't see it in the original query to the list.",
"msg_date": "Mon, 10 Oct 2011 15:02:16 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler\n<[email protected]> wrote:\n> The original question doesn't actually say that performance has gone down,\n> only that cpu utilization has gone up. Presumably, with lots more RAM, it is\n> blocking on I/O a lot less, so it isn't necessarily surprising that CPU\n> utilization has gone up. The only problem would be if db performance has\n> gotten worse. Maybe I missed a message where that was covered? I don't see\n> it in the original query to the list.\n\nLoad average (which is presumably the metric in question) includes\nboth processes using the CPU and processes waiting for I/O.\nSo it *would* be strange for load average to go up like that, if\ndatabase configuration remains the same (ie: equal query plans)\n",
"msg_date": "Tue, 11 Oct 2011 04:19:57 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:\n> I drop checkpoint_timeout to 1min and turn on log_checkpoint:\n>\n> <2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 \n> buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1 \n> recycled; write=29.862 s, sync=28.466 s, total=58.651 s\n> <2011-10-10 14:18:50 BRT >LOG: checkpoint starting: time\n\nSync times that go to 20 seconds suggest there's a serious problem here \nsomewhere. But it would have been better to do these changes one at a \ntime: turn on log_checkpoints, collect some data, then try lowering \ncheckpoint_timeout. A checkpoint every minute is normally a bad idea, \nso that change may have caused this other issue.\n\n> procs -------------------memory------------------ ---swap-- \n> -----io---- --system-- -----cpu-------\n> r b swpd free buff cache si so bi \n> bo in cs us sy id wa st\n> 34 0 2696 8289288 117852 38432268 0 0 8 \n> 2757 2502 4148 80 20 0 0 0\n> 39 1 2696 8286128 117852 38432348 0 0 24 \n> 622 2449 4008 80 20 0 0 0\n> 41 0 2696 8291100 117852 38433792 0 0 64 \n> 553 2487 3419 83 17 0 0 0\n> ...Notice that we have no idle % in cpu column.\n\nYou also have no waiting for I/O! This is just plain strange; \ncheckpoint sync time spikes with no I/O waits I've never seen before. \nSystem time going to 20% isn't normal either.\n\nI don't know what's going on with this server. What I would normally do \nin this case is use \"top -c\" to see what processes are taking up so much \nruntime, and then look at what they are doing with pg_stat_activity. \nYou might see the slow processes in the log files by setting \nlog_min_duration_statement instead. I'd be suspicious of Linux given \nyour situation though.\n\nI wonder if increasing the memory is a coincidence, and the real cause \nis something related to the fact that you had to reboot to install it. \nYou might have switched to a newer kernel in the process too, for \nexample; I'd have to put a kernel bug on the list of suspects with this \nunusual vmstat output.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 11 Oct 2011 02:42:42 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/10/2011 12:14 PM, Leonardo Francalanci wrote:\n>\n>> database makes the fsync call, and suddenly the OS wants to flush 2-6GB of data\n>> straight to disk. Without that background trickle, you now have a flood that\n>> only the highest-end disk controller or a backing-store full of SSDs or PCIe\n>> NVRAM could ever hope to absorb.\n>> \n>\n> Isn't checkpoint_completion_target supposed to deal exactly with that problem?\n> \n\ncheckpoint_completion_targets spreads out the writes to disk. \nPostgreSQL doesn't make any attempt yet to spread out the sync calls. \nOn a busy server, what can happen is that the whole OS write cache fills \nwith dirty data--none of which is written out to disk because of the \nhigh kernel threshold--and then it all slams onto disk fast once the \ncheckpoint starts executing sync calls. Lowering the size of the Linux \nwrite cache helps with that a lot, but can't quite eliminate the problem.\n\n> Plus: if 2-6GB is too much, why not decrease checkpoint_segments? Or\n> checkpoint_timeout?\n> \n\nMaking checkpoints really frequent increases total disk I/O, both to the \ndatabase and to the WAL, significantly. You don't want to do that if \nthere's another way to achieve the same goal without those costs, which \nis what some kernel tuning can do here. Just need to be careful not to \ngo too far; some write caching at the OS level helps a lot, too.\n\n\n> I'm not saying that those kernel parameters are \"useless\"; I'm saying \n> they are used\n> in the same way as the checkpoint_segments, checkpoint_timeout and\n> checkpoint_completion_target are used by postgresql; and on a postgresql-only system\n> I would rather have postgresql look after the fsync calls, not the OS.\n> \n\nExcept that PostgreSQL doesn't look after the fsync calls yet. I wrote \na patch for 9.1 that spread out the sync calls, similarly to how the \nwrites are spread out now. I wasn't able to prove an improvement \nsufficient to commit the result. In the Linux case, the OS has more \ninformation to work with about how to schedule I/O efficiently given how \nthe hardware is acting, and it's not possible for PostgreSQL to know all \nthat--not without duplicating a large portion of the kernel development \nwork at least. Right now, relying the kernel means that any \nimprovements there magically apply to any PostgreSQL version. So far \nthe results there have been beating out improvements made to the \ndatabase fast enough that it's hard to innovate in this area within \nPostgres.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 11 Oct 2011 02:50:46 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "> checkpoint_completion_targets spreads out the writes to disk. PostgreSQL \n\n> doesn't make any attempt yet to spread out the sync calls. On a busy \n> server, what can happen is that the whole OS write cache fills with dirty \n> data--none of which is written out to disk because of the high kernel \n> threshold--and then it all slams onto disk fast once the checkpoint starts \n> executing sync calls. Lowering the size of the Linux write cache helps with \n> that a lot, but can't quite eliminate the problem.\n\n\nOk, I understand now\n\n> Except that PostgreSQL doesn't look after the fsync calls yet. I wrote a \n\n> patch for 9.1 that spread out the sync calls, similarly to how the writes are \n> spread out now. I wasn't able to prove an improvement sufficient to commit \n> the result. In the Linux case, the OS has more information to work with about \n> how to schedule I/O efficiently given how the hardware is acting, and it's \n> not possible for PostgreSQL to know all that--not without duplicating a large \n> portion of the kernel development work at least. Right now, relying the kernel \n> means that any improvements there magically apply to any PostgreSQL version. So \n> far the results there have been beating out improvements made to the database \n> fast enough that it's hard to innovate in this area within Postgres.\n\n\nOk; thank you very much for the explanation.\n\nIn fact, shouldn't those things be explained in the \"WAL Configuration\" section\nof the manual? It looks as important as configuring Postgresql itself...\nAnd: that applies to Linux. What about other OS, such as Solaris and FreeBSD?\n",
"msg_date": "Tue, 11 Oct 2011 09:57:57 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 11/10/2011 00:02, Samuel Gendler wrote:\n\n> The original question doesn't actually say that performance has gone down,\n> only that cpu utilization has gone up. Presumably, with lots more RAM, it is\n> blocking on I/O a lot less, so it isn't necessarily surprising that CPU\n> utilization has gone up. \n\nIt's Linux - it counts IO wait in the load average. Again there are too\nlittle details to be sure of anything, but it is possible that the IO\nrate didn't go down.",
"msg_date": "Tue, 11 Oct 2011 12:20:45 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "Em 10-10-2011 23:19, Claudio Freire escreveu:\n> On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler\n> <[email protected]> wrote:\n>> The original question doesn't actually say that performance has gone down,\n>> only that cpu utilization has gone up. Presumably, with lots more RAM, it is\n>> blocking on I/O a lot less, so it isn't necessarily surprising that CPU\n>> utilization has gone up. The only problem would be if db performance has\n>> gotten worse. Maybe I missed a message where that was covered? I don't see\n>> it in the original query to the list.\n>\n> Load average (which is presumably the metric in question) includes\n> both processes using the CPU and processes waiting for I/O.\n> So it *would* be strange for load average to go up like that, if\n> database configuration remains the same (ie: equal query plans)\n\nYep, that's the point. Iostat and vmstat reports a very low use of the \ndisks (lower than before the changes are made - perhaps because the cache).\nNothing changed in database itself.\n",
"msg_date": "Tue, 11 Oct 2011 08:54:42 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "Em 11-10-2011 03:42, Greg Smith escreveu:\n> On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:\n>> I drop checkpoint_timeout to 1min and turn on log_checkpoint:\n>>\n>> <2011-10-10 14:18:48 BRT >LOG: checkpoint complete: wrote 6885 buffers\n>> (1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled;\n>> write=29.862 s, sync=28.466 s, total=58.651 s\n>> <2011-10-10 14:18:50 BRT >LOG: checkpoint starting: time\n>\n> Sync times that go to 20 seconds suggest there's a serious problem here\n> somewhere. But it would have been better to do these changes one at a\n> time: turn on log_checkpoints, collect some data, then try lowering\n> checkpoint_timeout. A checkpoint every minute is normally a bad idea, so\n> that change may have caused this other issue.\n\nI returned to 5 minutes. Thanks.\n\n>> procs -------------------memory------------------ ---swap--\n>> -----io---- --system-- -----cpu-------..\n>> r b swpd free buff cache si so bi bo in cs us sy id wa st\n>> 34 0 2696 8289288 117852 38432268 0 0 8 2757 2502 4148 80 20 0 0 0\n>> 39 1 2696 8286128 117852 38432348 0 0 24 622 2449 4008 80 20 0 0 0\n>> 41 0 2696 8291100 117852 38433792 0 0 64 553 2487 3419 83 17 0 0 0\n>> ...Notice that we have no idle % in cpu column.\n>\n> You also have no waiting for I/O! This is just plain strange; checkpoint\n> sync time spikes with no I/O waits I've never seen before. System time\n> going to 20% isn't normal either.\n\nHave I anything to detect which proccess was causing the system time \nincreasing ?\n\n\n> I don't know what's going on with this server. What I would normally do\n> in this case is use \"top -c\" to see what processes are taking up so much\n> runtime, and then look at what they are doing with pg_stat_activity. You\n> might see the slow processes in the log files by setting\n> log_min_duration_statement instead. I'd be suspicious of Linux given\n> your situation though.\n\nLast night, I put another disk in the server and install Debian 6, \npreserving the same structure, only poiting the olds data in the new \npostgresql 9.0.5 compilation. Today, the problem persists.\n\nAnd for all that asks: the performance is poor, unusable.\n\n> I wonder if increasing the memory is a coincidence, and the real cause\n> is something related to the fact that you had to reboot to install it.\n> You might have switched to a newer kernel in the process too, for\n> example; I'd have to put a kernel bug on the list of suspects with this\n> unusual vmstat output.\n\nI dont think that is a coincidence, because this machine was rebooted\nother times without problem.\n\nBest regards.\n\n\n",
"msg_date": "Tue, 11 Oct 2011 09:14:50 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On Mon, Oct 10, 2011 at 3:26 PM, alexandre - aldeia digital\n<[email protected]> wrote:\n> Hi,\n>\n> Yesterday, a customer increased the server memory from 16GB to 48GB.\n\nA shot in the dark... what is the content of /proc/mtrr?\n\nLuca\n",
"msg_date": "Tue, 11 Oct 2011 14:50:55 +0200",
"msg_from": "Luca Tettamanti <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "On 10/11/2011 04:57 AM, Leonardo Francalanci wrote:\n> In fact, shouldn't those things be explained in the \"WAL \n> Configuration\" section\n> of the manual? It looks as important as configuring Postgresql itself...\n> And: that applies to Linux. What about other OS, such as Solaris and FreeBSD?\n> \n\nThere's at least 10 pages of stuff about this floating around my book, \nincluding notes on the similar Solaris and FreeBSD parameters to tune.\n\nAs for why that's not in the manual instead, two reasons. The \nPostgreSQL manual doesn't get too deep into things like operating system \nadjustments for performance. It's big enough without that, and any \ninformation that would be added is likely to get out of sync with new OS \nreleases. This project doesn't want to take on the job of documenting \nevery twist there. My book is less than a year old, and there's already \nsome material on this topic that's turning obsolete due to Linux kernel \nchanges. The PostgreSQL manual is shooting to still be relevant for \nmuch longer than that.\n\nSecond, making an addition to the manual is the hardest possible way to \ndocument things. The docbook toolchain used is picky and fragile to \nmalformed additions. And you can't get anything added there without \npassing through a series of people who will offer feedback of some sort; \nusually valuable, but still time consuming to process. Things need to \nasked quite frequently before it's worth the trouble. I did a few blog \nentries and mailing list posts about this earlier this year, and that \nwas as much documentation as I could justify at the time. If there's a \nuser-visible behavior changes here, that's the point where an update to \nthe manual would be in order.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 11 Oct 2011 11:32:48 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load"
},
{
"msg_contents": "Hi,\n\nAbout 3 hours ago, the client contacted the Dell and they suggested 2 \nthings:\n\n1) Update the baseboard firmware (the only component that haven't \nupdated yesterday).\n2) Change all memory chips to new others, instead of maintain the old \n(16 GB) + new (32 GB).\n\nAfter do this, until now, the load average does not surpasses 4 points.\n\nThe I/O wait returned to hit some values and the system % down to an \naverage of 5%.\n\nUnfortunatly, we will never discover if the problem resides in baseboard \nfirmware or in RAM, but util now the problem was solved.\n\nThanks all for help !\n\nBest regards\n\n",
"msg_date": "Tue, 11 Oct 2011 15:02:26 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load [solved]"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital\n<[email protected]> wrote:\n> 2) Change all memory chips to new others, instead of maintain the old (16\n> GB) + new (32 GB).\n\nOf course, mixing disables double/triple/whatuple channel, and makes\nyour memory subsystem correspondingly slower.\nBy a lot.\n",
"msg_date": "Tue, 11 Oct 2011 15:05:54 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load [solved]"
},
{
"msg_contents": "Em 11-10-2011 15:05, Claudio Freire escreveu:\n> On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital\n> <[email protected]> wrote:\n>> 2) Change all memory chips to new others, instead of maintain the old (16\n>> GB) + new (32 GB).\n>\n> Of course, mixing disables double/triple/whatuple channel, and makes\n> your memory subsystem correspondingly slower.\n> By a lot.\n\nThe initial change (add more memory) are maded by a technical person of \nDell and him told us that he use the same especification in memory chips.\nBut, you know how \"it works\"... ;)\n\n\n\n",
"msg_date": "Tue, 11 Oct 2011 17:02:29 -0300",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding more memory = hugh cpu load [solved]"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 5:02 PM, alexandre - aldeia digital\n<[email protected]> wrote:\n> The initial change (add more memory) are maded by a technical person of Dell\n> and him told us that he use the same especification in memory chips.\n> But, you know how \"it works\"... ;)\n\nYeah, but different size == different specs\n\nI do know how it works ;-)\n",
"msg_date": "Tue, 11 Oct 2011 17:08:05 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load [solved]"
},
{
"msg_contents": "On 11-10-2011 20:05 Claudio Freire wrote:\n> On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital\n> <[email protected]> wrote:\n>> 2) Change all memory chips to new others, instead of maintain the old (16\n>> GB) + new (32 GB).\n>\n> Of course, mixing disables double/triple/whatuple channel, and makes\n> your memory subsystem correspondingly slower.\n> By a lot.\n>\n\nThat really depends on the chipset/server. The current intel E56xx-chips \n(and previous E55xx) basically just expect groups of 3 modules per \nprocessor, but it doesn't really matter whether that's 3x2+3x4 or 6x4 in \nterms of performance (unless the linuxkernel does some weirdness of \ncourse). It at least won't disable triple-channel, just because you \nadded different size modules. Only when you get to too many 'ranks', \nyou'll see performance degradation. But that's in terms of clock speed, \nnot in disabling triple channel.\n\nBut as said, that all depends on the memory controller in the server's \nmainboard or processors.\n\nBest regards,\n\nArjen\n",
"msg_date": "Tue, 11 Oct 2011 22:33:44 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load [solved]"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 5:33 PM, Arjen van der Meijden\n<[email protected]> wrote:\n> That really depends on the chipset/server. The current intel E56xx-chips\n> (and previous E55xx) basically just expect groups of 3 modules per\n> processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4 in\n> terms of performance (unless the linuxkernel does some weirdness of course).\n> It at least won't disable triple-channel, just because you added different\n> size modules. Only when you get to too many 'ranks', you'll see performance\n> degradation. But that's in terms of clock speed, not in disabling triple\n> channel.\n>\n\nThere are too many caveats to doing that, including how they're\ninstalled (in which slots), the timings, brand, internal structure,\nand whatnot, with all those details not always published, only sitting\nin the module's SPD.\n\nIn essence, mixing is more probable to mess your multichannel\nconfiguration than not.\n\nThat's why matched kits are sold.\n",
"msg_date": "Tue, 11 Oct 2011 17:46:32 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding more memory = hugh cpu load [solved]"
}
] |
[
{
"msg_contents": "\nI have a slow query. I expect that there is either a more intelligent way \nto write it, or that one could make some indexes that would speed it up. \nI've tried various indexes, and am not getting anywhere.\n\nI'd be grateful for any suggestions. Reasonably full details are below.\n\n\nDESCRIPTION\n===========\n\nI am writing a database which stores scores from mass-entry competitions \n(\"challenges\"). Candidates who were the best in their school (schools being \nidentified by their \"centre_number\") receive a special certificate in each \ncompetition.\n\nCurrently the database has data from six competitions, each entered by \nsomething like 200,000 students from about 2,000 schools. I wish to produce \na view showing best-in-school performances.\n\nI have made two attempts (both immediately after running VACUUM ANALYZE), \nand both are surprisingly slow.\n\n\nTHE QUERIES\n===========\n\nI'm interested in running something like\n\nSELECT * FROM best_in_school_methodN WHERE competition_name = 'X' AND \nacademic_year_beginning = 2010\n\nand the following two variants have been tried:\n\n\nCREATE VIEW best_in_school_method1 AS\n WITH best_scores(competition_name, academic_year_beginning, \ncentre_number, total_score) AS\n (SELECT competition_name, academic_year_beginning, centre_number, \nMAX(total_score) AS total_score FROM challenge_entries GROUP BY \ncompetition_name, academic_year_beginning, centre_number)\n SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, (true) AS best_in_school FROM challenge_entries \nNATURAL JOIN best_scores;\n\nThis is EXPLAIN ANALYZEd here:\n http://explain.depesz.com/s/EiS\n\n\nCREATE VIEW best_in_school_method2 AS\n WITH innertable(competition_name, academic_year_beginning, \ncentre_number, entry_id, total_score, school_max_score) AS\n (SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, MAX(total_score) OVER (PARTITION BY \ncompetition_name, academic_year_beginning, centre_number) AS \ncentre_max_score FROM challenge_entries)\n SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, (true) AS best_in_school FROM innertable WHERE \ncentre_max_score = total_score;\n\nThis one is EXPLAIN ANALYZEd here:\n http://explain.depesz.com/s/6Eh\n\n\nCOMMENT\n=======\n\nIn both cases, unless I've misunderstood, most of the time is taken up by \nsorting all the results for that particular competition. It appears to me \nthat there should be much better ways: the results do not need to be fully \nsorted.\n\nIf I were such an expert, I wouldn't be asking you all though.\n\n\nSCHEMA\n======\n\nI should explain the tables(though probably only the last one is \ninteresting) and the index mentioned by one of the EXPLAINs. 
They can be \nproduced by\n\nCREATE TABLE school_years\n(\n yearname VARCHAR(5) PRIMARY KEY,\n minimum_usual_age_september1 INTERVAL,\n maximum_usual_age_september1 INTERVAL,\n usual_successor VARCHAR(5) REFERENCES school_years\n);\n\nCREATE TABLE challenge_types\n(\n competition_name TEXT PRIMARY KEY,\n too_young_yeargroup VARCHAR(5) REFERENCES school_years\n);\n\nCREATE TABLE challenges\n(\n competition_name TEXT REFERENCES challenge_types,\n academic_year_beginning INTEGER,\n competition_date DATE,\n CONSTRAINT competition_is_in_year CHECK (competition_date BETWEEN (date \n(academic_year_beginning || '.001') + interval '9 months') AND (date \n(academic_year_beginning || '.001') + interval '21 months')),\n CONSTRAINT one_challenge_per_year UNIQUE \n(academic_year_beginning,competition_name),\n PRIMARY KEY (competition_name, academic_year_beginning)\n);\n\nCREATE TABLE challenge_entries\n(\n entry_id SERIAL,\n competition_name TEXT,\n academic_year_beginning INTEGER,\n given_name TEXT,\n surname TEXT,\n centre_number CHAR(6),\n school_year VARCHAR(5),\n date_of_birth DATE,\n uk_educated BOOLEAN,\n uk_passport BOOLEAN,\n sex SEX,\n total_score INTEGER NOT NULL DEFAULT 0,\n PRIMARY KEY (competition_name,academic_year_beginning,entry_id),\n FOREIGN KEY (school_year) REFERENCES school_years,\n FOREIGN KEY (competition_name,academic_year_beginning) REFERENCES \nchallenges );\n\nCREATE INDEX challenge_entries_by_competition_centre_number_and_total_score\n ON challenge_entries\n (competition_name,academic_year_beginning,centre_number,total_score DESC);\n\n\nSOFTWARE AND HARDWARE\n=====================\n\nI'm running \"PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC \ngcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". It's the standard \ninstallation from Debian stable (Squeeze), and I haven't messed around with \nit.\n\nMy Linux kernel is 2.6.32-5-amd64.\n\nI have a desktop PC with a Intel Core i7 CPU and 6GB of RAM, and a single \n640GB Hitachi HDT72106 disk. My root partition is less than 30% full.\n\n",
"msg_date": "10 Oct 2011 22:59:17 +0100",
"msg_from": "James Cranch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rapidly finding maximal rows"
},
{
"msg_contents": "James Cranch <[email protected]> writes:\n> I have a slow query. I expect that there is either a more intelligent way \n> to write it, or that one could make some indexes that would speed it up. \n> I've tried various indexes, and am not getting anywhere.\n> I'd be grateful for any suggestions. Reasonably full details are below.\n\nTwo bits of advice:\n\n1. Avoid unnecessary use of WITH. It acts as an optimization fence,\nwhich you don't want here. In particular, the only way to avoid\nsorting/aggregating over the whole table is for the outer query's WHERE\nconditions on competition_name and academic_year_beginning to get pushed\ndown into the scans on challenge_entries ... and that can't happen if\nthere's a WITH in between.\n\n2. Try increasing work_mem. I think that your first view would work all\nright if it had enough work_mem to go for a HashAgg plan instead of\nsort-and-group, even without pushdown of the outer WHERE. It'd\ndefinitely be faster than what you've got, anyway.\n\nThe other approach with a window function is probably a lost cause.\nPostgres hasn't got a lot of intelligence about optimizing\nwindow-function queries ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Oct 2011 00:25:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rapidly finding maximal rows "
}
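A sketch of Tom's two suggestions applied to the first view (the table and column names come from the schema above; the view name and the work_mem value are illustrative only, not recommendations):

    SET work_mem = '64MB';   -- large enough for the aggregate to hash in memory

    CREATE VIEW best_in_school_method1b AS
      SELECT ce.competition_name, ce.academic_year_beginning, ce.centre_number,
             ce.entry_id, ce.total_score, (true) AS best_in_school
      FROM challenge_entries ce
      JOIN (SELECT competition_name, academic_year_beginning, centre_number,
                   MAX(total_score) AS total_score
            FROM challenge_entries
            GROUP BY competition_name, academic_year_beginning, centre_number) best
        USING (competition_name, academic_year_beginning, centre_number, total_score);

With the WITH clause gone, the planner is free to push the outer WHERE conditions on competition_name and academic_year_beginning down into both sides of the join.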
] |
[
{
"msg_contents": "\nI have a slow query, based on the problem of finding the set of rows which \nare maximal in some sense. I expect that there is either a more intelligent \nway to write it, or that one could make some indexes that would speed it \nup. I've tried various indexes, and am not getting anywhere.\n\nI'd be grateful for any suggestions. Reasonably full details are below.\n\n\nDESCRIPTION\n===========\n\nI am writing a database which stores scores from mass-entry competitions \n(\"challenges\"). Candidates who were the best in their school (schools being \nidentified by their \"centre_number\") receive a special certificate in each \ncompetition.\n\nCurrently the database has data from six competitions, each entered by \nsomething like 200,000 students from about 2,000 schools. I wish to produce \na view showing the students who had best-in-school performances.\n\nI have made two attempts (both immediately after running VACUUM ANALYZE), \nand both are surprisingly slow.\n\n\nTHE QUERIES\n===========\n\nI'm interested in running something like\n\nSELECT * FROM best_in_school_methodN WHERE competition_name = 'X' AND \nacademic_year_beginning = 2010\n\nand the following two variants have been tried:\n\n\nCREATE VIEW best_in_school_method1 AS\n WITH best_scores(competition_name, academic_year_beginning, \ncentre_number, total_score) AS\n (SELECT competition_name, academic_year_beginning, centre_number, \nMAX(total_score) AS total_score FROM challenge_entries GROUP BY \ncompetition_name, academic_year_beginning, centre_number)\n SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, (true) AS best_in_school FROM challenge_entries \nNATURAL JOIN best_scores;\n\nThis is EXPLAIN ANALYZEd here:\n http://explain.depesz.com/s/EiS\n\n\nCREATE VIEW best_in_school_method2 AS\n WITH innertable(competition_name, academic_year_beginning, \ncentre_number, entry_id, total_score, school_max_score) AS\n (SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, MAX(total_score) OVER (PARTITION BY \ncompetition_name, academic_year_beginning, centre_number) AS \ncentre_max_score FROM challenge_entries)\n SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, (true) AS best_in_school FROM innertable WHERE \ncentre_max_score = total_score;\n\nThis one is EXPLAIN ANALYZEd here:\n http://explain.depesz.com/s/6Eh\n\n\nCOMMENT\n=======\n\nIn both cases, unless I've misunderstood, most of the time is taken up by \nsorting all the results for that particular competition. It appears to me \nthat there should be much better ways: the results do not need to be fully \nsorted.\n\nIf I were such an expert, I wouldn't be asking you all though.\n\nBy the way, the choice to SELECT a value of true is so that I can join the \nresults back into the original table to easily produce a best in school \nboolean column.\n\n\nSCHEMA\n======\n\nI should explain the tables(though probably only the last one is \ninteresting) and the index mentioned by one of the EXPLAINs. 
They can be \nproduced by\n\nCREATE TABLE school_years\n(\n yearname VARCHAR(5) PRIMARY KEY,\n minimum_usual_age_september1 INTERVAL,\n maximum_usual_age_september1 INTERVAL,\n usual_successor VARCHAR(5) REFERENCES school_years\n);\n\nCREATE TABLE challenge_types\n(\n competition_name TEXT PRIMARY KEY,\n too_young_yeargroup VARCHAR(5) REFERENCES school_years\n);\n\nCREATE TABLE challenges\n(\n competition_name TEXT REFERENCES challenge_types,\n academic_year_beginning INTEGER,\n competition_date DATE,\n CONSTRAINT competition_is_in_year CHECK (competition_date BETWEEN (date \n(academic_year_beginning || '.001') + interval '9 months') AND (date \n(academic_year_beginning || '.001') + interval '21 months')),\n CONSTRAINT one_challenge_per_year UNIQUE \n(academic_year_beginning,competition_name),\n PRIMARY KEY (competition_name, academic_year_beginning)\n);\n\nCREATE TABLE challenge_entries\n(\n entry_id SERIAL,\n competition_name TEXT,\n academic_year_beginning INTEGER,\n given_name TEXT,\n surname TEXT,\n centre_number CHAR(6),\n school_year VARCHAR(5),\n date_of_birth DATE,\n uk_educated BOOLEAN,\n uk_passport BOOLEAN,\n sex SEX,\n total_score INTEGER NOT NULL DEFAULT 0,\n PRIMARY KEY (competition_name,academic_year_beginning,entry_id),\n FOREIGN KEY (school_year) REFERENCES school_years,\n FOREIGN KEY (competition_name,academic_year_beginning) REFERENCES \nchallenges );\n\nCREATE INDEX challenge_entries_by_competition_centre_number_and_total_score\n ON challenge_entries\n (competition_name,academic_year_beginning,centre_number,total_score DESC);\n\n\nSOFTWARE AND HARDWARE\n=====================\n\nI'm running \"PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC \ngcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". It's the standard \ninstallation from Debian stable (Squeeze), and I haven't messed around with \nit.\n\nMy Linux kernel is 2.6.32-5-amd64.\n\nI have a desktop PC with a Intel Core i7 CPU and 6GB of RAM, and a single \n640GB Hitachi HDT72106 disk. My root partition is less than 30% full.\n\n\n",
"msg_date": "11 Oct 2011 11:16:06 +0100",
"msg_from": "James Cranch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rapidly finding maximal rows"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 3:16 AM, James Cranch <[email protected]> wrote:\n\n>\n> This is EXPLAIN ANALYZEd here:\n> http://explain.depesz.com/s/EiS\n\n\"Sort Method: external merge Disk: 35712kB\"\n\n>\n> SOFTWARE AND HARDWARE\n> =====================\n>\n> I'm running \"PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC\n> gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". It's the standard installation\n> from Debian stable (Squeeze), and I haven't messed around with it.\n>\n> My Linux kernel is 2.6.32-5-amd64.\n>\n> I have a desktop PC with a Intel Core i7 CPU and 6GB of RAM, and a single\n> 640GB Hitachi HDT72106 disk. My root partition is less than 30% full.\n\nTry setting work_mem to something larger, like 40MB to do that sort\nstep in memory, rather than spilling to disk. The usual caveats apply\nthough, like if you have many users/queries performing sorts or\naggregations, up to that amount of work_mem may be used at each step\npotentially resulting in your system running out of memory/OOM etc.\n",
"msg_date": "Tue, 11 Oct 2011 16:22:06 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rapidly finding maximal rows"
},
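If raising work_mem globally is a concern, it can instead be scoped to the current session or to a particular role (a sketch; the role name and the value are illustrative):

    SET work_mem = '48MB';                            -- this session only
    ALTER ROLE reporting_user SET work_mem = '48MB';  -- picked up at login for that role
    SHOW work_mem;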
{
"msg_contents": "Hi James\n\n\nI'm guessing the problem is that the combination of using a view and the way\nthe view is defined with an in-line temporary table is too complex for the\nplanner to introspect into, transform and figure out the equivalent direct\nquery, and so it's creating that entire temporary table every time you\nevaluate the select.\n\nOur app has some similar queries (get me the most recent row from a data\nlogging table) and these work fine with a simple self-join, like this\nexample (irrelevant columns omitted for discussion)\n\nselect t.entity, t.time_stamp, t.data from log_table t\nwhere t.entity=cast('21EC2020-3AEA-1069-A2DD-08002B30309D' as uuid)\nand t.time_stamp=\n (select max(time_stamp)\n from log_table u\n where t.entity=u.entity)\n\ngiven a schema with the obvious indexes ...\n\ncreate table log_table\n (entity UUID,\n time_stamp TIMESTAMP WITHOUT TIME ZONE,\n data TEXT);\n\ncreate index log_table_index on log_table (entity, time_stamp);\n\n.. and the plan for the dependent sub-query does the obvious reverse index\nscan as you'd expect / want.\n\n\n\nIf you still want / need to have the view, I suspect that getting rid of the\ntemp table definition will fix it ... my effort is below, alternatively you\nmight be able to take your first example and pull out best_scores and define\nit as a view alos,\n\nCREATE VIEW best_in_school_method3 AS\n SELECT competition_name, academic_year_beginning, centre_number, entry_id,\ntotal_score, (true) AS best_in_school FROM challenge_entries ce1\n WHERE total_score =\n (SELECT MAX(total_score) FROM challenge_entries ce2\n WHERE ce1.competition_name=ce2.competition_name\n AND ce1.academic_year_beginning=ce2.academic_year_beginning\n AND ce1.centre_number=ce2.centre_number\n )\n\nIf you don't actually need to have the view for other purposes, and just\nwant to solve the original problem (listing certificates to be issued), you\ncan do it as a direct query, e.g.\n\n SELECT competition_name, academic_year_beginning, centre_number, entry_id,\ntotal_score, (true) AS best_in_school FROM challenge_entries ce1\n WHERE total_score =\n (SELECT MAX(total_score) FROM challenge_entries ce2\n WHERE ce1.competition_name=ce2.competition_name\n AND ce1.academic_year_beginning=ce2.academic_year_beginning\n AND ce1.centre_number=ce2.centre_number\n )\n AND competition_name = 'X'\n AND academic_year_beginning = 2010\n\nPostgreSQL also has a proprietary extension SELECT DISTINCT ON which has a\nmuch nicer syntax, but unlike the above it will only show one (arbitrarily\nselected) pupil per school in the event of a tie, which is probably not what\nyou want :-)\n\nLooking at the schema, the constraint one_challenge_per_year is redundant\nwith the primary key.\n\nCheers\nDave\n\nP.S. Small world ... did my undergrad there, back when @cam.ac.uk email went\nto an IBM 3084 mainframe and the user ids typically ended in 10 :-)\n\nOn Tue, Oct 11, 2011 at 5:16 AM, James Cranch <[email protected]> wrote:\n\n>\n> I have a slow query, based on the problem of finding the set of rows which\n> are maximal in some sense. I expect that there is either a more intelligent\n> way to write it, or that one could make some indexes that would speed it up.\n> I've tried various indexes, and am not getting anywhere.\n>\n> I'd be grateful for any suggestions. Reasonably full details are below.\n>\n>\n> DESCRIPTION\n> ===========\n>\n> I am writing a database which stores scores from mass-entry competitions\n> (\"challenges\"). 
Candidates who were the best in their school (schools being\n> identified by their \"centre_number\") receive a special certificate in each\n> competition.\n>\n> Currently the database has data from six competitions, each entered by\n> something like 200,000 students from about 2,000 schools. I wish to produce\n> a view showing the students who had best-in-school performances.\n>\n> I have made two attempts (both immediately after running VACUUM ANALYZE),\n> and both are surprisingly slow.\n>\n>\n> THE QUERIES\n> ===========\n>\n> I'm interested in running something like\n>\n> SELECT * FROM best_in_school_methodN WHERE competition_name = 'X' AND\n> academic_year_beginning = 2010\n>\n> and the following two variants have been tried:\n>\n>\n> CREATE VIEW best_in_school_method1 AS\n> WITH best_scores(competition_name, academic_year_beginning,\n> centre_number, total_score) AS\n> (SELECT competition_name, academic_year_beginning, centre_number,\n> MAX(total_score) AS total_score FROM challenge_entries GROUP BY\n> competition_name, academic_year_beginning, centre_number)\n> SELECT competition_name, academic_year_beginning, centre_number,\n> entry_id, total_score, (true) AS best_in_school FROM challenge_entries\n> NATURAL JOIN best_scores;\n>\n> This is EXPLAIN ANALYZEd here:\n> http://explain.depesz.com/s/**EiS <http://explain.depesz.com/s/EiS>\n>\n>\n> CREATE VIEW best_in_school_method2 AS\n> WITH innertable(competition_name, academic_year_beginning, centre_number,\n> entry_id, total_score, school_max_score) AS\n> (SELECT competition_name, academic_year_beginning, centre_number,\n> entry_id, total_score, MAX(total_score) OVER (PARTITION BY competition_name,\n> academic_year_beginning, centre_number) AS centre_max_score FROM\n> challenge_entries)\n> SELECT competition_name, academic_year_beginning, centre_number,\n> entry_id, total_score, (true) AS best_in_school FROM innertable WHERE\n> centre_max_score = total_score;\n>\n> This one is EXPLAIN ANALYZEd here:\n> http://explain.depesz.com/s/**6Eh <http://explain.depesz.com/s/6Eh>\n>\n>\n> COMMENT\n> =======\n>\n> In both cases, unless I've misunderstood, most of the time is taken up by\n> sorting all the results for that particular competition. It appears to me\n> that there should be much better ways: the results do not need to be fully\n> sorted.\n>\n> If I were such an expert, I wouldn't be asking you all though.\n>\n> By the way, the choice to SELECT a value of true is so that I can join the\n> results back into the original table to easily produce a best in school\n> boolean column.\n>\n>\n> SCHEMA\n> ======\n>\n> I should explain the tables(though probably only the last one is\n> interesting) and the index mentioned by one of the EXPLAINs. 
They can be\n> produced by\n>\n> CREATE TABLE school_years\n> (\n> yearname VARCHAR(5) PRIMARY KEY,\n> minimum_usual_age_september1 INTERVAL,\n> maximum_usual_age_september1 INTERVAL,\n> usual_successor VARCHAR(5) REFERENCES school_years\n> );\n>\n> CREATE TABLE challenge_types\n> (\n> competition_name TEXT PRIMARY KEY,\n> too_young_yeargroup VARCHAR(5) REFERENCES school_years\n> );\n>\n> CREATE TABLE challenges\n> (\n> competition_name TEXT REFERENCES challenge_types,\n> academic_year_beginning INTEGER,\n> competition_date DATE,\n> CONSTRAINT competition_is_in_year CHECK (competition_date BETWEEN (date\n> (academic_year_beginning || '.001') + interval '9 months') AND (date\n> (academic_year_beginning || '.001') + interval '21 months')),\n> CONSTRAINT one_challenge_per_year UNIQUE (academic_year_beginning,**\n> competition_name),\n> PRIMARY KEY (competition_name, academic_year_beginning)\n> );\n>\n> CREATE TABLE challenge_entries\n> (\n> entry_id SERIAL,\n> competition_name TEXT,\n> academic_year_beginning INTEGER,\n> given_name TEXT,\n> surname TEXT,\n> centre_number CHAR(6),\n> school_year VARCHAR(5),\n> date_of_birth DATE,\n> uk_educated BOOLEAN,\n> uk_passport BOOLEAN,\n> sex SEX,\n> total_score INTEGER NOT NULL DEFAULT 0,\n> PRIMARY KEY (competition_name,academic_**year_beginning,entry_id),\n> FOREIGN KEY (school_year) REFERENCES school_years,\n> FOREIGN KEY (competition_name,academic_**year_beginning) REFERENCES\n> challenges );\n>\n> CREATE INDEX challenge_entries_by_**competition_centre_number_and_**\n> total_score\n> ON challenge_entries\n> (competition_name,academic_**year_beginning,centre_number,**total_score\n> DESC);\n>\n>\n> SOFTWARE AND HARDWARE\n> =====================\n>\n> I'm running \"PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC\n> gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". It's the standard installation\n> from Debian stable (Squeeze), and I haven't messed around with it.\n>\n> My Linux kernel is 2.6.32-5-amd64.\n>\n> I have a desktop PC with a Intel Core i7 CPU and 6GB of RAM, and a single\n> 640GB Hitachi HDT72106 disk. My root partition is less than 30% full.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nHi JamesI'm guessing the problem is that the combination of using a view and the way the view is defined with an in-line temporary table is too complex for the planner to introspect into, transform and figure out the equivalent direct query, and so it's creating that entire temporary table every time you evaluate the select. \nOur app has some similar queries (get me the most recent row from a data\n logging table) and these work fine with a simple self-join, like this \nexample (irrelevant columns omitted for discussion)\n\nselect t.entity, t.time_stamp, t.data from log_table t\nwhere t.entity=cast('21EC2020-3AEA-1069-A2DD-08002B30309D' as uuid)\nand t.time_stamp=\n (select max(time_stamp)\n from log_table u\n where t.entity=u.entity)\n\ngiven a schema with the obvious indexes ...\n\ncreate table log_table\n (entity UUID,\n time_stamp TIMESTAMP WITHOUT TIME ZONE,\n data TEXT);\n\ncreate index log_table_index on log_table (entity, time_stamp); \n\n.. 
and the plan for the dependent sub-query does the obvious reverse index scan as you'd expect / want.\nIf you still want / need to have the view, I suspect that getting rid of the temp table definition will fix it ... my effort is below, alternatively you might be able to take your first example and pull out best_scores and define it as a view alos,\nCREATE VIEW best_in_school_method3 AS\n SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, (true) AS best_in_school FROM challenge_entries ce1 WHERE total_score =\n (SELECT MAX(total_score) FROM challenge_entries ce2 WHERE ce1.competition_name=ce2.competition_name\n AND ce1.academic_year_beginning=ce2.academic_year_beginning AND ce1.centre_number=ce2.centre_number\n )If you don't actually need to have the view for other purposes, and just want to solve the original problem (listing certificates to be issued), you can do it as a direct query, e.g.\n SELECT competition_name, academic_year_beginning, centre_number, \nentry_id, total_score, (true) AS best_in_school FROM challenge_entries ce1\n WHERE total_score =\n (SELECT MAX(total_score) FROM challenge_entries ce2\n WHERE ce1.competition_name=ce2.competition_name\n AND ce1.academic_year_beginning=ce2.academic_year_beginning\n AND ce1.centre_number=ce2.centre_number\n ) AND competition_name = 'X' AND academic_year_beginning = 2010\nPostgreSQL\n also has a proprietary extension SELECT DISTINCT ON which has a much \nnicer syntax, but unlike the above it will only show one (arbitrarily \nselected) pupil per school in the event of a tie, which is probably not \nwhat you want :-)Looking at the schema, the constraint one_challenge_per_year is redundant with the primary key.\n\nCheersDaveP.S. Small world ... did my undergrad there, back when @cam.ac.uk email went to an IBM 3084 mainframe and the user ids typically ended in 10 :-)\nOn Tue, Oct 11, 2011 at 5:16 AM, James Cranch <[email protected]> wrote:\n\nI have a slow query, based on the problem of finding the set of rows which are maximal in some sense. I expect that there is either a more intelligent way to write it, or that one could make some indexes that would speed it up. I've tried various indexes, and am not getting anywhere.\n\nI'd be grateful for any suggestions. Reasonably full details are below.\n\n\nDESCRIPTION\n===========\n\nI am writing a database which stores scores from mass-entry competitions (\"challenges\"). Candidates who were the best in their school (schools being identified by their \"centre_number\") receive a special certificate in each competition.\n\nCurrently the database has data from six competitions, each entered by something like 200,000 students from about 2,000 schools. 
I wish to produce a view showing the students who had best-in-school performances.\n\nI have made two attempts (both immediately after running VACUUM ANALYZE), and both are surprisingly slow.\n\n\nTHE QUERIES\n===========\n\nI'm interested in running something like\n\nSELECT * FROM best_in_school_methodN WHERE competition_name = 'X' AND academic_year_beginning = 2010\n\nand the following two variants have been tried:\n\n\nCREATE VIEW best_in_school_method1 AS\n WITH best_scores(competition_name, academic_year_beginning, centre_number, total_score) AS\n (SELECT competition_name, academic_year_beginning, centre_number, MAX(total_score) AS total_score FROM challenge_entries GROUP BY competition_name, academic_year_beginning, centre_number)\n SELECT competition_name, academic_year_beginning, centre_number, entry_id, total_score, (true) AS best_in_school FROM challenge_entries NATURAL JOIN best_scores;\n\nThis is EXPLAIN ANALYZEd here:\n http://explain.depesz.com/s/EiS\n\n\nCREATE VIEW best_in_school_method2 AS\n WITH innertable(competition_name, academic_year_beginning, centre_number, entry_id, total_score, school_max_score) AS\n (SELECT competition_name, academic_year_beginning, centre_number, entry_id, total_score, MAX(total_score) OVER (PARTITION BY competition_name, academic_year_beginning, centre_number) AS centre_max_score FROM challenge_entries)\n\n SELECT competition_name, academic_year_beginning, centre_number, entry_id, total_score, (true) AS best_in_school FROM innertable WHERE centre_max_score = total_score;\n\nThis one is EXPLAIN ANALYZEd here:\n http://explain.depesz.com/s/6Eh\n\n\nCOMMENT\n=======\n\nIn both cases, unless I've misunderstood, most of the time is taken up by sorting all the results for that particular competition. It appears to me that there should be much better ways: the results do not need to be fully sorted.\n\nIf I were such an expert, I wouldn't be asking you all though.\n\nBy the way, the choice to SELECT a value of true is so that I can join the results back into the original table to easily produce a best in school boolean column.\n\n\nSCHEMA\n======\n\nI should explain the tables(though probably only the last one is interesting) and the index mentioned by one of the EXPLAINs. 
They can be produced by\n\nCREATE TABLE school_years\n(\n yearname VARCHAR(5) PRIMARY KEY,\n minimum_usual_age_september1 INTERVAL,\n maximum_usual_age_september1 INTERVAL,\n usual_successor VARCHAR(5) REFERENCES school_years\n);\n\nCREATE TABLE challenge_types\n(\n competition_name TEXT PRIMARY KEY,\n too_young_yeargroup VARCHAR(5) REFERENCES school_years\n);\n\nCREATE TABLE challenges\n(\n competition_name TEXT REFERENCES challenge_types,\n academic_year_beginning INTEGER,\n competition_date DATE,\n CONSTRAINT competition_is_in_year CHECK (competition_date BETWEEN (date (academic_year_beginning || '.001') + interval '9 months') AND (date (academic_year_beginning || '.001') + interval '21 months')),\n\n CONSTRAINT one_challenge_per_year UNIQUE (academic_year_beginning,competition_name),\n PRIMARY KEY (competition_name, academic_year_beginning)\n);\n\nCREATE TABLE challenge_entries\n(\n entry_id SERIAL,\n competition_name TEXT,\n academic_year_beginning INTEGER,\n given_name TEXT,\n surname TEXT,\n centre_number CHAR(6),\n school_year VARCHAR(5),\n date_of_birth DATE,\n uk_educated BOOLEAN,\n uk_passport BOOLEAN,\n sex SEX,\n total_score INTEGER NOT NULL DEFAULT 0,\n PRIMARY KEY (competition_name,academic_year_beginning,entry_id),\n FOREIGN KEY (school_year) REFERENCES school_years,\n FOREIGN KEY (competition_name,academic_year_beginning) REFERENCES challenges );\n\nCREATE INDEX challenge_entries_by_competition_centre_number_and_total_score\n ON challenge_entries\n (competition_name,academic_year_beginning,centre_number,total_score DESC);\n\n\nSOFTWARE AND HARDWARE\n=====================\n\nI'm running \"PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". It's the standard installation from Debian stable (Squeeze), and I haven't messed around with it.\n\nMy Linux kernel is 2.6.32-5-amd64.\n\nI have a desktop PC with a Intel Core i7 CPU and 6GB of RAM, and a single 640GB Hitachi HDT72106 disk. My root partition is less than 30% full.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 11 Oct 2011 20:05:27 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rapidly finding maximal rows"
},
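For reference, the DISTINCT ON variant mentioned above might look like this (a sketch; it returns one arbitrary pupil per school in the event of a tie, so it does not answer the certificate question, but its ORDER BY matches the existing composite index):

    SELECT DISTINCT ON (competition_name, academic_year_beginning, centre_number)
           competition_name, academic_year_beginning, centre_number,
           entry_id, total_score
    FROM   challenge_entries
    WHERE  competition_name = 'X'
      AND  academic_year_beginning = 2010
    ORDER  BY competition_name, academic_year_beginning, centre_number,
           total_score DESC;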
{
"msg_contents": "Dear Dave,\n\n>CREATE VIEW best_in_school_method3 AS\n> SELECT competition_name, academic_year_beginning, centre_number, \n> entry_id, total_score, (true) AS best_in_school FROM challenge_entries \n> ce1\n> WHERE total_score =\n> (SELECT MAX(total_score) FROM challenge_entries ce2\n> WHERE ce1.competition_name=ce2.competition_name\n> AND ce1.academic_year_beginning=ce2.academic_year_beginning\n> AND ce1.centre_number=ce2.centre_number\n> )\n\nThanks! That works much better, as you can see here:\n\n http://explain.depesz.com/s/Jz1\n\n>If you don't actually need to have the view for other purposes, and just\n>want to solve the original problem (listing certificates to be issued), you\n>can do it as a direct query, e.g.\n\nI'll keep the view, please.\n\n> PostgreSQL also has a proprietary extension SELECT DISTINCT ON which has \n> a much nicer syntax, but unlike the above it will only show one \n> (arbitrarily selected) pupil per school in the event of a tie, which is \n> probably not what you want :-)\n\nIndeed not, that's disastrous here.\n\n>Looking at the schema, the constraint one_challenge_per_year is redundant\n>with the primary key.\n\nOh, yes, thanks. It's a legacy from an earlier approach.\n\n> P.S. Small world ... did my undergrad there, back when @cam.ac.uk email \n> went to an IBM 3084 mainframe and the user ids typically ended in 10 :-)\n\nHeh. The people with only two initials are generating bignums these days: I \nknow [email protected] (here x and y are variables representing letters of \nthe alphabet).\n\nCheers,\n\nJames\n\\/\\/\\\n",
"msg_date": "12 Oct 2011 12:40:48 +0100",
"msg_from": "James Cranch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rapidly finding maximal rows"
},
{
"msg_contents": "Dear Bricklen,\n\n>Try setting work_mem to something larger, like 40MB to do that sort\n>step in memory, rather than spilling to disk. The usual caveats apply\n>though, like if you have many users/queries performing sorts or\n>aggregations, up to that amount of work_mem may be used at each step\n>potentially resulting in your system running out of memory/OOM etc.\n\nThanks, I'll bear that in mind as a strategy. It's good to know. But since \nDave has saved me the sort altogether, I'll go with his plan.\n\nBest wishes,\n\nJames\n\\/\\/\\\n\n-- \n------------------------------------------------------------\nJames Cranch http://www.srcf.ucam.org/~jdc41\n\n\n",
"msg_date": "12 Oct 2011 12:41:51 +0100",
"msg_from": "James Cranch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rapidly finding maximal rows"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 8:05 PM, Dave Crooke <[email protected]> wrote:\n> Hi James\n>\n>\n> I'm guessing the problem is that the combination of using a view and the way\n> the view is defined with an in-line temporary table is too complex for the\n> planner to introspect into, transform and figure out the equivalent direct\n> query, and so it's creating that entire temporary table every time you\n> evaluate the select.\n>\n> Our app has some similar queries (get me the most recent row from a data\n> logging table) and these work fine with a simple self-join, like this\n> example (irrelevant columns omitted for discussion)\n>\n> select t.entity, t.time_stamp, t.data from log_table t\n> where t.entity=cast('21EC2020-3AEA-1069-A2DD-08002B30309D' as uuid)\n> and t.time_stamp=\n> (select max(time_stamp)\n> from log_table u\n> where t.entity=u.entity)\n>\n> given a schema with the obvious indexes ...\n>\n> create table log_table\n> (entity UUID,\n> time_stamp TIMESTAMP WITHOUT TIME ZONE,\n> data TEXT);\n>\n> create index log_table_index on log_table (entity, time_stamp);\n>\n> .. and the plan for the dependent sub-query does the obvious reverse index\n> scan as you'd expect / want.\n>\n>\n>\n> If you still want / need to have the view, I suspect that getting rid of the\n> temp table definition will fix it ... my effort is below, alternatively you\n> might be able to take your first example and pull out best_scores and define\n> it as a view alos,\n>\n> CREATE VIEW best_in_school_method3 AS\n> SELECT competition_name, academic_year_beginning, centre_number, entry_id,\n> total_score, (true) AS best_in_school FROM challenge_entries ce1\n> WHERE total_score =\n> (SELECT MAX(total_score) FROM challenge_entries ce2\n> WHERE ce1.competition_name=ce2.competition_name\n> AND ce1.academic_year_beginning=ce2.academic_year_beginning\n> AND ce1.centre_number=ce2.centre_number\n> )\n\nThis is a very common problem in SQL and has a lot of interesting solutions.\n\nIn queries like this I usually use the 'ORDER BY total_score DESC\nLIMIT 1 trick. Modern postgres is *usually* smart enough to convert\nmax to that, but not always.\n\nWHERE total_score =\n (SELECT total_score FROM challenge_entries ce2\n WHERE ce1.competition_name=ce2.competition_name\n AND ce1.academic_year_beginning=ce2.academic_year_beginning\n AND ce1.centre_number=ce2.centre_number\n ORDER BY total_score DESC LIMIT 1\n )\n\nAnother clever, and more portable way to write it which can sometimes\ngive a better plan is like this:\n\nWHERE NOT EXISTS\n(\n SELECT 1 FROM challenge_entries ce2\n WHERE ce1.competition_name=ce2.competition_name\n AND ce1.academic_year_beginning=ce2.academic_year_beginning\n AND ce1.centre_number=ce2.centre_number\n AND ce1.total_score > ce2.total_score\n)\n\nYet another interesting way of getting the 'top' record based on a\ncertain criteria is to write a custom aggregate. 
In postgres you can\naggregate over the entire record type, not just plain fields, so that\nrunning your aggregate function looks like this:\n\nSELECT competition_name, academic_year_beginning, centre_number,\nmax_challenge_entries(ce) FROM challenge_entries ce GROUP BY 1,2,3;\n\nYour function aggregator in SQL would look something like this:\n\nCREATE OR REPLACE FUNCTION\nmax_challenge_entries_impl(challenge_entries, challenge_entries)\nreturns challenge_entries AS\n$$\n SELECT CASE WHEN ($2).total_score > ($1).total_score THEN $2 ELSE $1 END;\n$$ LANGUAGE SQL IMMUTABLE;\n\nThis very STLish approach is rarely the best way to go performance\nwise although it can give better worst case plans in some cases\n(although total_score if in index can never be used for optimization).\n I mention it because I find it to be very clean conceptually and can\nbe a great approach if your 'picking' algorithm is sufficiently more\ncomplex than 'field > field' and is also otherwise not optimizable.\nAnyways, food for thought.\n\nmerlin\n",
"msg_date": "Wed, 19 Oct 2011 09:54:56 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rapidly finding maximal rows"
}
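To actually run the custom-aggregate approach sketched above, the transition function still needs to be wrapped in an aggregate. A sketch (declaring the function STRICT lets PostgreSQL seed the state with the first input row, so the NULL initial state never reaches the CASE expression):

    CREATE OR REPLACE FUNCTION
    max_challenge_entries_impl(challenge_entries, challenge_entries)
    RETURNS challenge_entries AS
    $$
      SELECT CASE WHEN ($2).total_score > ($1).total_score THEN $2 ELSE $1 END;
    $$ LANGUAGE SQL IMMUTABLE STRICT;   -- STRICT: the first row becomes the initial state

    CREATE AGGREGATE max_challenge_entries(challenge_entries) (
        SFUNC = max_challenge_entries_impl,
        STYPE = challenge_entries
    );

    -- Usage, as in the message above:
    -- SELECT competition_name, academic_year_beginning, centre_number,
    --        max_challenge_entries(ce)
    -- FROM challenge_entries ce GROUP BY 1, 2, 3;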
] |
[
{
"msg_contents": "Excuse the noob question, I couldn't find any reading material on this\ntopic.\n\n \n\nLet's say my_table has two fields, pkey_id and another_id. The primary key\nis pkey_id and of course indexed.\n\n \n\nThen someone adds a composite index on btree(pkey_id, another_id).\n\n \n\nQuestion 1) Is there any benefit to having pkey_id in the second index\n(assuming the index was created to satisfy some arbitrary WHERE clause)?\n\n \n\nQuestion 2) Regardless of the answer to Question 1 - if another_id is not\nguaranteed to be unique, whereas pkey_id is - there any value to changing\nthe order of declaration (more generally, is there a performance impact for\ncolumn ordering in btree composite keys?)\n\n \n\nThanks,\n\n \n\nCarlo\n\n\n\n\n\n\n\n\n\n\nExcuse the noob question, I couldn’t find any reading\nmaterial on this topic.\n \nLet’s say my_table has two fields, pkey_id and another_id.\nThe primary key is pkey_id and of course indexed.\n \nThen someone adds a composite index on btree(pkey_id, another_id).\n \nQuestion 1) Is there any benefit to having pkey_id in the\nsecond index (assuming the index was created to satisfy some arbitrary WHERE\nclause)?\n \nQuestion 2) Regardless of the answer to Question 1 - if\nanother_id is not guaranteed to be unique, whereas pkey_id is – there any\nvalue to changing the order of declaration (more generally, is there a\nperformance impact for column ordering in btree composite keys?)\n \nThanks,\n \nCarlo",
"msg_date": "Tue, 11 Oct 2011 11:16:07 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Composite keys"
},
{
"msg_contents": "On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Question 2) Regardless of the answer to Question 1 - if another_id is not\n> guaranteed to be unique, whereas pkey_id is – there any value to changing\n> the order of declaration (more generally, is there a performance impact for\n> column ordering in btree composite keys?)\n\nMulticolumn indices on (c1, c2, ..., cn) can only be used on where\nclauses involving c1..ck with k<n.\n\nSo, an index on (a,b) does *not* help for querying on b.\n\nFurthermore, if a is unique, querying on a or querying on a and b is\nequally selective. b there is just consuming space and cpu cycles.\n\nI'd say, although it obviously depends on the queries you issue, you\nonly need an index on another_id.\n",
"msg_date": "Wed, 12 Oct 2011 02:52:16 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "Claudio is on point, I'll be even more pointed ....\n\nIf pkey_id truly is a primary key in the database sense of the term, and\nthus unique, then IIUC there is no circumstance in which your composite\nindex would ever even get used ... all it's doing is slowing down writes :-)\nIf the query is sufficiently selective on pkey_id to merit using an index,\nthen the planner will use the primary key index, because it's narrower; if\nnot, then the only other option is to do a full table scan because there is\nno index of which another_id is a prefix.\n\nThere are only three options which make sense:\n\n1. No additional indexes, just the primary key\n2. An additional index on (another_id)\n3. An additional index on (another_id, pkey_id)\n4. Both 2. and 3.\n\nChoosing between these depends on a lot of variables of the query mix in\npractice ... you could set up both 2. and 3. and then see which indexes the\nplanner actually uses in practice and then decide which to keep.\n\nThe value in having pkey_id in the index in 3. is for queries whose primary\nselectivity is on another_id, but which also have some selectivity on\npkey_id .... the planner can use an index scan to filter candidate rows /\nblocks to look at. This is especially helpful if another_id is not very\nselective and / or the rows are quite wide.\n\nOn gut feel, it seems unlikely that you'd have a real-world circumstance in\nwhich it makes sense to choose option 4. but it can't be ruled out without\nfurther context.\n\nCheers\nDave\n\nOn Tue, Oct 11, 2011 at 7:52 PM, Claudio Freire <[email protected]>wrote:\n\n> On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks\n> <[email protected]> wrote:\n> > Question 2) Regardless of the answer to Question 1 - if another_id is not\n> > guaranteed to be unique, whereas pkey_id is – there any value to changing\n> > the order of declaration (more generally, is there a performance impact\n> for\n> > column ordering in btree composite keys?)\n>\n> Multicolumn indices on (c1, c2, ..., cn) can only be used on where\n> clauses involving c1..ck with k<n.\n>\n> So, an index on (a,b) does *not* help for querying on b.\n>\n> Furthermore, if a is unique, querying on a or querying on a and b is\n> equally selective. b there is just consuming space and cpu cycles.\n>\n> I'd say, although it obviously depends on the queries you issue, you\n> only need an index on another_id.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nClaudio is on point, I'll be even more pointed .... If pkey_id truly is a primary key in the database sense of the term, and thus unique, then IIUC there is no circumstance in which your composite index would ever even get used ... all it's doing is slowing down writes :-) If the query is sufficiently selective on pkey_id to merit using an index, then the planner will use the primary key index, because it's narrower; if not, then the only other option is to do a full table scan because there is no index of which another_id is a prefix.\nThere are only three options which make sense:1. No additional indexes, just the primary key2. An additional index on (another_id)3. An additional index on (another_id, pkey_id)4. Both 2. and 3.\nChoosing between these depends on a lot of variables of the query mix in practice ... you could set up both 2. and 3. 
and then see which indexes the planner actually uses in practice and then decide which to keep.\nThe value in having pkey_id in the index in 3. is for queries whose primary selectivity is on another_id, but which also have some selectivity on pkey_id .... the planner can use an index scan to filter candidate rows / blocks to look at. This is especially helpful if another_id is not very selective and / or the rows are quite wide.\nOn gut feel, it seems unlikely that you'd have a real-world circumstance in which it makes sense to choose option 4. but it can't be ruled out without further context.CheersDave\nOn Tue, Oct 11, 2011 at 7:52 PM, Claudio Freire <[email protected]> wrote:\nOn Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Question 2) Regardless of the answer to Question 1 - if another_id is not\n> guaranteed to be unique, whereas pkey_id is – there any value to changing\n> the order of declaration (more generally, is there a performance impact for\n> column ordering in btree composite keys?)\n\nMulticolumn indices on (c1, c2, ..., cn) can only be used on where\nclauses involving c1..ck with k<n.\n\nSo, an index on (a,b) does *not* help for querying on b.\n\nFurthermore, if a is unique, querying on a or querying on a and b is\nequally selective. b there is just consuming space and cpu cycles.\n\nI'd say, although it obviously depends on the queries you issue, you\nonly need an index on another_id.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 11 Oct 2011 20:28:29 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
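One way to follow the "set up both and see which the planner uses" suggestion is to watch the index usage counters after running the normal workload for a while (a sketch, reusing the illustrative table and column names from this thread):

    CREATE INDEX my_table_another_id_idx      ON my_table (another_id);
    CREATE INDEX my_table_another_id_pkey_idx ON my_table (another_id, pkey_id);

    -- later, after the usual query mix has run:
    SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
    FROM   pg_stat_user_indexes
    WHERE  relname = 'my_table'
    ORDER  BY idx_scan DESC;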
{
"msg_contents": "Thanks Dave & Claudio. \n\n \n\nUnfortunately, my specific example had a primary key in it (based on a\nreal-world case) but this kind of distracted from the general point.\n\n \n\nSo with PG I will stick to the general SQL rule that IF I use compound keys\nthen we have the most selective columns to the left. correct?\n\n \n\n \n\n \n\n \n\n _____ \n\nFrom: Dave Crooke [mailto:[email protected]] \nSent: October 11, 2011 9:28 PM\nTo: Claudio Freire\nCc: Carlo Stonebanks; [email protected]\nSubject: Re: [PERFORM] Composite keys\n\n \n\nClaudio is on point, I'll be even more pointed .... \n\nIf pkey_id truly is a primary key in the database sense of the term, and\nthus unique, then IIUC there is no circumstance in which your composite\nindex would ever even get used ... all it's doing is slowing down writes :-)\nIf the query is sufficiently selective on pkey_id to merit using an index,\nthen the planner will use the primary key index, because it's narrower; if\nnot, then the only other option is to do a full table scan because there is\nno index of which another_id is a prefix.\n\nThere are only three options which make sense:\n\n1. No additional indexes, just the primary key\n2. An additional index on (another_id)\n3. An additional index on (another_id, pkey_id)\n4. Both 2. and 3.\n\nChoosing between these depends on a lot of variables of the query mix in\npractice ... you could set up both 2. and 3. and then see which indexes the\nplanner actually uses in practice and then decide which to keep.\n\nThe value in having pkey_id in the index in 3. is for queries whose primary\nselectivity is on another_id, but which also have some selectivity on\npkey_id .... the planner can use an index scan to filter candidate rows /\nblocks to look at. This is especially helpful if another_id is not very\nselective and / or the rows are quite wide.\n\nOn gut feel, it seems unlikely that you'd have a real-world circumstance in\nwhich it makes sense to choose option 4. but it can't be ruled out without\nfurther context.\n\nCheers\nDave\n\nOn Tue, Oct 11, 2011 at 7:52 PM, Claudio Freire <[email protected]>\nwrote:\n\nOn Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Question 2) Regardless of the answer to Question 1 - if another_id is not\n> guaranteed to be unique, whereas pkey_id is - there any value to changing\n> the order of declaration (more generally, is there a performance impact\nfor\n> column ordering in btree composite keys?)\n\nMulticolumn indices on (c1, c2, ..., cn) can only be used on where\nclauses involving c1..ck with k<n.\n\nSo, an index on (a,b) does *not* help for querying on b.\n\nFurthermore, if a is unique, querying on a or querying on a and b is\nequally selective. b there is just consuming space and cpu cycles.\n\nI'd say, although it obviously depends on the queries you issue, you\nonly need an index on another_id.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n \n\n\n\n\n\n\n\n\n\n\n\nThanks Dave & Claudio. 
\n \nUnfortunately, my specific example had a\nprimary key in it (based on a real-world case) but this kind of distracted from\nthe general point.\n \nSo with PG I will stick to the general SQL\nrule that IF I use compound keys then we have the most selective columns to the\nleft… correct?\n \n \n \n \n\n\n\n\nFrom:\nDave Crooke [mailto:[email protected]] \nSent: October 11, 2011 9:28 PM\nTo: Claudio Freire\nCc: Carlo Stonebanks;\[email protected]\nSubject: Re: [PERFORM] Composite\nkeys\n\n \nClaudio is on point, I'll\nbe even more pointed .... \n\nIf pkey_id truly is a primary key in the database sense of the term, and thus\nunique, then IIUC there is no circumstance in which your composite index would\never even get used ... all it's doing is slowing down writes :-) If the query\nis sufficiently selective on pkey_id to merit using an index, then the planner\nwill use the primary key index, because it's narrower; if not, then the only\nother option is to do a full table scan because there is no index of which\nanother_id is a prefix.\n\nThere are only three options which make sense:\n\n1. No additional indexes, just the primary key\n2. An additional index on (another_id)\n3. An additional index on (another_id, pkey_id)\n4. Both 2. and 3.\n\nChoosing between these depends on a lot of variables of the query mix in\npractice ... you could set up both 2. and 3. and then see which indexes the\nplanner actually uses in practice and then decide which to keep.\n\nThe value in having pkey_id in the index in 3. is for queries whose primary\nselectivity is on another_id, but which also have some selectivity on pkey_id\n.... the planner can use an index scan to filter candidate rows / blocks to\nlook at. This is especially helpful if another_id is not very selective and /\nor the rows are quite wide.\n\nOn gut feel, it seems unlikely that you'd have a real-world circumstance in\nwhich it makes sense to choose option 4. but it can't be ruled out without\nfurther context.\n\nCheers\nDave\n\nOn Tue, Oct 11, 2011 at 7:52 PM, Claudio Freire <[email protected]> wrote:\n\nOn Tue, Oct 11, 2011 at\n5:16 PM, Carlo Stonebanks\n<[email protected]>\nwrote:\n> Question 2) Regardless of the answer to Question 1 - if another_id is not\n> guaranteed to be unique, whereas pkey_id is – there any value to changing\n> the order of declaration (more generally, is there a performance impact\nfor\n> column ordering in btree composite keys?)\n\nMulticolumn indices on (c1, c2, ..., cn) can only be used on where\nclauses involving c1..ck with k<n.\n\nSo, an index on (a,b) does *not* help for querying on b.\n\nFurthermore, if a is unique, querying on a or querying on a and b is\nequally selective. b there is just consuming space and cpu cycles.\n\nI'd say, although it obviously depends on the queries you issue, you\nonly need an index on another_id.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 12 Oct 2011 00:39:08 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "On 10/12/2011 12:39 AM, Carlo Stonebanks wrote:\n>\n> So with PG I will stick to the general SQL rule that IF I use compound \n> keys then we have the most selective columns to the left... correct?\n>\n\nThere was a subtle point Dave made you should pay close attention to \nthough. If there are multiple indexes that start with the same column, \nPostgreSQL is biased toward picking the smallest of them. The amount of \nextra I/O needed to navigate a wider index is such that the second \ncolumn has to be very selective, too, before it will be used instead of \na narrower single column one. There are plenty of times that the reason \nbehind \"why isn't it using my index?\" is \"the index is too fat to \nnavigate efficiently\", because the actual number of blocks involved is \nfactored into the cost computations.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 10/12/2011 12:39 AM, Carlo Stonebanks wrote:\n\n\n\n\n\n\n \nSo with PG I\nwill stick to the general SQL\nrule that IF I use compound keys then we have the most selective\ncolumns to the\nleft… correct?\n\n\n\nThere was a subtle point Dave made you should pay close attention to\nthough. If there are multiple indexes that start with the same column,\nPostgreSQL is biased toward picking the smallest of them. The amount\nof extra I/O needed to navigate a wider index is such that the second\ncolumn has to be very selective, too, before it will be used instead of\na narrower single column one. There are plenty of times that the\nreason behind \"why isn't it using my index?\" is \"the index is too fat\nto navigate efficiently\", because the actual number of blocks involved\nis factored into the cost computations.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Wed, 12 Oct 2011 06:26:33 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
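The trade-off Greg describes is easy to check empirically. A minimal sketch, assuming a hypothetical table some_table that reuses the another_id/pkey_id column names from Dave's options (not the poster's real schema):

-- Build both candidate indexes from options 2 and 3 and let the planner choose.
CREATE INDEX some_table_another_id_idx ON some_table (another_id);
CREATE INDEX some_table_another_pkey_idx ON some_table (another_id, pkey_id);
ANALYZE some_table;

-- Which index gets picked depends on selectivity and on index width:
EXPLAIN ANALYZE SELECT * FROM some_table WHERE another_id = 42;
EXPLAIN ANALYZE SELECT * FROM some_table WHERE another_id = 42 AND pkey_id < 100000;

-- After running the real query mix for a while, the usage counters show
-- which index is actually worth keeping:
SELECT indexrelname, idx_scan FROM pg_stat_user_indexes WHERE relname = 'some_table';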
{
"msg_contents": "On Tue, Oct 11, 2011 at 8:52 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks\n> <[email protected]> wrote:\n>> Question 2) Regardless of the answer to Question 1 - if another_id is not\n>> guaranteed to be unique, whereas pkey_id is – there any value to changing\n>> the order of declaration (more generally, is there a performance impact for\n>> column ordering in btree composite keys?)\n>\n> Multicolumn indices on (c1, c2, ..., cn) can only be used on where\n> clauses involving c1..ck with k<n.\n\nI don't think that's true. I believe it can be used for a query that\nonly touches, say, c2. It's just extremely inefficient.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 31 Oct 2011 13:08:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "On Mon, Oct 31, 2011 at 2:08 PM, Robert Haas <[email protected]> wrote:\n>> Multicolumn indices on (c1, c2, ..., cn) can only be used on where\n>> clauses involving c1..ck with k<n.\n>\n> I don't think that's true. I believe it can be used for a query that\n> only touches, say, c2. It's just extremely inefficient.\n\nDoes postgres generate those kinds of plans?\nI do not think so. I've never seen it happening.\n",
"msg_date": "Mon, 31 Oct 2011 14:52:14 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "On Mon, Oct 31, 2011 at 1:52 PM, Claudio Freire <[email protected]> wrote:\n> On Mon, Oct 31, 2011 at 2:08 PM, Robert Haas <[email protected]> wrote:\n>>> Multicolumn indices on (c1, c2, ..., cn) can only be used on where\n>>> clauses involving c1..ck with k<n.\n>>\n>> I don't think that's true. I believe it can be used for a query that\n>> only touches, say, c2. It's just extremely inefficient.\n>\n> Does postgres generate those kinds of plans?\n> I do not think so. I've never seen it happening.\n\nSure it does:\n\nrhaas=# create table baz (a bool, b int, c text, primary key (a, b));\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"baz_pkey\" for table \"baz\"\nCREATE TABLE\nrhaas=# insert into baz select true, g,\nrandom()::text||random()::text||random()::text||random()::text from\ngenerate_series(1,400000) g;\nINSERT 0 400000\nrhaas=# analyze baz;\nANALYZE\nrhaas=# explain analyze select * from baz where b = 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Index Scan using baz_pkey on baz (cost=0.00..7400.30 rows=1\nwidth=74) (actual time=0.104..20.691 rows=1 loops=1)\n Index Cond: (b = 1)\n Total runtime: 20.742 ms\n(3 rows)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 31 Oct 2011 14:24:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "On Mon, Oct 31, 2011 at 3:24 PM, Robert Haas <[email protected]> wrote:\n> Sure it does:\n>\n> rhaas=# create table baz (a bool, b int, c text, primary key (a, b));\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> \"baz_pkey\" for table \"baz\"\n> CREATE TABLE\n> rhaas=# insert into baz select true, g,\n> random()::text||random()::text||random()::text||random()::text from\n> generate_series(1,400000) g;\n\nOk, that's artificially skewed, since the index has only one value in\nthe first column.\n\nBut it does prove PG considers the case, and takes into account the\nnumber of values it has to iterate over on the first column, which is\nvery very interesting and cool.\n",
"msg_date": "Mon, 31 Oct 2011 15:34:25 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "On Mon, Oct 31, 2011 at 2:34 PM, Claudio Freire <[email protected]> wrote:\n> On Mon, Oct 31, 2011 at 3:24 PM, Robert Haas <[email protected]> wrote:\n>> Sure it does:\n>>\n>> rhaas=# create table baz (a bool, b int, c text, primary key (a, b));\n>> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n>> \"baz_pkey\" for table \"baz\"\n>> CREATE TABLE\n>> rhaas=# insert into baz select true, g,\n>> random()::text||random()::text||random()::text||random()::text from\n>> generate_series(1,400000) g;\n>\n> Ok, that's artificially skewed, since the index has only one value in\n> the first column.\n>\n> But it does prove PG considers the case, and takes into account the\n> number of values it has to iterate over on the first column, which is\n> very very interesting and cool.\n\nYes. As your experience indicates, it's rare for this to be the best\nplan. But it is considered. So there you have it. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 31 Oct 2011 14:42:09 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Mon, Oct 31, 2011 at 2:08 PM, Robert Haas <[email protected]> wrote:\n>>> Multicolumn indices on (c1, c2, ..., cn) can only be used on where\n>>> clauses involving c1..ck with k<n.\n\n>> I don't think that's true. �I believe it can be used for a query that\n>> only touches, say, c2. �It's just extremely inefficient.\n\n> Does postgres generate those kinds of plans?\n\nSure it does. It doesn't usually think they're efficient enough,\nbecause they require full-index scans. But sometimes that's the\nbest you can do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Oct 2011 14:59:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Composite keys "
}
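To see the cost difference Tom and Robert are describing, Robert's baz example can be extended with a dedicated single-column index (the index name below is made up); the scan of the composite baz_pkey index is considered, but a narrow index on b is what you would normally add if queries on b alone matter:

EXPLAIN ANALYZE SELECT * FROM baz WHERE b = 1;   -- walks the composite index, as in Robert's run
CREATE INDEX baz_b_idx ON baz (b);
ANALYZE baz;
EXPLAIN ANALYZE SELECT * FROM baz WHERE b = 1;   -- should now prefer the narrower baz_b_idx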
] |
[
{
"msg_contents": "Hi all ;\n\nI'm trying to tune a difficult query.\n\nI have 2 tables:\ncust_acct (9million rows)\ncust_orders (200,000 rows)\n\nHere's the query:\n\nSELECT\n a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n a.status, a.customer_location_id, a.added_date,\n o.agent_id, p.order_location_id_id,\n COALESCE(a.customer_location_id, p.order_location_id) AS \norder_location_id\nFROM\n cust_acct a JOIN\n cust_orders o\n ON a.order_id = p.order_id;\n\nI can't get it to run much faster that about 13 seconds, in most cases \nit's more like 30 seconds.\nWe have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is at 8GB\n\n\nI've tried separating the queries as filtering queries & joining the \nresults, disabling seq scans, upping work_mem and half a dozen other \napproaches. Here's the explain plan:\n\n Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n Hash Cond: (a.order_id = o.order_id)\n -> Seq Scan on cust_acct a (cost=0.00..537962.30 rows=9783130 \nwidth=92)\n -> Hash (cost=122.69..122.69 rows=2269 width=12)\n -> Seq Scan on cust_orders o (cost=0.00..122.69 rows=2269 \nwidth=12)\n\nThanks in advance for any help, tips, etc...\n\n\n\n\n\n\n\n\n\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n\n\n\n\n\n\n Hi all ;\n\n I'm trying to tune a difficult query.\n\n I have 2 tables:\n cust_acct (9million rows)\n cust_orders (200,000 rows)\n\n Here's the query:\n\n SELECT\n a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n a.status, a.customer_location_id, a.added_date,\n o.agent_id, p.order_location_id_id,\n COALESCE(a.customer_location_id, p.order_location_id) AS\n order_location_id\n FROM\n cust_acct a JOIN\n cust_orders o\n ON a.order_id = p.order_id;\n\n I can't get it to run much faster that about 13 seconds, in most\n cases it's more like 30 seconds.\n We have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is\n at 8GB\n\n\n I've tried separating the queries as filtering queries & joining\n the results, disabling seq scans, upping work_mem and half a dozen\n other approaches. Here's the explain plan:\n\n Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n Hash Cond: (a.order_id = o.order_id)\n -> Seq Scan on cust_acct a (cost=0.00..537962.30\n rows=9783130 width=92)\n -> Hash (cost=122.69..122.69 rows=2269 width=12)\n -> Seq Scan on cust_orders o (cost=0.00..122.69\n rows=2269 width=12)\n\n Thanks in advance for any help, tips, etc...\n\n\n\n\n\n\n\n\n\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State \nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------",
"msg_date": "Tue, 11 Oct 2011 11:52:35 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query tuning help"
},
{
"msg_contents": "Hello\n\nplease, send EXPLAIN ANALYZE output instead.\n\nRegards\n\nPavel Stehule\n\n2011/10/11 CS DBA <[email protected]>:\n> Hi all ;\n>\n> I'm trying to tune a difficult query.\n>\n> I have 2 tables:\n> cust_acct (9million rows)\n> cust_orders (200,000 rows)\n>\n> Here's the query:\n>\n> SELECT\n> a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n> a.status, a.customer_location_id, a.added_date,\n> o.agent_id, p.order_location_id_id,\n> COALESCE(a.customer_location_id, p.order_location_id) AS\n> order_location_id\n> FROM\n> cust_acct a JOIN\n> cust_orders o\n> ON a.order_id = p.order_id;\n>\n> I can't get it to run much faster that about 13 seconds, in most cases it's\n> more like 30 seconds.\n> We have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is at 8GB\n>\n>\n> I've tried separating the queries as filtering queries & joining the\n> results, disabling seq scans, upping work_mem and half a dozen other\n> approaches. Here's the explain plan:\n>\n> Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n> Hash Cond: (a.order_id = o.order_id)\n> -> Seq Scan on cust_acct a (cost=0.00..537962.30 rows=9783130 width=92)\n> -> Hash (cost=122.69..122.69 rows=2269 width=12)\n> -> Seq Scan on cust_orders o (cost=0.00..122.69 rows=2269\n> width=12)\n>\n> Thanks in advance for any help, tips, etc...\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> --\n> ---------------------------------------------\n> Kevin Kempter - Constent State\n> A PostgreSQL Professional Services Company\n> www.consistentstate.com\n> ---------------------------------------------\n",
"msg_date": "Tue, 11 Oct 2011 20:02:21 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "On 11 October 2011 19:52, CS DBA <[email protected]> wrote:\n\n> Hi all ;\n>\n> I'm trying to tune a difficult query.\n>\n> I have 2 tables:\n> cust_acct (9million rows)\n> cust_orders (200,000 rows)\n>\n> Here's the query:\n>\n> SELECT\n> a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n> a.status, a.customer_location_id, a.added_date,\n> o.agent_id, p.order_location_id_id,\n> COALESCE(a.customer_location_id, p.order_location_id) AS\n> order_location_id\n> FROM\n> cust_acct a JOIN\n> cust_orders o\n> ON a.order_id = p.order_id;\n>\n> I can't get it to run much faster that about 13 seconds, in most cases it's\n> more like 30 seconds.\n> We have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is at 8GB\n>\n>\n> I've tried separating the queries as filtering queries & joining the\n> results, disabling seq scans, upping work_mem and half a dozen other\n> approaches. Here's the explain plan:\n>\n> Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n> Hash Cond: (a.order_id = o.order_id)\n> -> Seq Scan on cust_acct a (cost=0.00..537962.30 rows=9783130\n> width=92)\n> -> Hash (cost=122.69..122.69 rows=2269 width=12)\n> -> Seq Scan on cust_orders o (cost=0.00..122.69 rows=2269\n> width=12)\n>\n> Thanks in advance for any help, tips, etc...\n>\n>\n>\n>\n\nHi,\ntwo simple questions:\n\n- do you really need getting all 9M rows?\n- show us the table structure, together with index definitions\n\nregards\nSzymon\n\nOn 11 October 2011 19:52, CS DBA <[email protected]> wrote:\n\n Hi all ;\n\n I'm trying to tune a difficult query.\n\n I have 2 tables:\n cust_acct (9million rows)\n cust_orders (200,000 rows)\n\n Here's the query:\n\n SELECT\n a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n a.status, a.customer_location_id, a.added_date,\n o.agent_id, p.order_location_id_id,\n COALESCE(a.customer_location_id, p.order_location_id) AS\n order_location_id\n FROM\n cust_acct a JOIN\n cust_orders o\n ON a.order_id = p.order_id;\n\n I can't get it to run much faster that about 13 seconds, in most\n cases it's more like 30 seconds.\n We have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is\n at 8GB\n\n\n I've tried separating the queries as filtering queries & joining\n the results, disabling seq scans, upping work_mem and half a dozen\n other approaches. Here's the explain plan:\n\n Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n Hash Cond: (a.order_id = o.order_id)\n -> Seq Scan on cust_acct a (cost=0.00..537962.30\n rows=9783130 width=92)\n -> Hash (cost=122.69..122.69 rows=2269 width=12)\n -> Seq Scan on cust_orders o (cost=0.00..122.69\n rows=2269 width=12)\n\n Thanks in advance for any help, tips, etc...\n\n\n\nHi,two simple questions:- do you really need getting all 9M rows?- show us the table structure, together with index definitions\nregardsSzymon",
"msg_date": "Tue, 11 Oct 2011 20:03:45 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "On 10/11/2011 12:02 PM, Pavel Stehule wrote:\n> Hello\n>\n> please, send EXPLAIN ANALYZE output instead.\n>\n> Regards\n>\n> Pavel Stehule\n>\n> 2011/10/11 CS DBA<[email protected]>:\n>> Hi all ;\n>>\n>> I'm trying to tune a difficult query.\n>>\n>> I have 2 tables:\n>> cust_acct (9million rows)\n>> cust_orders (200,000 rows)\n>>\n>> Here's the query:\n>>\n>> SELECT\n>> a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n>> a.status, a.customer_location_id, a.added_date,\n>> o.agent_id, p.order_location_id_id,\n>> COALESCE(a.customer_location_id, p.order_location_id) AS\n>> order_location_id\n>> FROM\n>> cust_acct a JOIN\n>> cust_orders o\n>> ON a.order_id = p.order_id;\n>>\n>> I can't get it to run much faster that about 13 seconds, in most cases it's\n>> more like 30 seconds.\n>> We have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is at 8GB\n>>\n>>\n>> I've tried separating the queries as filtering queries& joining the\n>> results, disabling seq scans, upping work_mem and half a dozen other\n>> approaches. Here's the explain plan:\n>>\n>> Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n>> Hash Cond: (a.order_id = o.order_id)\n>> -> Seq Scan on cust_acct a (cost=0.00..537962.30 rows=9783130 width=92)\n>> -> Hash (cost=122.69..122.69 rows=2269 width=12)\n>> -> Seq Scan on cust_orders o (cost=0.00..122.69 rows=2269\n>> width=12)\n>>\n>> Thanks in advance for any help, tips, etc...\n>>\n>>\n>>\n>>\n>>\n\nExplain Analyze:\n\n\n Hash Join (cost=154.46..691776.11 rows=10059626 width=100) (actual \ntime=5.191..37551.360 rows=10063432 loops=1)\n Hash Cond: (a.order_id = o.order_id)\n -> Seq Scan on cust_acct a (cost=0.00..540727.26 rows=10059626 \nwidth=92) (actual time=0.022..18987.095 rows=10063432 loops=1)\n -> Hash (cost=124.76..124.76 rows=2376 width=12) (actual \ntime=5.135..5.135 rows=2534 loops=1)\n -> Seq Scan on cust_orders o (cost=0.00..124.76 rows=2376 \nwidth=12) (actual time=0.011..2.843 rows=2534 loops=1)\n Total runtime: 43639.105 ms\n(6 rows)\n\n\n\n\n\n\n\n>>\n>>\n>>\n>>\n>>\n>>\n>> --\n>> ---------------------------------------------\n>> Kevin Kempter - Constent State\n>> A PostgreSQL Professional Services Company\n>> www.consistentstate.com\n>> ---------------------------------------------\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n\n\n\n\n\n\n On 10/11/2011 12:02 PM, Pavel Stehule wrote:\n \nHello\n\nplease, send EXPLAIN ANALYZE output instead.\n\nRegards\n\nPavel Stehule\n\n2011/10/11 CS DBA <[email protected]>:\n\n\nHi all ;\n\nI'm trying to tune a difficult query.\n\nI have 2 tables:\ncust_acct (9million rows)\ncust_orders (200,000 rows)\n\nHere's the query:\n\nSELECT\n a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n a.status, a.customer_location_id, a.added_date,\n o.agent_id, p.order_location_id_id,\n COALESCE(a.customer_location_id, p.order_location_id) AS\norder_location_id\nFROM\n cust_acct a JOIN\n cust_orders o\n ON a.order_id = p.order_id;\n\nI can't get it to run much faster that about 13 seconds, in most cases it's\nmore like 30 seconds.\nWe have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers is at 8GB\n\n\nI've tried separating the queries as filtering queries & joining the\nresults, disabling seq scans, upping work_mem and half a dozen other\napproaches. 
Here's the explain plan:\n\n Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n Hash Cond: (a.order_id = o.order_id)\n -> Seq Scan on cust_acct a (cost=0.00..537962.30 rows=9783130 width=92)\n -> Hash (cost=122.69..122.69 rows=2269 width=12)\n -> Seq Scan on cust_orders o (cost=0.00..122.69 rows=2269\nwidth=12)\n\nThanks in advance for any help, tips, etc...\n\n\n\n\n\n\n\n\n\n Explain Analyze:\n\n\n Hash Join (cost=154.46..691776.11 rows=10059626 width=100)\n (actual time=5.191..37551.360 rows=10063432 loops=1)\n Hash Cond: (a.order_id =\n o.order_id) \n \n -> Seq Scan on cust_acct a (cost=0.00..540727.26\n rows=10059626 width=92) (actual time=0.022..18987.095\n rows=10063432 loops=1) \n -> Hash (cost=124.76..124.76 rows=2376 width=12) (actual\n time=5.135..5.135 rows=2534\n loops=1) \n \n -> Seq Scan on cust_orders o (cost=0.00..124.76\n rows=2376 width=12) (actual time=0.011..2.843 rows=2534 loops=1)\n Total runtime: 43639.105\n ms \n \n (6 rows)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n--\n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State \nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------",
"msg_date": "Tue, 11 Oct 2011 12:14:25 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "On 10/11/2011 12:03 PM, Szymon Guz wrote:\n>\n>\n> On 11 October 2011 19:52, CS DBA <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi all ;\n>\n> I'm trying to tune a difficult query.\n>\n> I have 2 tables:\n> cust_acct (9million rows)\n> cust_orders (200,000 rows)\n>\n> Here's the query:\n>\n> SELECT\n> a.account_id, a.customer_id, a.order_id, a.primary_contact_id,\n> a.status, a.customer_location_id, a.added_date,\n> o.agent_id, p.order_location_id_id,\n> COALESCE(a.customer_location_id, p.order_location_id) AS\n> order_location_id\n> FROM\n> cust_acct a JOIN\n> cust_orders o\n> ON a.order_id = p.order_id;\n>\n> I can't get it to run much faster that about 13 seconds, in most\n> cases it's more like 30 seconds.\n> We have a box with 8 2.5GZ cores and 28GB of ram, shared_buffers\n> is at 8GB\n>\n>\n> I've tried separating the queries as filtering queries & joining\n> the results, disabling seq scans, upping work_mem and half a dozen\n> other approaches. Here's the explain plan:\n>\n> Hash Join (cost=151.05..684860.30 rows=9783130 width=100)\n> Hash Cond: (a.order_id = o.order_id)\n> -> Seq Scan on cust_acct a (cost=0.00..537962.30 rows=9783130\n> width=92)\n> -> Hash (cost=122.69..122.69 rows=2269 width=12)\n> -> Seq Scan on cust_orders o (cost=0.00..122.69\n> rows=2269 width=12)\n>\n> Thanks in advance for any help, tips, etc...\n>\n>\n>\n>\n>\n> Hi,\n> two simple questions:\n>\n> - do you really need getting all 9M rows?\nunfortunately yes\n\n\n> - show us the table structure, together with index definitions\n>\n\n\ncust_acct table\n\n Column | Type \n| Modifiers\n-----------------------+-----------------------------+-------------------------------------------------------\n account_id | bigint | not null default \nnextval('cust_account_id_seq'::regclass)\n customer_id | character varying(10) |\n order_id | integer | not null\n primary_contact_id | bigint |\n status | accounts_status_type | not null\n customer_location_id | integer |\n added_date | timestamp with time zone | not null\nIndexes:\n \"cust_acct_pkey\" PRIMARY KEY, btree (account_id)\n \"cust_acct_cust_id_indx\" btree (customer_id)\n \"cust_acct_order_id_id_indx\" btree (order_id)\n \"cust_acct_pri_contact_id_indx\" btree (primary_contact_id)\n\n\n\n\n\ncust_orders table\n\n\n Column | Type \n| Modifiers\n-----------------------------+-----------------------------+------------------------------------------------------- \n\n order_id | integer | not null \ndefault nextval('order_id_seq'::regclass)\n backorder_tag_id | character varying(18) |\n order_location_id | integer | not null\n work_order_name | character varying(75) | not null\n status | programs_status_type | not null\n additional_info_tag_shipper | character(16) | not null\n additional_info_tag_cust | character(16) | not null\n additional_info_tag_manuf | character(16) | not null\n additional_info_tag_supply | character(16) | not null\n acct_active_dt | timestamp without time zone |\n acct_last_activity_date | timestamp without time zone |\n acct_backorder_items | boolean | not null \ndefault false\n custom_info1 | text |\n custom_info2 | text |\n custom_info3 | text |\n custom_info4 | text |\n custom_info5 | text |\n custom_info6 | text |\n custom_info7 | text |\nIndexes:\n \"cust_orders_pkey\" PRIMARY KEY, btree (order_id)\n \"cust_orders_order_id_loc_id_key\" UNIQUE, btree (order_id, \norder_location_id)\n \"cust_orders_loc_id_idx\" btree (order_location_id)\n\n\n\n\n\n\n\n\n\n\n\n\n\n> regards\n> Szymon\n>\n>\n\n\n-- 
\n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n\n\n\n\n\n\n On 10/11/2011 12:03 PM, Szymon Guz wrote:\n \n\nOn 11 October 2011 19:52, CS DBA <[email protected]>\n wrote:\n\n Hi all ;\n\n I'm trying to tune a difficult query.\n\n I have 2 tables:\n cust_acct (9million rows)\n cust_orders (200,000 rows)\n\n Here's the query:\n\n SELECT\n a.account_id, a.customer_id, a.order_id,\n a.primary_contact_id,\n a.status, a.customer_location_id, a.added_date,\n o.agent_id, p.order_location_id_id,\n COALESCE(a.customer_location_id, p.order_location_id) AS\n order_location_id\n FROM\n cust_acct a JOIN\n cust_orders o\n ON a.order_id = p.order_id;\n\n I can't get it to run much faster that about 13 seconds, in\n most cases it's more like 30 seconds.\n We have a box with 8 2.5GZ cores and 28GB of ram,\n shared_buffers is at 8GB\n\n\n I've tried separating the queries as filtering queries &\n joining the results, disabling seq scans, upping work_mem\n and half a dozen other approaches. Here's the explain plan:\n\n Hash Join (cost=151.05..684860.30 rows=9783130\n width=100)\n Hash Cond: (a.order_id = o.order_id)\n -> Seq Scan on cust_acct a (cost=0.00..537962.30\n rows=9783130 width=92)\n -> Hash (cost=122.69..122.69 rows=2269 width=12)\n -> Seq Scan on cust_orders o \n (cost=0.00..122.69 rows=2269 width=12)\n\n Thanks in advance for any help, tips, etc...\n \n\n\n\n\n\n\n\n\n\nHi,\n\ntwo simple questions:\n\n\n- do you really need getting all 9M rows?\n\n unfortunately yes\n\n\n\n- show us the table structure, together with index\n definitions\n\n\n\n\n\n cust_acct table\n\n Column | Type \n | Modifiers \n-----------------------+-----------------------------+-------------------------------------------------------\n account_id | bigint | not null\n default nextval('cust_account_id_seq'::regclass)\n customer_id | character varying(10) | \n order_id | integer | not null\n primary_contact_id | bigint | \n status | accounts_status_type | not null \n customer_location_id | integer | \n added_date | timestamp with time zone | not null\n Indexes:\n \"cust_acct_pkey\" PRIMARY KEY, btree (account_id)\n \"cust_acct_cust_id_indx\" btree (customer_id)\n \"cust_acct_order_id_id_indx\" btree (order_id)\n \"cust_acct_pri_contact_id_indx\" btree (primary_contact_id)\n\n\n\n\n\n cust_orders table\n\n\n Column | Type \n | \n Modifiers \n \n -----------------------------+-----------------------------+------------------------------------------------------- \n \n order_id | integer | not\n null default\n nextval('order_id_seq'::regclass) \n \n backorder_tag_id | character varying(18) \n | \n \n order_location_id | integer | not\n null \n \n work_order_name | character varying(75) | not\n null \n \n status | programs_status_type | not\n null \n additional_info_tag_shipper | character(16) | not\n null \n additional_info_tag_cust | character(16) | not\n null \n additional_info_tag_manuf | character(16) | not\n null \n additional_info_tag_supply | character(16) | not\n null \n acct_active_dt | timestamp without time zone\n | \n \n acct_last_activity_date | timestamp without time zone\n | \n \n acct_backorder_items | boolean | not\n null default\n false \n \n custom_info1 | text \n | \n \n custom_info2 | text \n | \n \n custom_info3 | text \n | \n \n custom_info4 | text \n | \n \n custom_info5 | text \n | \n \n custom_info6 | text \n | \n custom_info7 | text \n | \n \n Indexes: \n 
\n \"cust_orders_pkey\" PRIMARY KEY, btree\n (order_id) \n \n \"cust_orders_order_id_loc_id_key\" UNIQUE, btree (order_id,\n order_location_id) \n \n \"cust_orders_loc_id_idx\" btree (order_location_id) \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nregards\nSzymon\n\n\n\n\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State \nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------",
"msg_date": "Tue, 11 Oct 2011 12:31:07 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": ">\n>\n> Hash Join (cost=154.46..691776.11 rows=10059626 width=100) (actual\n> time=5.191..37551.360 rows=10063432 loops=1)\n> Hash Cond: (a.order_id =\n> o.order_id)\n> -> Seq Scan on cust_acct a (cost=0.00..540727.26 rows=10059626\n> width=92) (actual time=0.022..18987.095 rows=10063432\n> loops=1)\n> -> Hash (cost=124.76..124.76 rows=2376 width=12) (actual\n> time=5.135..5.135 rows=2534\n> loops=1)\n> -> Seq Scan on cust_orders o (cost=0.00..124.76 rows=2376\n> width=12) (actual time=0.011..2.843 rows=2534 loops=1)\n> Total runtime: 43639.105\n> ms\n> (6 rows)\n>\n\nI am thinking so this time is adequate - processing of 10 mil rows\nresult must be slow\n\na tips:\n\n* recheck a seq. read speed - if this is about expected values\n\n* play with work_mem - probably is not enough for one bucket - you can\ndecrease time about 10-20 sec, but attention to going to swap -\nEXPLAIN ANALYZE VERBOSE show a number of buckets - ideal is one.\n\n* use a some filter if it's possible\n* use a limit if it's possible\n\nif you really should to process all rows and you need better reaction\ntime, try to use a cursor. It is optimized for fast first row\n\nRegards\n\nPavel Stehule\n\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> --\n> ---------------------------------------------\n> Kevin Kempter - Constent State\n> A PostgreSQL Professional Services Company\n> www.consistentstate.com\n> ---------------------------------------------\n>\n>\n> --\n> ---------------------------------------------\n> Kevin Kempter - Constent State\n> A PostgreSQL Professional Services Company\n> www.consistentstate.com\n> ---------------------------------------------\n",
"msg_date": "Tue, 11 Oct 2011 20:59:51 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
}
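A rough sketch of Pavel's last two suggestions against the tables from this thread (the column list is trimmed, the stray "p" alias from the original query is written as "o", and the work_mem value is only an example):

SET work_mem = '256MB';          -- session-level experiment; too much can push the box into swap

BEGIN;
DECLARE acct_cur CURSOR FOR
    SELECT a.account_id, a.customer_id, a.order_id, a.added_date,
           COALESCE(a.customer_location_id, o.order_location_id) AS order_location_id
    FROM cust_acct a
    JOIN cust_orders o ON a.order_id = o.order_id;
FETCH 10000 FROM acct_cur;       -- first rows come back quickly instead of waiting for all ~10M
-- keep issuing FETCH until it returns no rows
CLOSE acct_cur;
COMMIT;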
] |
[
{
"msg_contents": "Hi \n\nI want to know how can i measure runtime query in postgresql if i use command line psql?\nnot explain runtime for the query such as the runtime which appear in pgadmin ?\nsuch as Total query runtime: 203 ms.",
"msg_date": "Tue, 11 Oct 2011 12:08:31 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql query runtime "
},
{
"msg_contents": "2011/10/11 Radhya sahal <[email protected]>:\n> Hi\n> I want to know how can i measure runtime query in postgresql if i use\n> command line psql?\n> not explain rutime for the query such as the runtime which appear in pgadmin\n> ?\n> such as Total query runtime: 203 ms.\n>\n\nHello\n\nuse\n\n\\timing\n\n\\? is your friend :)\n\nRegards\n\nPavel Stehule\n",
"msg_date": "Tue, 11 Oct 2011 21:11:58 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql query runtime"
},
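For reference, this is roughly what \timing looks like in a psql session (the table name and figures below are made up):

db=# \timing on
Timing is on.
db=# SELECT count(*) FROM some_table;
 count
--------
 123456
(1 row)

Time: 203.123 ms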
{
"msg_contents": "On 11 October 2011 21:08, Radhya sahal <[email protected]> wrote:\n\n> Hi\n> I want to know how can i measure runtime query in postgresql if i use\n> command line psql?\n> not explain rutime for the query such as the runtime which appear in\n> pgadmin ?\n> such as Total query runtime: 203 ms.\n>\n\n\nrun this in psql:\n\n\\t\n\nregards\nSzymon",
"msg_date": "Tue, 11 Oct 2011 21:13:04 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql query runtime"
},
{
"msg_contents": "On 11 October 2011 21:13, Szymon Guz <[email protected]> wrote:\n\n>\n>\n> On 11 October 2011 21:08, Radhya sahal <[email protected]> wrote:\n>\n>> Hi\n>> I want to know how can i measure runtime query in postgresql if i use\n>> command line psql?\n>> not explain rutime for the query such as the runtime which appear in\n>> pgadmin ?\n>> such as Total query runtime: 203 ms.\n>>\n>\n>\n> run this in psql:\n>\n> \\t\n>\n> regards\n> Szymon\n>\n\n\nyes... \\timing of course... I think I shouldn't send emails when I've got a\nfever :)\n\n\n- Szymon",
"msg_date": "Tue, 11 Oct 2011 21:14:46 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql query runtime"
},
{
"msg_contents": "Radhya sahal <[email protected]> wrote:\n \n> I want to know how can i measure runtime query in postgresql if i\n> use command line psql?\n \n\\timing on\n \n-Kevin\n",
"msg_date": "Tue, 11 Oct 2011 14:15:58 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql query runtime"
},
{
"msg_contents": "thank's Kevin\nhow i can read query runtime if i want to run query from java ?\nregards,,,\n\n\n\n________________________________\nFrom: Kevin Grittner <[email protected]>\nTo: pgsql-performance group <[email protected]>; Radhya sahal <[email protected]>\nSent: Tuesday, October 11, 2011 12:15 PM\nSubject: Re: [PERFORM] postgresql query runtime\n\nRadhya sahal <[email protected]> wrote:\n\n> I want to know how can i measure runtime query in postgresql if i\n> use command line psql?\n\n\\timing on\n\n-Kevin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 11 Oct 2011 14:29:23 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql query runtime"
},
{
"msg_contents": "Radhya sahal <[email protected]> wrote:\n \n> how i can read query runtime if i want to run query from java ?\n \nPersonally, I capture System.currentTimeInMillis() before and after\nthe query, and subtract. In our framework, we have a monitor class\nto encapsulate that.\n \n-Kevin\n",
"msg_date": "Tue, 11 Oct 2011 16:43:00 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql query runtime"
}
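If the timing also needs to be captured outside the Java application, the server can log statement durations on its side regardless of the client; a sketch using a made-up database name (needs superuser or ALTER DATABASE privileges, and the durations then appear in the PostgreSQL log):

-- log every statement slower than 100 ms for one database:
ALTER DATABASE mydb SET log_min_duration_statement = '100ms';

-- or, as a superuser, only for the current session while testing:
SET log_min_duration_statement = 0;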
] |
[
{
"msg_contents": "Hi all,\nI am running 9.03 with the settings listed below. I have a prohibitively\nslow query in an application which has an overall good performance:\n\nselect\n cast (SD.detectorid as numeric),\n CAST( ( (SD.createdtime - 0 )/ 18000000000000::bigint ) AS numeric) as\ntimegroup,\n sum(datafromsource)+sum(datafromdestination) as numbytes,\n CAST ( sum(packetsfromsource)+sum(packetsfromdestination) AS numeric) as\nnumpackets\nfrom\n appqosdata.icmptraffic SD , appqosdata.icmptraffic_classes CL\nwhere\n SD.detectorid >= 0\n and CL.detectorid = SD.detectorid\n and CL.sessionid = SD.sessionid\n and CL.createdtime = SD.createdtime\n and SD.detectorid = 1\n and SD.createdtime >= 1317726000000000000::bigint and SD.createdtime <\n1318326120000000000::bigint\n and CL.sessionid < 1318326120000000000::bigint\n and CL.classid = 1\ngroup by\n SD.detectorid, timegroup\n\nappqosdata.icmptraffic and appqosdata.icmptraffic_classes are both\npartitioned.\n\nCREATE TABLE appqosdata.icmptraffic\n(\n detectorid smallint not null default(0), -- references\nappqosdata.detectors(id),\n sessionid bigint not null,\n createdtime bigint not null,\n\n...\n --primary key(detectorid, sessionid, createdtime) defined in the\nchildren tables\n);\n\nCREATE TABLE appqosdata.icmptraffic_classes\n(\n detectorid smallint not null,\n sessionid bigint not null,\n createdtime bigint not null,\n\n classid integer not null\n\n -- definitions in the children tables:\n --primary key(detectorid, sessionid, createdtime, classid)\n --foreign key(detectorid, sessionid, createdtime) references\nappqosdata.icmptraffic(detectorid, sessionid, createdtime),\n --foreign key(classid) references appqosdata.display_classes(id),\n);\n\n\"HashAggregate (cost=154.24..154.28 rows=1 width=34) (actual\ntime=7594069.940..7594069.983 rows=19 loops=1)\"\n\" Output: (sd.detectorid)::numeric, ((((sd.createdtime - 0) /\n18000000000000::bigint))::numeric), (sum(sd.datafromsource) +\nsum(sd.datafromdestination)), ((sum(sd.packetsfromsource) +\nsum(sd.packetsfromdestination)))::numeric, sd.detectorid\"\n\" -> Nested Loop (cost=0.00..154.23 rows=1 width=34) (actual\ntime=0.140..7593838.258 rows=50712 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\n(((sd.createdtime - 0) / 18000000000000::bigint))::numeric\"\n\" Join Filter: ((sd.sessionid = cl.sessionid) AND (sd.createdtime =\ncl.createdtime))\"\n\" -> Append (cost=0.00..88.37 rows=7 width=18) (actual\ntime=0.063..333.355 rows=51776 loops=1)\"\n\" -> Seq Scan on appqosdata.icmptraffic_classes cl\n(cost=0.00..37.48 rows=1 width=18) (actual time=0.013..0.013 rows=0\nloops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Filter: ((cl.sessionid < 1318326120000000000::bigint)\nAND (cl.detectorid = 1) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_10_pkey on\nappqosdata.icmptraffic_classes_10 cl (cost=0.00..8.36 rows=1 width=18)\n(actual time=0.046..14.205 rows=3985 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid <\n1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_11_pkey on\nappqosdata.icmptraffic_classes_11 cl (cost=0.00..8.62 rows=1 width=18)\n(actual time=0.038..52.757 rows=14372 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid <\n1318326120000000000::bigint) AND 
(cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_12_pkey on\nappqosdata.icmptraffic_classes_12 cl (cost=0.00..8.60 rows=1 width=18)\n(actual time=0.033..47.845 rows=13512 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid <\n1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_13_pkey on\nappqosdata.icmptraffic_classes_13 cl (cost=0.00..8.59 rows=1 width=18)\n(actual time=0.030..46.504 rows=13274 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid <\n1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_14_pkey on\nappqosdata.icmptraffic_classes_14 cl (cost=0.00..8.43 rows=1 width=18)\n(actual time=0.025..22.868 rows=6633 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid <\n1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_15_pkey on\nappqosdata.icmptraffic_classes_15 cl (cost=0.00..8.30 rows=1 width=18)\n(actual time=0.014..0.014 rows=0 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid <\n1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Materialize (cost=0.00..65.13 rows=6 width=42) (actual\ntime=0.001..73.261 rows=50915 loops=51776)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\nsd.sessionid\"\n\" -> Append (cost=0.00..65.10 rows=6 width=42) (actual\ntime=0.059..244.693 rows=50915 loops=1)\"\n\" -> Seq Scan on appqosdata.icmptraffic sd\n(cost=0.00..22.60 rows=1 width=42) (actual time=0.001..0.001 rows=0\nloops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination, sd.sessionid\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime\n>= 1317726000000000000::bigint) AND (sd.createdtime <\n1318326120000000000::bigint) AND (sd.detectorid = 1))\"\n\" -> Index Scan using icmptraffic_10_pkey on\nappqosdata.icmptraffic_10 sd (cost=0.00..8.35 rows=1 width=42) (actual\ntime=0.053..7.807 rows=3997 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND\n(sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND\n(sd.createdtime < 1318326120000000000::bigint))\"\n\" -> Index Scan using icmptraffic_11_pkey on\nappqosdata.icmptraffic_11 sd (cost=0.00..8.59 rows=1 width=42) (actual\ntime=0.025..27.957 rows=14372 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND\n(sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND\n(sd.createdtime < 1318326120000000000::bigint))\"\n\" -> Index Scan using icmptraffic_12_pkey on\nappqosdata.icmptraffic_12 sd (cost=0.00..8.58 rows=1 width=42) (actual\ntime=0.027..26.217 rows=13512 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND\n(sd.detectorid = 1) AND 
(sd.createdtime >= 1317726000000000000::bigint) AND\n(sd.createdtime < 1318326120000000000::bigint))\"\n\" -> Index Scan using icmptraffic_13_pkey on\nappqosdata.icmptraffic_13 sd (cost=0.00..8.56 rows=1 width=42) (actual\ntime=0.030..26.075 rows=13430 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND\n(sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND\n(sd.createdtime < 1318326120000000000::bigint))\"\n\" -> Index Scan using icmptraffic_14_pkey on\nappqosdata.icmptraffic_14 sd (cost=0.00..8.41 rows=1 width=42) (actual\ntime=0.027..11.040 rows=5604 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime,\nsd.datafromsource, sd.datafromdestination, sd.packetsfromsource,\nsd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND\n(sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND\n(sd.createdtime < 1318326120000000000::bigint))\"\n\"Total runtime: 7594071.137 ms\"\n\n\n\n name |\ncurrent_setting\n------------------------------+----------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.0.3 on\namd64-portbld-freebsd8.0, compiled by GCC cc (GCC) 4.2.1 20070719\n[FreeBSD], 64-bit\n autovacuum | on\n autovacuum_analyze_threshold | 500000\n autovacuum_max_workers | 1\n autovacuum_naptime | 1h\n autovacuum_vacuum_threshold | 500000\n checkpoint_segments | 64\n effective_cache_size | 3GB\n fsync | on\n lc_collate | C\n lc_ctype | C\n listen_addresses | *\n log_destination | syslog, stderr\n log_min_duration_statement | 5ms\n log_rotation_age | 1d\n log_rotation_size | 100MB\n logging_collector | on\n max_connections | 30\n max_stack_depth | 2MB\n server_encoding | UTF8\n shared_buffers | 1793MB\n silent_mode | on\n synchronous_commit | on\n syslog_facility | local0\n TimeZone | Europe/Jersey\n update_process_title | off\n wal_buffers | 128kB\n work_mem | 24MB\n\nThanks for any help,\nSvetlin Manavski\n\nHi all,I am running 9.03 with the settings listed below. I have a prohibitively slow query in an application which has an overall good performance:select cast (SD.detectorid as numeric), CAST( ( (SD.createdtime - 0 )/ 18000000000000::bigint ) AS numeric) as timegroup, \n sum(datafromsource)+sum(datafromdestination) as numbytes, CAST ( sum(packetsfromsource)+sum(packetsfromdestination) AS numeric) as numpackets from appqosdata.icmptraffic SD , appqosdata.icmptraffic_classes CL \nwhere SD.detectorid >= 0 and CL.detectorid = SD.detectorid and CL.sessionid = SD.sessionid and CL.createdtime = SD.createdtime and SD.detectorid = 1 and SD.createdtime >= 1317726000000000000::bigint and SD.createdtime < 1318326120000000000::bigint \n and CL.sessionid < 1318326120000000000::bigint and CL.classid = 1 group by SD.detectorid, timegroupappqosdata.icmptraffic and appqosdata.icmptraffic_classes are both partitioned.\nCREATE TABLE appqosdata.icmptraffic ( detectorid smallint not null default(0), -- references appqosdata.detectors(id), sessionid bigint not null, createdtime bigint not null,... 
--primary key(detectorid, sessionid, createdtime) defined in the children tables\n);CREATE TABLE appqosdata.icmptraffic_classes( detectorid smallint not null, sessionid bigint not null, createdtime bigint not null, classid integer not null -- definitions in the children tables:\n --primary key(detectorid, sessionid, createdtime, classid)\n --foreign key(detectorid, sessionid, createdtime) references appqosdata.icmptraffic(detectorid, sessionid, createdtime), --foreign key(classid) references appqosdata.display_classes(id),);\"HashAggregate (cost=154.24..154.28 rows=1 width=34) (actual time=7594069.940..7594069.983 rows=19 loops=1)\"\n\" Output: (sd.detectorid)::numeric, ((((sd.createdtime - 0) / 18000000000000::bigint))::numeric), (sum(sd.datafromsource) + sum(sd.datafromdestination)), ((sum(sd.packetsfromsource) + sum(sd.packetsfromdestination)))::numeric, sd.detectorid\"\n\" -> Nested Loop (cost=0.00..154.23 rows=1 width=34) (actual time=0.140..7593838.258 rows=50712 loops=1)\"\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, (((sd.createdtime - 0) / 18000000000000::bigint))::numeric\"\n\" Join Filter: ((sd.sessionid = cl.sessionid) AND (sd.createdtime = cl.createdtime))\"\" -> Append (cost=0.00..88.37 rows=7 width=18) (actual time=0.063..333.355 rows=51776 loops=1)\"\n\" -> Seq Scan on appqosdata.icmptraffic_classes cl (cost=0.00..37.48 rows=1 width=18) (actual time=0.013..0.013 rows=0 loops=1)\"\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Filter: ((cl.sessionid < 1318326120000000000::bigint) AND (cl.detectorid = 1) AND (cl.classid = 1))\"\" -> Index Scan using icmptraffic_classes_10_pkey on appqosdata.icmptraffic_classes_10 cl (cost=0.00..8.36 rows=1 width=18) (actual time=0.046..14.205 rows=3985 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid < 1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_11_pkey on appqosdata.icmptraffic_classes_11 cl (cost=0.00..8.62 rows=1 width=18) (actual time=0.038..52.757 rows=14372 loops=1)\"\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid < 1318326120000000000::bigint) AND (cl.classid = 1))\"\" -> Index Scan using icmptraffic_classes_12_pkey on appqosdata.icmptraffic_classes_12 cl (cost=0.00..8.60 rows=1 width=18) (actual time=0.033..47.845 rows=13512 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid < 1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_13_pkey on appqosdata.icmptraffic_classes_13 cl (cost=0.00..8.59 rows=1 width=18) (actual time=0.030..46.504 rows=13274 loops=1)\"\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid < 1318326120000000000::bigint) AND (cl.classid = 1))\"\" -> Index Scan using icmptraffic_classes_14_pkey on appqosdata.icmptraffic_classes_14 cl (cost=0.00..8.43 rows=1 width=18) (actual time=0.025..22.868 rows=6633 loops=1)\"\n\" Output: cl.detectorid, cl.sessionid, cl.createdtime\"\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid < 1318326120000000000::bigint) AND (cl.classid = 1))\"\n\" -> Index Scan using icmptraffic_classes_15_pkey on appqosdata.icmptraffic_classes_15 cl (cost=0.00..8.30 rows=1 width=18) (actual time=0.014..0.014 rows=0 loops=1)\"\" Output: 
cl.detectorid, cl.sessionid, cl.createdtime\"\n\" Index Cond: ((cl.detectorid = 1) AND (cl.sessionid < 1318326120000000000::bigint) AND (cl.classid = 1))\"\" -> Materialize (cost=0.00..65.13 rows=6 width=42) (actual time=0.001..73.261 rows=50915 loops=51776)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\" -> Append (cost=0.00..65.10 rows=6 width=42) (actual time=0.059..244.693 rows=50915 loops=1)\"\n\" -> Seq Scan on appqosdata.icmptraffic sd (cost=0.00..22.60 rows=1 width=42) (actual time=0.001..0.001 rows=0 loops=1)\"\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\n\" Filter: ((sd.detectorid >= 0) AND (sd.createdtime >= 1317726000000000000::bigint) AND (sd.createdtime < 1318326120000000000::bigint) AND (sd.detectorid = 1))\"\" -> Index Scan using icmptraffic_10_pkey on appqosdata.icmptraffic_10 sd (cost=0.00..8.35 rows=1 width=42) (actual time=0.053..7.807 rows=3997 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\" Index Cond: ((sd.detectorid >= 0) AND (sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND (sd.createdtime < 1318326120000000000::bigint))\"\n\" -> Index Scan using icmptraffic_11_pkey on appqosdata.icmptraffic_11 sd (cost=0.00..8.59 rows=1 width=42) (actual time=0.025..27.957 rows=14372 loops=1)\"\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND (sd.createdtime < 1318326120000000000::bigint))\"\" -> Index Scan using icmptraffic_12_pkey on appqosdata.icmptraffic_12 sd (cost=0.00..8.58 rows=1 width=42) (actual time=0.027..26.217 rows=13512 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\" Index Cond: ((sd.detectorid >= 0) AND (sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND (sd.createdtime < 1318326120000000000::bigint))\"\n\" -> Index Scan using icmptraffic_13_pkey on appqosdata.icmptraffic_13 sd (cost=0.00..8.56 rows=1 width=42) (actual time=0.030..26.075 rows=13430 loops=1)\"\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\n\" Index Cond: ((sd.detectorid >= 0) AND (sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND (sd.createdtime < 1318326120000000000::bigint))\"\" -> Index Scan using icmptraffic_14_pkey on appqosdata.icmptraffic_14 sd (cost=0.00..8.41 rows=1 width=42) (actual time=0.027..11.040 rows=5604 loops=1)\"\n\" Output: sd.detectorid, sd.createdtime, sd.datafromsource, sd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination, sd.sessionid\"\" Index Cond: ((sd.detectorid >= 0) AND (sd.detectorid = 1) AND (sd.createdtime >= 1317726000000000000::bigint) AND (sd.createdtime < 1318326120000000000::bigint))\"\n\"Total runtime: 7594071.137 ms\" name | current_setting ------------------------------+----------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.0.3 on 
amd64-portbld-freebsd8.0, compiled by GCC cc (GCC) 4.2.1 20070719 [FreeBSD], 64-bit autovacuum | on autovacuum_analyze_threshold | 500000 autovacuum_max_workers | 1\n autovacuum_naptime | 1h autovacuum_vacuum_threshold | 500000 checkpoint_segments | 64 effective_cache_size | 3GB fsync | on lc_collate | C\n lc_ctype | C listen_addresses | * log_destination | syslog, stderr log_min_duration_statement | 5ms log_rotation_age | 1d log_rotation_size | 100MB\n logging_collector | on max_connections | 30 max_stack_depth | 2MB server_encoding | UTF8 shared_buffers | 1793MB silent_mode | on\n synchronous_commit | on syslog_facility | local0 TimeZone | Europe/Jersey update_process_title | off wal_buffers | 128kB work_mem | 24MB\nThanks for any help,Svetlin Manavski",
"msg_date": "Wed, 12 Oct 2011 09:55:46 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join over two tables of 50K records takes 2 hours"
},
{
"msg_contents": "Svetlin Manavski <[email protected]> writes:\n> I am running 9.03 with the settings listed below. I have a prohibitively\n> slow query in an application which has an overall good performance:\n\nIt's slow because the planner is choosing a nestloop join on the\nstrength of its estimate that there's only a half dozen rows to be\njoined. You need to figure out why those rowcount estimates are so bad.\nI suspect that you've shot yourself in the foot by raising\nautovacuum_analyze_threshold so high --- most likely, none of those\ntables have ever gotten analyzed. And what's with the high\nautovacuum_naptime setting? You might need to increase\ndefault_statistics_target too, but first see if a manual ANALYZE makes\nthings better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Oct 2011 00:37:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join over two tables of 50K records takes 2 hours "
},
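The concrete commands for Tom's suggestions, using the partition names that appear in the plan above (the statistics target of 1000 is only an example, echoing Scott's advice):

-- Refresh per-partition statistics first and re-check the row estimates:
ANALYZE appqosdata.icmptraffic_10;
ANALYZE appqosdata.icmptraffic_classes_10;
-- ...repeat for the other child partitions...

-- If estimates are still far off, raise the target before re-analyzing:
SET default_statistics_target = 1000;
ANALYZE appqosdata.icmptraffic_10;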
{
"msg_contents": "It seems like your row estimate are way off, with the planner\nexpecting 1 and getting 3000 or so. Have you tried cranking up\ndefault stats target to say 1000, running analyze and seeing what\nhappens?\n\nIf that doesn't do it, try temporarily turning off nested loops:\n\nset enable_nestloop = off;\nexplain analyze yourqueryhere;\n",
"msg_date": "Thu, 13 Oct 2011 22:38:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join over two tables of 50K records takes 2 hours"
},
{
"msg_contents": "Thank you guys for spotting the problem immediately.\nThe reason for such autovacuum thresholds is that these tables are designed\nfor very high rate of inserts and I have a specific routine to analyze them\nin a more controlled way. Infact the stats target of some of the fields is\nalso high. However that routine was failing to perform analyze on\nappqosdata.icmptraffic and its children due to another issue...\n\nRegards,\nSvetlin Manavski\n\nOn Fri, Oct 14, 2011 at 5:37 AM, Tom Lane <[email protected]> wrote:\n\n> Svetlin Manavski <[email protected]> writes:\n> > I am running 9.03 with the settings listed below. I have a prohibitively\n> > slow query in an application which has an overall good performance:\n>\n> It's slow because the planner is choosing a nestloop join on the\n> strength of its estimate that there's only a half dozen rows to be\n> joined. You need to figure out why those rowcount estimates are so bad.\n> I suspect that you've shot yourself in the foot by raising\n> autovacuum_analyze_threshold so high --- most likely, none of those\n> tables have ever gotten analyzed. And what's with the high\n> autovacuum_naptime setting? You might need to increase\n> default_statistics_target too, but first see if a manual ANALYZE makes\n> things better.\n>\n> regards, tom lane\n>\n\nThank you guys for spotting the problem immediately.The reason for such autovacuum thresholds is that these tables are designed for very high rate of inserts and I have a specific routine to analyze them in a more controlled way. Infact the stats target of some of the fields is also high. However that routine was failing to perform analyze on appqosdata.icmptraffic and its children due to another issue...\nRegards,Svetlin ManavskiOn Fri, Oct 14, 2011 at 5:37 AM, Tom Lane <[email protected]> wrote:\nSvetlin Manavski <[email protected]> writes:\n> I am running 9.03 with the settings listed below. I have a prohibitively\n> slow query in an application which has an overall good performance:\n\nIt's slow because the planner is choosing a nestloop join on the\nstrength of its estimate that there's only a half dozen rows to be\njoined. You need to figure out why those rowcount estimates are so bad.\nI suspect that you've shot yourself in the foot by raising\nautovacuum_analyze_threshold so high --- most likely, none of those\ntables have ever gotten analyzed. And what's with the high\nautovacuum_naptime setting? You might need to increase\ndefault_statistics_target too, but first see if a manual ANALYZE makes\nthings better.\n\n regards, tom lane",
"msg_date": "Fri, 14 Oct 2011 09:35:57 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join over two tables of 50K records takes 2 hours"
},
{
"msg_contents": "On Fri, Oct 14, 2011 at 2:35 AM, Svetlin Manavski\n<[email protected]> wrote:\n> Thank you guys for spotting the problem immediately.\n> The reason for such autovacuum thresholds is that these tables are designed\n> for very high rate of inserts and I have a specific routine to analyze them\n> in a more controlled way. Infact the stats target of some of the fields is\n> also high. However that routine was failing to perform analyze on\n> appqosdata.icmptraffic and its children due to another issue...\n\nNote that it's possible to set many of the autovacuum values per table\nso you don't have to do a brainyectomy on the whole autovac system.\n",
"msg_date": "Fri, 14 Oct 2011 03:41:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join over two tables of 50K records takes 2 hours"
}
] |
[
{
"msg_contents": "HI,\n\n \n\nI am using PostgreSQL 8.4 in windows. I have created a database and\nsome tables on it. Also created a table space and some tables in it. My\napplication inserts data into these tables in every second. The\napplication is a continuous running application. My issue is that after\na 20-30 days continuous run ( Machine logged off 2 times), some of the\nfiles in the \\16384 folder of newly created table space are dropped\nautomatically. Can you tell me the reason for this.? How can we\nrecover from this?\n\n \n\n \n\nI have a task that delete the data(only data stored in table) from the\ntables in the tablespace in certain interval. Is there any problem\nrelated to this?\n\nCan you tell me the reason for this?\n\n \n\n \n\nThanks & Regards,\n\nVishnu S\n\n \n\n***** Confidentiality Statement/Disclaimer *****\n\nThis message and any attachments is intended for the sole use of the intended recipient. It may contain confidential information. Any unauthorized use, dissemination or modification is strictly prohibited. If you are not the intended recipient, please notify the sender immediately then delete it from all your systems, and do not copy, use or print. Internet communications are not secure and it is the responsibility of the recipient to make sure that it is virus/malicious code exempt.\nThe company/sender cannot be responsible for any unauthorized alterations or modifications made to the contents. If you require any form of confirmation of the contents, please contact the company/sender. The company/sender is not liable for any errors or omissions in the content of this message.\n\n\n\n\n\n\n\n\n\n\nHI,\n \nI am using PostgreSQL 8.4 in windows. I have created\na database and some tables on it. Also created a table space and some tables in\nit. My application inserts data into these tables in every second. The\napplication is a continuous running application. My issue is that after a 20-30\ndays continuous run ( Machine logged off 2 times), some of the files in the \n\\16384 folder of newly created table space are dropped automatically. Can \nyou tell me the reason for this.? How can we recover from this?\n \n \nI have a task that delete the data(only data stored in table)\nfrom the tables in the tablespace in certain interval. Is there any\nproblem related to this?\nCan you tell me the reason for this?\n \n \nThanks & Regards,\nVishnu S\n \n\n ***** Confidentiality Statement/Disclaimer *****This message and any attachments is intended for the sole use of the intended recipient. It may contain confidential information. Any unauthorized use, dissemination or modification is strictly prohibited. If you are not the intended recipient, please notify the sender immediately then delete it from all your systems, and do not copy, use or print. Internet communications are not secure and it is the responsibility of the recipient to make sure that it is virus/malicious code exempt.The company/sender cannot be responsible for any unauthorized alterations or modifications made to the contents. If you require any form of confirmation of the contents, please contact the company/sender. The company/sender is not liable for any errors or omissions in the content of this message.",
"msg_date": "Thu, 13 Oct 2011 18:22:32 +0530",
"msg_from": "\"Vishnu S.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tablespace files deleted automatically."
},
{
"msg_contents": "Vishnu,\n\n> I am using PostgreSQL 8.4 in windows. I have created a database and\n> some tables on it. Also created a table space and some tables in it. My\n> application inserts data into these tables in every second. The\n> application is a continuous running application. My issue is that after\n> a 20-30 days continuous run ( Machine logged off 2 times), some of the\n> files in the \\16384 folder of newly created table space are dropped\n> automatically. Can you tell me the reason for this.? How can we\n> recover from this?\n\nPostgreSQL creates and deletes files all the time. I don't understand\nwhy this is a problem.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 14 Oct 2011 11:19:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespace files deleted automatically."
},
{
"msg_contents": "On Fri, Oct 14, 2011 at 8:19 PM, Josh Berkus <[email protected]> wrote:\n> Vishnu,\n>\n>> I am using PostgreSQL 8.4 in windows. I have created a database and\n>> some tables on it. Also created a table space and some tables in it. My\n>> application inserts data into these tables in every second. The\n>> application is a continuous running application. My issue is that after\n>> a 20-30 days continuous run ( Machine logged off 2 times), some of the\n>> files in the \\16384 folder of newly created table space are dropped\n>> automatically. Can you tell me the reason for this.? How can we\n>> recover from this?\n>\n> PostgreSQL creates and deletes files all the time. I don't understand\n> why this is a problem.\n\nAlso it does not seem clear whether we are talking about a performance\nissue (what this ML is about) or a bug (lost data).\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 17 Oct 2011 14:04:52 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespace files deleted automatically."
}
] |
[
{
"msg_contents": "Hello all,\n\nI've spent some time looking through previous posts regarding\npostgres and SSD drives and have also been reading up on the\nsubject of SSDs in general elsewhere.\n\nSome quick background:\n\nWe're currently looking at changing our basic database setup as we\nmigrate away from some rather old (circa 2006 or so) hardware to\nmore current gear. Our previous setup basically consisted of two\ntypes of 1U boxes - dual-xeon with a low-end Adaptec RAID card\npaired with two U320 or U160 10K SCSI drives for database and light\nweb frontend and mail submission duties and single P4 and dual PIII\nboxes with commodity drives handling mail deliveries. Most of the\ndelivery servers have been migrated to current hardware (single and\ndual quad-core xeons, 4 WD RE3 SATA drives, FreeBSD w/ZFS), and\nwe've also moved a handful of the db/web services to one of these\nservers just to see what a large increase in RAM and CPU can do for\nus. As we sort of expected, the performance overall was better,\nbut there's a serious disk IO wall we hit when running nightly jobs\nthat are write heavy and long-running.\n\nThere are currently four db/web servers that have a combined total\nof 133GB of on-disk postgres data. Given the age of the servers,\nthe fact that each only has two drives on rather anemic RAID\ncontrollers with no cache/BBU, and none has more than 2GB RAM I\nthink it's a safe bet that consolidating this down to two modern\nservers can give us better performance, allow for growth over the\nnext few years, and let us split db duties from web/mail\nsubmission. One live db server, one standby.\n\nAnd this is where SSDs come in. We're not looking at terabytes of\ndata here, and I don't see us growing much beyond our current\ndatabase size in the next few years. SSDs are getting cheap enough\nthat this feels like a good match - we can afford CPU and RAM, we\ncan't afford some massive 24 drive monster and a pile of SAS\ndrives. The current db boxes top out at 300 IOPS, the SATA boxes\nmaybe about half that. If I can achieve 300x4 IOPS (or better,\npreferably...) on 2 or 4 SSDs, I'm pretty much sold.\n\nFrom what I've been reading here, this sounds quite possible. I\nunderstand the Intel 320 series are the best \"bargain\" option since\nthey can survive an unexpected shutdown, so I'm not going to go\nlooking beyond that - OCZ makes me a bit nervous and finding a\nclear list of which drives have the \"supercapacitor\" has been\ndifficult. I have no qualms about buying enough SSDs to mirror\nthem all. I am aware I need to have automated alerts for various\nSMART stats so I know when danger is imminent. I know I need to\nhave this box replicated even with mirrors and monitoring up the\nwazoo.\n\nHere's my remaining questions:\n\n-I'm calling our combined databases at 133GB \"small\", fair\nassumption? -Is there any chance that a server with dual quad core\nxeons, 32GB RAM, and 2 or 4 SSDs (assume mirrored) could be slower\nthan the 4 old servers described above? I'm beating those on raw\ncpu, quadrupling the amount of RAM (and consolidating said RAM),\nand going from disks that top out at 4x300 IOPS with SSDs that\nconservatively should provide 2000 IOPS. \n\n-We're also finally automating more stuff and trying to standardize\nserver configs. One tough decision we made that has paid off quite\nwell was to move to ZFS. We find the features helpful to admin\ntasks outweigh the drawbacks and RAM is cheap enough that we can\ndeal with its tendency to eat RAM. 
Is ZFS + Postgres + SSDs a bad\ncombo?\n\n-Should I even be looking at the option of ZFS on SATA or low-end\nSAS drives and ZIL and L2ARC on SSDs? Initially this intrigued me,\nbut I can't quite get my head around how the SSD-based ZIL can deal\nwith flushing the metadata out when the whole system is under any\nsort of extreme write-heavy load - I mean if the ZIL is absorbing\n2000 IOPS of metadata writes, at some point it has to get full as\nit's trying to flush this data to much slower spinning drives.\n\n-Should my standby box be the same configuration or should I look\nat actual spinning disks on that? How rough is replication on the\nunderlying storage? Would the total data written on the slave be\nless or equal to the master?\n\nAny input is appreciated. I did really mean for this to be a much\nshorter post...\n\nThanks,\n\nCharles",
"msg_date": "Fri, 14 Oct 2011 04:23:48 -0400",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSD options, small database, ZFS"
},
{
"msg_contents": "On 14-10-2011 10:23, CSS wrote:\n> -I'm calling our combined databases at 133GB \"small\", fair\n> assumption? -Is there any chance that a server with dual quad core\n> xeons, 32GB RAM, and 2 or 4 SSDs (assume mirrored) could be slower\n> than the 4 old servers described above? I'm beating those on raw\n> cpu, quadrupling the amount of RAM (and consolidating said RAM),\n> and going from disks that top out at 4x300 IOPS with SSDs that\n> conservatively should provide 2000 IOPS.\n\nWhether 133GB is small or not probably mostly depends on how much of it \nis actually touched during use. But I'd agree that it isn't a terribly \nlarge database, I'd guess a few simple SSDs would be plenty to achieve \n2000 IOPs. For lineair writes, they're still not really faster than \nnormal disks, but if that's combined with random access (either read or \nwrite) you ought to be ok.\nWe went from 15x 15k sas-disks to 6x ssd several years back in our \nMySQL-box, but since we also increased the ram from 16GB to 72GB, the \nio-load dropped so much the ssd's are normally only lightly loaded...\n\nBtw, the 5500 and 5600 Xeons are normally more efficient with a multiple \nof 6 ram-modules, so you may want to have a look at 24GB (6x4), 36GB \n(6x4+6x2) or 48GB (12x4 or 6x8) RAM.\n\nGiven the historical questions on the list, there is always a risk of \ngetting slower queries with hardware that should be much faster. For \ninstance, the huge increase in RAM may trigger a less efficient \nquery-plan. Or the disks abide by the flush-policies more correctly.\nAssuming the queries are still getting good plans and there are no such \nspecial differences, I'd agree with the assumption that its a win on \nevery count.\nOr your update to a newer OS and PostgreSQL may trigger some worse query \nplan or hardware-usage.\n\n> -Should I even be looking at the option of ZFS on SATA or low-end\n> SAS drives and ZIL and L2ARC on SSDs? Initially this intrigued me,\n> but I can't quite get my head around how the SSD-based ZIL can deal\n> with flushing the metadata out when the whole system is under any\n> sort of extreme write-heavy load - I mean if the ZIL is absorbing\n> 2000 IOPS of metadata writes, at some point it has to get full as\n> it's trying to flush this data to much slower spinning drives.\n\nA fail-safe set-up with SSD's in ZFS assumes at least 3 in total, i.e. a \npair of SSD's for ZIL and as many as you want for L2ARC. Given your \ndatabase size, 4x160GB SSD (in \"raid10\") or 2x 300GB should yield plenty \nof space. So given the same choice, I wouldn't bother with a set of \nlarge capacity sata disks and ZIL/L2ARC-SSD's, I'd just go with 4x160GB \nor 2x300GB SSD's.\n\n> -Should my standby box be the same configuration or should I look\n> at actual spinning disks on that? How rough is replication on the\n> underlying storage? Would the total data written on the slave be\n> less or equal to the master?\n\nHow bad is it for you if the performance of your database potentially \ndrops a fair bit when your slave becomes the master? If you have a \nread-mostly database, you may not even need SSD's in your master-db \n(given your amount of RAM). But honestly, I don't know the answer to \nthis question :)\n\nGood luck with your choices,\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 14 Oct 2011 12:41:34 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "Resurrecting this long-dormant thread...\n\nOn Oct 14, 2011, at 6:41 AM, Arjen van der Meijden wrote:\n\n> On 14-10-2011 10:23, CSS wrote:\n>> -I'm calling our combined databases at 133GB \"small\", fair\n>> assumption? -Is there any chance that a server with dual quad core\n>> xeons, 32GB RAM, and 2 or 4 SSDs (assume mirrored) could be slower\n>> than the 4 old servers described above? I'm beating those on raw\n>> cpu, quadrupling the amount of RAM (and consolidating said RAM),\n>> and going from disks that top out at 4x300 IOPS with SSDs that\n>> conservatively should provide 2000 IOPS.\n> \n> Whether 133GB is small or not probably mostly depends on how much of it is actually touched during use. But I'd agree that it isn't a terribly large database, I'd guess a few simple SSDs would be plenty to achieve 2000 IOPs. For lineair writes, they're still not really faster than normal disks, but if that's combined with random access (either read or write) you ought to be ok.\n> We went from 15x 15k sas-disks to 6x ssd several years back in our MySQL-box, but since we also increased the ram from 16GB to 72GB, the io-load dropped so much the ssd's are normally only lightly loaded...\n\nThanks for your input on this. It's taken some time, but I do finally have some hardware on hand (http://imgur.com/LEC5I) and as more trickles in over the coming days, I'll be putting together our first SSD-based postgres box. I have much testing to do, and I'm going to have some questions regarding that subject in another thread.\n\n> Btw, the 5500 and 5600 Xeons are normally more efficient with a multiple of 6 ram-modules, so you may want to have a look at 24GB (6x4), 36GB (6x4+6x2) or 48GB (12x4 or 6x8) RAM.\n\nThanks - I really had a hard time wrapping my head around the rules on populating the banks. If I understand it correctly, this is due to the memory controller moving from the south(?)bridge to being integrated in the CPU.\n\n> Given the historical questions on the list, there is always a risk of getting slower queries with hardware that should be much faster. For instance, the huge increase in RAM may trigger a less efficient query-plan. Or the disks abide by the flush-policies more correctly.\n> Assuming the queries are still getting good plans and there are no such special differences, I'd agree with the assumption that its a win on every count.\n> Or your update to a newer OS and PostgreSQL may trigger some worse query plan or hardware-usage.\n\nThat's an interesting point, I'd not even considered that. Is there any sort of simple documentation on the query planner that might cover how things like increased RAM could impact how a query is executed?\n\n>> -Should I even be looking at the option of ZFS on SATA or low-end\n>> SAS drives and ZIL and L2ARC on SSDs? Initially this intrigued me,\n>> but I can't quite get my head around how the SSD-based ZIL can deal\n>> with flushing the metadata out when the whole system is under any\n>> sort of extreme write-heavy load - I mean if the ZIL is absorbing\n>> 2000 IOPS of metadata writes, at some point it has to get full as\n>> it's trying to flush this data to much slower spinning drives.\n> \n> A fail-safe set-up with SSD's in ZFS assumes at least 3 in total, i.e. a pair of SSD's for ZIL and as many as you want for L2ARC. Given your database size, 4x160GB SSD (in \"raid10\") or 2x 300GB should yield plenty of space. 
So given the same choice, I wouldn't bother with a set of large capacity sata disks and ZIL/L2ARC-SSD's, I'd just go with 4x160GB or 2x300GB SSD's.\n\nWell, I've bought 4x160GB, so that's what I'll use. I will still do some tests with two SATA drives plus ZIL, just to see what happens.\n\n> \n>> -Should my standby box be the same configuration or should I look\n>> at actual spinning disks on that? How rough is replication on the\n>> underlying storage? Would the total data written on the slave be\n>> less or equal to the master?\n> \n> How bad is it for you if the performance of your database potentially drops a fair bit when your slave becomes the master? If you have a read-mostly database, you may not even need SSD's in your master-db (given your amount of RAM). But honestly, I don't know the answer to this question :)\n\nIt's complicated - during the day we're mostly looking at very scattered reads and writes, probably a bit biased towards writes. But each evening we kick off a number of jobs to pre-generate stats for more complex queries... If the job could still complete in 6-8 hours, we'd probably be OK, but if it starts clogging up our normal queries during the day, that would be a problem.\n\nThanks again for your input!\n\nCharles\n\n> \n> Good luck with your choices,\n> Best regards,\n> \n> Arjen\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 17 Nov 2011 22:44:46 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "\n\nOn 18-11-2011 4:44 CSS wrote:\n> Resurrecting this long-dormant thread...\n>> Btw, the 5500 and 5600 Xeons are normally more efficient with a multiple of 6 ram-modules, so you may want to have a look at 24GB (6x4), 36GB (6x4+6x2) or 48GB (12x4 or 6x8) RAM.\n>\n> Thanks - I really had a hard time wrapping my head around the rules on populating the banks. If I understand it correctly, this is due to the memory controller moving from the south(?)bridge to being integrated in the CPU.\n\nThat's not complete. A while back Intel introduced an integrated memory \ncontroller in the Xeon's (I think it was with the 5500). And doing so, \nthey brought NUMA to the mainstream Xeons (Opterons had been doing that \nfrom the start).\nThe memory controllers in 5500/5600 are \"triple channel\". I.e. they can \ndistribute their work over three memory channels at the same time. The \nnext generation E5 Xeon's will have \"quad channel\", so it'll be going \neven faster with module count than.\n\nWith these kinds of cpu's its normally best to have increments of \"num \nchannels\"*\"num cpu\" memory modules for optimal performance. I.e. with \none \"triple channel\" cpu, you'd increment with three at the time, with \ntwo cpu's you'd go with six.\n\nHaving said that, it will work with many different amounts of memory \nmodules, just at a (slight?) disadvantage compared to the optimal setting.\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 18 Nov 2011 08:02:14 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "On 11/17/2011 10:44 PM, CSS wrote:\n> Is there any sort of simple documentation on the query planner that \n> might cover how things like increased RAM could impact how a query is \n> executed?\n\nThere is no *simple* documentation on any part of the query planner \nthat's also accurate. Query planning is inherently complicated.\n\nI think this point wasn't quite made clearly. PostgreSQL has no idea \nhow much memory is in your system; it doesn't try to guess or detect \nit. However, when people move from one system to a larger one, they \ntend to increase some of the query planning parameters in the \npostgresql.conf to reflect the new capacity. That type of change can \ncause various types of query plan changes. Let's say your old system \nhas 16GB of RAM and you set effective_cache_size to 12GB; if you upgrade \nto a 64GB server, it seems logical to increase that value to 48GB to \nkeep the same proportions. But that will can you different plans, and \nit's possible they will be worse. There's a similar concern if you \nchange work_mem because you have more memory, because that will alter \nhow plans do things like sorting and hashing\n\nBut you don't have to make any changes. You can migrate to the new \nhardware with zero modifications to the Postgres configuration, then \nintroduce changes later.\n\nThe whole memorys speed topic is also much more complicated than any \nsimple explanation can cover. How many banks of RAM you can use \neffectively changes based on the number of CPUs and associated chipset \ntoo. Someone just sent me an explanation recently of why I was seeing \nsome strange things on my stream-scaling benchmark program. That dove \ninto a bunch of trivia around how the RAM is actually accessed on the \nmotherboard. One of the reasons I keep so many samples on that \nprogram's page is to help people navigate this whole maze, and have some \ndata points to set expectations against. See \nhttps://github.com/gregs1104/stream-scaling for the code and the samples.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 18 Nov 2011 05:09:20 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "On Fri, Nov 18, 2011 at 3:09 AM, Greg Smith <[email protected]> wrote:\n> On 11/17/2011 10:44 PM, CSS wrote:\n>>\n>> Is there any sort of simple documentation on the query planner that might\n>> cover how things like increased RAM could impact how a query is executed?\n>\n> There is no *simple* documentation on any part of the query planner that's\n> also accurate. Query planning is inherently complicated.\n>\n> I think this point wasn't quite made clearly. PostgreSQL has no idea how\n> much memory is in your system; it doesn't try to guess or detect it.\n\neffective_cache_size tells the db how much memory you have. Since you\nhave to set it, it can be anything you want, but if you've set it to\nsomething much higher on the new machine then it can affect query\nplanning.\n",
"msg_date": "Fri, 18 Nov 2011 09:17:23 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "On 18 Listopad 2011, 17:17, Scott Marlowe wrote:\n> On Fri, Nov 18, 2011 at 3:09 AM, Greg Smith <[email protected]> wrote:\n>> On 11/17/2011 10:44 PM, CSS wrote:\n>>>\n>>> Is there any sort of simple documentation on the query planner that\n>>> might\n>>> cover how things like increased RAM could impact how a query is\n>>> executed?\n>>\n>> There is no *simple* documentation on any part of the query planner\n>> that's\n>> also accurate. Query planning is inherently complicated.\n>>\n>> I think this point wasn't quite made clearly. PostgreSQL has no idea\n>> how\n>> much memory is in your system; it doesn't try to guess or detect it.\n>\n> effective_cache_size tells the db how much memory you have. Since you\n> have to set it, it can be anything you want, but if you've set it to\n> something much higher on the new machine then it can affect query\n> planning.\n\nThat's only half of the truth. effective_cache_size is used to estimate\nthe page cache hit ratio, nothing else. It influences the planning a bit\n(AFAIK it's used only to estimate a nested loop with inner index scan) but\nit has no impact on things like work_mem, maintenance_work_mem,\nwal_buffers etc.\n\nPeople often bump these settings up (especially work_mem) on new hw\nwithout properly testing the impact. PostgreSQL will happily do that\nbecause it was commanded to, but then the system starts swapping or the\nOOM killer starts shooting the processes.\n\nTomas\n\n",
"msg_date": "Fri, 18 Nov 2011 17:30:40 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "On Fri, Nov 18, 2011 at 3:39 PM, Greg Smith <[email protected]> wrote:\n\n> On 11/17/2011 10:44 PM, CSS wrote:\n>\n>> Is there any sort of simple documentation on the query planner that might\n>> cover how things like increased RAM could impact how a query is executed?\n>>\n>\n> There is no *simple* documentation on any part of the query planner that's\n> also accurate. Query planning is inherently complicated.\n>\n> I think this point wasn't quite made clearly. PostgreSQL has no idea how\n> much memory is in your system; it doesn't try to guess or detect it.\n> However, when people move from one system to a larger one, they tend to\n> increase some of the query planning parameters in the postgresql.conf to\n> reflect the new capacity. That type of change can cause various types of\n> query plan changes. Let's say your old system has 16GB of RAM and you set\n> effective_cache_size to 12GB; if you upgrade to a 64GB server, it seems\n> logical to increase that value to 48GB to keep the same proportions. But\n> that will can you different plans, and it's possible they will be worse.\n> There's a similar concern if you change work_mem because you have more\n> memory, because that will alter how plans do things like sorting and hashing\n>\n> But you don't have to make any changes. You can migrate to the new\n> hardware with zero modifications to the Postgres configuration, then\n> introduce changes later.\n>\n> The whole memorys speed topic is also much more complicated than any\n> simple explanation can cover. How many banks of RAM you can use\n> effectively changes based on the number of CPUs and associated chipset too.\n> Someone just sent me an explanation recently of why I was seeing some\n> strange things on my stream-scaling benchmark program. That dove into a\n> bunch of trivia around how the RAM is actually accessed on the motherboard.\n> One of the reasons I keep so many samples on that program's page is to\n> help people navigate this whole maze, and have some data points to set\n> expectations against. See https://github.com/gregs1104/**stream-scaling<https://github.com/gregs1104/stream-scaling>for the code and the samples.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\n>\n>\nGreg\n\nOn a slightly unrelated note, you had once (\nhttp://archives.postgresql.org/pgsql-general/2011-08/msg00944.php) said to\nlimit shared_buffers max to 8 GB on Linux and leave the rest for OS\ncaching. Does the same advice hold on FreeBSD systems too?\n\n\nAmitabh\n\nOn Fri, Nov 18, 2011 at 3:39 PM, Greg Smith <[email protected]> wrote:\nOn 11/17/2011 10:44 PM, CSS wrote:\n\nIs there any sort of simple documentation on the query planner that might cover how things like increased RAM could impact how a query is executed?\n\n\nThere is no *simple* documentation on any part of the query planner that's also accurate. Query planning is inherently complicated.\n\nI think this point wasn't quite made clearly. PostgreSQL has no idea how much memory is in your system; it doesn't try to guess or detect it. However, when people move from one system to a larger one, they tend to increase some of the query planning parameters in the postgresql.conf to reflect the new capacity. That type of change can cause various types of query plan changes. 
Let's say your old system has 16GB of RAM and you set effective_cache_size to 12GB; if you upgrade to a 64GB server, it seems logical to increase that value to 48GB to keep the same proportions. But that will can you different plans, and it's possible they will be worse. There's a similar concern if you change work_mem because you have more memory, because that will alter how plans do things like sorting and hashing\n\nBut you don't have to make any changes. You can migrate to the new hardware with zero modifications to the Postgres configuration, then introduce changes later.\n\nThe whole memorys speed topic is also much more complicated than any simple explanation can cover. How many banks of RAM you can use effectively changes based on the number of CPUs and associated chipset too. Someone just sent me an explanation recently of why I was seeing some strange things on my stream-scaling benchmark program. That dove into a bunch of trivia around how the RAM is actually accessed on the motherboard. One of the reasons I keep so many samples on that program's page is to help people navigate this whole maze, and have some data points to set expectations against. See https://github.com/gregs1104/stream-scaling for the code and the samples.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n GregOn a slightly unrelated note, you had once (http://archives.postgresql.org/pgsql-general/2011-08/msg00944.php) said to limit shared_buffers max to 8 GB on Linux and leave the rest for OS caching. Does the same advice hold on FreeBSD systems too?\nAmitabh",
"msg_date": "Fri, 18 Nov 2011 22:49:48 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "Greg Smith wrote:\n> The whole memorys speed topic is also much more complicated than any \n> simple explanation can cover. How many banks of RAM you can use \n> effectively changes based on the number of CPUs and associated chipset \n> too. Someone just sent me an explanation recently of why I was seeing \n> some strange things on my stream-scaling benchmark program. That dove \n> into a bunch of trivia around how the RAM is actually accessed on the \n> motherboard. One of the reasons I keep so many samples on that \n> program's page is to help people navigate this whole maze, and have some \n> data points to set expectations against. See \n> https://github.com/gregs1104/stream-scaling for the code and the samples.\n\nI can confirm that a Xeon E5620 CPU wants memory to be in multiples of\n3, and a dual-CPU 5620 system needs memory in multiples of 6. (I\ninstalled 12 2GB modules.)\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Tue, 22 Nov 2011 13:10:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "Amitabh Kant wrote:\n> > The whole memorys speed topic is also much more complicated than any\n> > simple explanation can cover. How many banks of RAM you can use\n> > effectively changes based on the number of CPUs and associated chipset too.\n> > Someone just sent me an explanation recently of why I was seeing some\n> > strange things on my stream-scaling benchmark program. That dove into a\n> > bunch of trivia around how the RAM is actually accessed on the motherboard.\n> > One of the reasons I keep so many samples on that program's page is to\n> > help people navigate this whole maze, and have some data points to set\n> > expectations against. See https://github.com/gregs1104/**stream-scaling<https://github.com/gregs1104/stream-scaling>for the code and the samples.\n> >\n> > --\n> > Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> > PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> >\n> >\n> >\n> Greg\n> \n> On a slightly unrelated note, you had once (\n> http://archives.postgresql.org/pgsql-general/2011-08/msg00944.php) said to\n> limit shared_buffers max to 8 GB on Linux and leave the rest for OS\n> caching. Does the same advice hold on FreeBSD systems too?\n\nHard to say. We don't know why this is happening but we are guessing it\nis the overhead of managing over one million shared buffers. Please\ntest and let us know.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Tue, 22 Nov 2011 13:11:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
},
{
"msg_contents": "On Tue, Nov 22, 2011 at 11:41 PM, Bruce Momjian <[email protected]> wrote:\n\n> Amitabh Kant wrote:> >\n> >\n> > On a slightly unrelated note, you had once (\n> > http://archives.postgresql.org/pgsql-general/2011-08/msg00944.php) said\n> to\n> > limit shared_buffers max to 8 GB on Linux and leave the rest for OS\n> > caching. Does the same advice hold on FreeBSD systems too?\n>\n> Hard to say. We don't know why this is happening but we are guessing it\n> is the overhead of managing over one million shared buffers. Please\n> test and let us know.\n>\n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n>\nI do have a FreeBSD 8.0 server running Postgresql 8.4.9 with 32 GB of RAM\n(Dual Processor, SAS HD/RAID 10 with BBU for data and SAS HD/RAID 1 for\npg_xlog). The current settings set effective cache size to 22 GB (pgtune\ngenerated values). The server sees around between 500 to 1200 TPS and chugs\nalong pretty nicely. Sadly it's in production so I am not in a position to\nrun any tests on it. Changed values for postgresql.conf are:\n==========================================================\nmaintenance_work_mem = 1GB # pg_generate_conf wizard 2010-05-09\ncheckpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-05-09\neffective_cache_size = 22GB # pg_generate_conf wizard 2010-05-09\nwork_mem = 320MB # pg_generate_conf wizard 2010-05-09\nwal_buffers = 8MB # pg_generate_conf wizard 2010-05-09\ncheckpoint_segments = 16 # pg_generate_conf wizard 2010-05-09\nshared_buffers = 3840MB # pg_generate_conf wizard 2010-05-09\n==========================================================\n\nI do have another server that is to go into production(48 GB RAM, dual\nprocessor, Intel 710 SSD for data in RAID 1, SAS HD/RAID 1 for pg_xlog).\nApart from running pgbench and bonnie++, is there something else that I\nshould be testing on the new server? Greg's stream scaling seems to be for\nlinux, so not sure if it will work in FreeBSD.\n\nAmitabh\n\nOn Tue, Nov 22, 2011 at 11:41 PM, Bruce Momjian <[email protected]> wrote:\nAmitabh Kant wrote:> >>\n> On a slightly unrelated note, you had once (\n> http://archives.postgresql.org/pgsql-general/2011-08/msg00944.php) said to\n> limit shared_buffers max to 8 GB on Linux and leave the rest for OS\n> caching. Does the same advice hold on FreeBSD systems too?\n\nHard to say. We don't know why this is happening but we are guessing it\nis the overhead of managing over one million shared buffers. Please\ntest and let us know.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n I do have a FreeBSD 8.0 server running Postgresql 8.4.9 with 32 GB of RAM (Dual Processor, SAS HD/RAID 10 with BBU for data and SAS HD/RAID 1 for pg_xlog). The current settings set effective cache size to 22 GB (pgtune generated values). The server sees around between 500 to 1200 TPS and chugs along pretty nicely. Sadly it's in production so I am not in a position to run any tests on it. 
Changed values for postgresql.conf are:\n==========================================================maintenance_work_mem = 1GB # pg_generate_conf wizard 2010-05-09checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-05-09\neffective_cache_size = 22GB # pg_generate_conf wizard 2010-05-09work_mem = 320MB # pg_generate_conf wizard 2010-05-09wal_buffers = 8MB # pg_generate_conf wizard 2010-05-09checkpoint_segments = 16 # pg_generate_conf wizard 2010-05-09\nshared_buffers = 3840MB # pg_generate_conf wizard 2010-05-09==========================================================I do have another server that is to go into production(48 GB RAM, dual processor, Intel 710 SSD for data in RAID 1, SAS HD/RAID 1 for pg_xlog). Apart from running pgbench and bonnie++, is there something else that I should be testing on the new server? Greg's stream scaling seems to be for linux, so not sure if it will work in FreeBSD.\nAmitabh",
"msg_date": "Wed, 23 Nov 2011 17:03:49 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD options, small database, ZFS"
}
] |
[
{
"msg_contents": "Hi,\n\nI stumbled upon a situation where the planner comes with a bad query \nplan, but I wanted to mention upfront that I'm using a dated PG version \nand I already see an update which mentions about improving planner \nperformance. I just wanted to check if this issue is already resolved, \nand if so, which version should I be eyeing.\n\nMy PG Version: 8.4.7\nProbably solved in: 8.4.8 / 9.0.4 ?\n\nIssue: It seems that the planner is unable to flatten the IN sub-query \ncausing the planner to take a bad plan and take ages (>2500 seconds) and \nexpects to give a 100 million row output, where in-fact it should get a \nsix row output. The same IN query, when flattened, PG gives the correct \nresult in a fraction of a second.\n\nDo let me know if this is a new case. I could try to give you the \nEXPLAIN ANALYSE outputs / approximate table sizes if required.\n\nEXISTING QUERY:\n SELECT field_b FROM large_table_a\n JOIN large_table_b USING (field_b)\n WHERE field_a IN (SELECT large_table_b.field_a\n FROM large_table_b WHERE field_b = 2673056)\n\nRECOMMENDED QUERY:\n SELECT s1.field_b FROM large_table_a\n JOIN large_table_b s1 USING (field_b)\n JOIN large_table_b s2 ON s1.field_a = s2.field_a\n WHERE s2.field_b = 2673056\n\n--\nRobins Tharakan",
"msg_date": "Mon, 17 Oct 2011 11:58:47 +0530",
"msg_from": "Robins Tharakan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad plan by Planner (Already resolved?)"
},
{
"msg_contents": "Robins Tharakan <[email protected]> wrote:\n \n> I stumbled upon a situation where the planner comes with a bad\n> query plan, but I wanted to mention upfront that I'm using a dated\n> PG version and I already see an update which mentions about\n> improving planner performance. I just wanted to check if this\n> issue is already resolved, and if so, which version should I be\n> eyeing.\n> \n> My PG Version: 8.4.7\n> Probably solved in: 8.4.8 / 9.0.4 ?\n \nFirst off, did you use pg_upgrade from an earlier major release? If\nso, be sure you've dealt with this issue:\n \nhttp://wiki.postgresql.org/wiki/20110408pg_upgrade_fix\n \nSecond, the releases you should be considering are on this page:\n \nhttp://www.postgresql.org/\n \nSo 8.4.9, 9.0.5, or 9.1.1.\n \nIf anybody recognized the issue from your description, they probably\nwould have posted by now. The fact that there has been no such post\ndoesn't necessarily mean it's not fixed -- the description is a\nlittle vague without table definitions and EXPLAIN ANALYZE output,\nso people might just not be sure. Since it's arguably in your best\ninterest to update at least to 8.4.9 anyway, the easiest way to get\nyour answer might be to do so and test it.\n \nhttp://www.postgresql.org/support/versioning\n \n> Do let me know if this is a new case. I could try to give you the \n> EXPLAIN ANALYSE outputs / approximate table sizes if required.\n \nIf you establish that the latest versions of the software still show\nthe issue, please post with more information, as described here:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Mon, 17 Oct 2011 11:02:12 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan by Planner (Already resolved?)"
},
{
"msg_contents": "Hi,\n\nI'll try to answer in-line.\n\nOn 10/17/2011 09:32 PM, Kevin Grittner wrote:\n> First off, did you use pg_upgrade from an earlier major release? If\n> so, be sure you've dealt with this issue:\nAlthough I joined recently, I doubt whether pg_upgrade was used here. \nAnd this doesn't look like the issue either. There are no data loss \nissues and this seems primarily a planner specific bug.\n\n\n> the description is a\n> little vague without table definitions and EXPLAIN ANALYZE output,\n> so people might just not be sure.\nMakes sense. Just that, I thought I shouldn't drop in a large mail, in \ncase the issue was a well-known one. Please find below the EXPLAIN \nANALYSE output. I've changed the table-names / field-names and provided \nother details as well.\n\nlarge_table_a: ~20million\nn_dead_tuples / reltuples : ~7%\nanalysed: <2 weeks\n\nlarge_table_b: ~140million\nn_dead_tuples / reltuples : ~0%\nanalysed: <2 days\n\ndefault_statistics_target: 1000\n\nfield_a: int (indexed)\nfield_b: int (indexed)\n\n\n> Since it's arguably in your best\n> interest to update at least to 8.4.9 anyway, the easiest way to get\n> your answer might be to do so and test it.\nFrankly, its slightly difficult to just try out versions. DB>1Tb and \ngetting that kind of resources to just try out versions for a query is \nnot that simple. Hope you would understand. I have the workaround \nimplemented, but just wanted to be sure that this is accommodated in a \nnewer version.\n\n\n===============\nEXISTING QUERY:\n SELECT field_a FROM large_table_a\n JOIN large_table_b USING (field_a)\n WHERE field_b IN (SELECT large_table_b.field_b\n FROM large_table_b WHERE field_a = 2673056)\n\nANALYSE:\nHash Join (cost=273247.23..6460088.89 rows=142564896 width=4)\n Hash Cond: (public.large_table_b.field_b = \npublic.large_table_b.field_b)\n -> Merge Join (cost=273112.62..5925331.24 rows=142564896 width=8)\n Merge Cond: (large_table_a.field_a = \npublic.large_table_b.field_a)\n -> Index Scan using \"PK_large_table_a\" on large_table_a \n(cost=0.00..570804.30 rows=22935395 width=4)\n -> Index Scan using \"IX_large_table_b_field_a\" on \nlarge_table_b (cost=0.00..4381499.54 rows=142564896 width=8)\n -> Hash (cost=133.32..133.32 rows=103 width=4)\n -> HashAggregate (cost=132.29..133.32 rows=103 width=4)\n -> Index Scan using \"IX_large_table_b_field_a\" on \nlarge_table_b (cost=0.00..131.87 rows=165 width=4)\n Index Cond: (field_a = 2673056)\n\n=====================\n\nALTERNATE QUERY:\n SELECT s1.field_a FROM large_table_a\n JOIN large_table_b s1 USING (field_a)\n JOIN large_table_b s2 ON s1.field_b = s2.field_b\n WHERE s2.field_a = 2673056\n\nANALYSE:\nNested Loop (cost=0.00..2368.74 rows=469 width=4) (actual \ntime=0.090..0.549 rows=6 loops=1)\n -> Nested Loop (cost=0.00..1784.06 rows=469 width=4) (actual \ntime=0.057..0.350 rows=16 loops=1)\n -> Index Scan using \"IX_large_table_b_field_a\" on \nlarge_table_b s2 (cost=0.00..131.87 rows=165 width=4) (actual \ntime=0.033..0.046 rows=6 loops=1)\n Index Cond: (field_a = 2673056)\n -> Index Scan using \"IX_large_table_b_SampleId\" on \nlarge_table_b s1 (cost=0.00..9.99 rows=2 width=8) (actual \ntime=0.037..0.047 rows=3 loops=6)\n Index Cond: (s1.field_b = s2.field_b)\n -> Index Scan using \"PK_large_table_a\" on large_table_a \n(cost=0.00..1.23 rows=1 width=4) (actual time=0.011..0.011 rows=0 loops=16)\n Index Cond: (large_table_a.field_a = s1.field_a)\nTotal runtime: 0.620 ms\n\n\n--\nRobins Tharakan",
"msg_date": "Tue, 18 Oct 2011 11:27:57 +0530",
"msg_from": "Robins Tharakan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan by Planner (Already resolved?)"
},
{
"msg_contents": "On 17/10/11 19:28, Robins Tharakan wrote:\n> Hi,\n>\n> I stumbled upon a situation where the planner comes with a bad query \n> plan, but I wanted to mention upfront that I'm using a dated PG \n> version and I already see an update which mentions about improving \n> planner performance. I just wanted to check if this issue is already \n> resolved, and if so, which version should I be eyeing.\n>\n> My PG Version: 8.4.7\n> Probably solved in: 8.4.8 / 9.0.4 ?\n>\n> Issue: It seems that the planner is unable to flatten the IN sub-query \n> causing the planner to take a bad plan and take ages (>2500 seconds) \n> and expects to give a 100 million row output, where in-fact it should \n> get a six row output. The same IN query, when flattened, PG gives the \n> correct result in a fraction of a second.\n>\n> Do let me know if this is a new case. I could try to give you the \n> EXPLAIN ANALYSE outputs / approximate table sizes if required.\n>\n> EXISTING QUERY:\n> SELECT field_b FROM large_table_a\n> JOIN large_table_b USING (field_b)\n> WHERE field_a IN (SELECT large_table_b.field_a\n> FROM large_table_b WHERE field_b = 2673056)\n>\n> RECOMMENDED QUERY:\n> SELECT s1.field_b FROM large_table_a\n> JOIN large_table_b s1 USING (field_b)\n> JOIN large_table_b s2 ON s1.field_a = s2.field_a\n> WHERE s2.field_b = 2673056\n>\n>\n\nPoor plans being generated for the subquery variant above were \nspecifically targeted in 8.4.9. It may be that you don't need the \nworkaround in that (or corresponding later) versions - 9.0.5, 9.1.0.\n\nRegards\n\nMark\n",
"msg_date": "Tue, 18 Oct 2011 19:18:13 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan by Planner (Already resolved?)"
}
] |
[
{
"msg_contents": "Hi,\n\nI've a postgres 9.1 database used for map generating ( tiles ).\nThe system has 24Go RAM and 5 processors.\nI'm using geoserver to generate the tiles.\n\nMy data used 8486 MB => psql -d gis -c \"SELECT\npg_size_pretty(pg_database_size('gis'))\"\n\nI've carefully indexes the table by the \"the_geom\" column.\n\nHere is my database config :\n\n--> change :\n--> listen_addresses = '*'\n--> max_connections = 50\n--> tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;\n--> shared_buffers = 1024MB # 10% of available RAM\n--> work_mem = 256MB # min 64kB\n--> maintenance_work_mem = 256MB # min 1MB\n--> effective_cache_size = 5120MB\n--> autovacuum = off\n\nsudo nano /etc/sysctl.conf\n--> kernel.shmmax=5368709120\n--> kernel.shmall=5368709120\n\nI wanted to have your opinion about this config ? What can I do to optimize\nthe performance ?\n\nThank you,\n\nHi,I've a postgres 9.1 database used for map generating ( tiles ).The system has 24Go RAM and 5 processors.I'm using geoserver to generate the tiles.My data used 8486 MB => psql -d gis -c \"SELECT pg_size_pretty(pg_database_size('gis'))\"\nI've carefully indexes the table by the \"the_geom\" column.Here is my database config :--> change : --> listen_addresses = '*'--> max_connections = 50--> tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;\n--> shared_buffers = 1024MB # 10% of available RAM --> work_mem = 256MB # min 64kB--> maintenance_work_mem = 256MB # min 1MB--> effective_cache_size = 5120MB\n--> autovacuum = offsudo nano /etc/sysctl.conf--> kernel.shmmax=5368709120--> kernel.shmall=5368709120I wanted to have your opinion about this config ? What can I do to optimize the performance ?\nThank you,",
"msg_date": "Mon, 17 Oct 2011 11:48:35 +0200",
"msg_from": "Micka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize the database performance"
},
{
"msg_contents": "hello Micha,\n\ni think that noone can tell you much without more information about your\nsystem. roughly i would say that you could change the following parameters:\nshared_buffers = 1024MB -> 6GB\nwork_mem = 256MB -> 30-50 MB \neffective_cache_size = 5120MB -> 16GB (depends on whether its a dedicated db\nserver or not)\nkernel.shmmax=5368709120 : now its 5GB, probably you need more here, i would\nput 50% of ram\nkernel.shmall=5368709120 you need less here. check he shmsetup.sh script for\nmore info\nautovacuum off -> on\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Optimize-the-database-performance-tp4909314p4909422.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 17 Oct 2011 03:39:26 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize the database performance"
},
{
"msg_contents": "On 10/17/2011 04:48 AM, Micka wrote:\n> Hi,\n>\n> I've a postgres 9.1 database used for map generating ( tiles ).\n> The system has 24Go RAM and 5 processors.\n> I'm using geoserver to generate the tiles.\n>\n> My data used 8486 MB => psql -d gis -c \"SELECT pg_size_pretty(pg_database_size('gis'))\"\n>\n> I've carefully indexes the table by the \"the_geom\" column.\n>\n> Here is my database config :\n>\n> --> change :\n> --> listen_addresses = '*'\n> --> max_connections = 50\n> --> tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;\n> --> shared_buffers = 1024MB # 10% of available RAM\n> --> work_mem = 256MB # min 64kB\n> --> maintenance_work_mem = 256MB # min 1MB\n> --> effective_cache_size = 5120MB\n> --> autovacuum = off\n>\n> sudo nano /etc/sysctl.conf\n> --> kernel.shmmax=5368709120\n> --> kernel.shmall=5368709120\n>\n> I wanted to have your opinion about this config ? What can I do to optimize the performance ?\n>\n> Thank you,\n>\n\nYeah... We're gonna need some more details. Whats slow? Are you CPU bound or IO bound? How many concurrent db connections? What does vmstat look like? And 10% of 24 gig is 2.4 gig, not 1 gig.\n\nOr is this box doing something else. I noticeeffective_cache_size is only 5 gig, so you must be doing other things on this box.\n\n> --> autovacuum = off\n\nAre you vacuuming by hand!? If not this is a \"really bad idea\" (tm)(c)(r)\n\n-Andy\n",
"msg_date": "Mon, 17 Oct 2011 13:35:09 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize the database performance"
},
{
"msg_contents": "2011/10/17 Micka <[email protected]>:\n> Hi,\n>\n> I've a postgres 9.1 database used for map generating ( tiles ).\n> The system has 24Go RAM and 5 processors.\n> I'm using geoserver to generate the tiles.\n>\n> My data used 8486 MB => psql -d gis -c \"SELECT\n> pg_size_pretty(pg_database_size('gis'))\"\n>\n> I've carefully indexes the table by the \"the_geom\" column.\n>\n> Here is my database config :\n>\n> --> change :\n> --> listen_addresses = '*'\n> --> max_connections = 50\n> --> tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;\n> --> shared_buffers = 1024MB # 10% of available RAM\n> --> work_mem = 256MB # min 64kB\n> --> maintenance_work_mem = 256MB # min 1MB\n> --> effective_cache_size = 5120MB\n> --> autovacuum = off\n>\n> sudo nano /etc/sysctl.conf\n> --> kernel.shmmax=5368709120\n> --> kernel.shmall=5368709120\n>\n> I wanted to have your opinion about this config ? What can I do to optimize\n> the performance ?\n>\n\nas other poeple said, you need to give more information on your\nhardware and usage of it to get more accurate answers.\n\nAssuming that all your db can stay in RAM, I would start with\nrandom_page_cost = 1 and seq_page_cost = 1.\n\neffective_cache_size should be the sum of all cache space (linux and\npostgresql), any number larger than 10GB should be fine, there is no\nrisk other than bad planning to set it too large (and it won't affect\nyou here I think)\n\nYou have memory available? you can increase the maintenance_work_mem\n(and you probably want to do that if you have a maintenance window\nwhen you do the vacuum manually - why not autovacum ?)\n\nFor shared_buffers, you should use pg_buffercache to see what's\nhappening and maybe change the value to something higher (2GB, 4GB,\n...) . You can also just test and find the best size for your\napplication workload.\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Mon, 17 Oct 2011 22:31:13 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize the database performance"
}
] |
[
{
"msg_contents": "Dear Anybody!\n\nI use pgr to store records. But the characterisitc of the record traffic are\nspecial. For example 50 of them arrived in one sec contignously trough weeks\nand aligned interally trough tables. \nTo absorb this traffic I put the pgr database to ramdisk (fast as possible).\nBut after more day work the pgr slowing down. \nWhat is important think for this task I do not need any tranasction. So the\nCOMMIT and ROLLBACK feature is useless. \nThe question is how I minimize the rollback activity to free resoureces?\n\nThanks for any idea.\n \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4913425.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 18 Oct 2011 05:09:25 -0700 (PDT)",
"msg_from": "kzsolt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Heavy contgnous load"
},
{
"msg_contents": "kzsolt <[email protected]> wrote:\n \n> I use pgr to store records. But the characterisitc of the record\n> traffic are special. For example 50 of them arrived in one sec\n> contignously trough weeks and aligned interally trough tables. \n> To absorb this traffic I put the pgr database to ramdisk (fast as\n> possible).\n \nCould you just stream it to disk files on the ramdisk and COPY it in\nto PostgreSQL in batches?\n \n> But after more day work the pgr slowing down. \n \nWe'd need a lot more information to guess why.\n \n> What is important think for this task I do not need any\n> tranasction. So the COMMIT and ROLLBACK feature is useless. \n \nBatching multiple inserts into a single transaction can *speed* data\nloads.\n \n> The question is how I minimize the rollback activity to free\n> resoureces?\n \nRollback activity? What rollback activity? When you're doing what?\nWhat is the exact message?\n \n-Kevin\n",
"msg_date": "Tue, 18 Oct 2011 15:40:26 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy contgnous load"
},
{
"msg_contents": "On 10/18/2011 08:09 PM, kzsolt wrote:\n\n> What is important think for this task I do not need any tranasction. So the\n> COMMIT and ROLLBACK feature is useless.\n> The question is how I minimize the rollback activity to free resoureces?\n\nActually, you do need transactions, because they're what prevents your \ndatabase from being corrupted or left in a half-updated state if/when \nthe database server loses power, crashes, etc.\n\nPresumably when you say \"rollback activity\" you mean the overheads \ninvolved in supporting transactional, atomic updates? If so, there isn't \nmuch you can do in an INSERT-only database except try to have as few \nindexes as possible and do your inserts inside transactions in batches, \nrather than one-by-one as individual statements.\n\nConsider logging to a flat file to accumulate data, then COPYing data in \nbatches into PostgreSQL.\n\nAn alternative would be to write your data to an unlogged table \n(PostgreSQL 9.1+ only) then `INSERT INTO ... SELECT ...' it into the \nmain table(s) in batches. Unlogged tables avoid most of the overheads of \nthe write-ahead log crash safety, but they do that by NOT BEING CRASH \nSAFE. If your server, or the PostgreSQL process, crashes then unlogged \ntables will be ERASED. If you can afford to lose a little data in this \ncase, you can use unlogged tables as a staging area.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 19 Oct 2011 08:56:43 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy contgnous load"
},
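A rough sketch of the staging approach suggested above, for PostgreSQL 9.1 or later; the table names are hypothetical, and anything still sitting in the unlogged table is lost if the server crashes before the batch move runs:

    -- unlogged: no WAL overhead, but emptied after a crash
    CREATE UNLOGGED TABLE data_staging (LIKE data INCLUDING DEFAULTS);

    -- single-row inserts go into data_staging; a periodic job then moves
    -- everything into the durable table in one batch (9.1+ writable CTE):
    WITH moved AS (
        DELETE FROM data_staging RETURNING *
    )
    INSERT INTO data SELECT * FROM moved;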
{
"msg_contents": "\"try to have as few indexes as possible and do your inserts inside\ntransactions in batches, rather than one-by-one as individual statements. \"\nThat is the main problem. I use now few index as possible. Unfortunately the\none-by-one INSERT is nature of or system. To join (batch) inserts is require\nspacial cache with inactivity timeout. But this timout are make more trouble\nfor our application. The flat file has same problem. \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4919006.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 19 Oct 2011 11:55:35 -0700 (PDT)",
"msg_from": "kzsolt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy contgnous load"
},
{
"msg_contents": "\nKevin Grittner wrote:\n> Rollback activity? What rollback activity? When you're doing what?\n> What is the exact message?\nI mean here some kind of option to save reources. \nFor example mysql has table (storage) type where no transaction support\n(rollback) in. This make the all server faster and use less resources. \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4919050.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 19 Oct 2011 12:04:29 -0700 (PDT)",
"msg_from": "kzsolt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy contgnous load"
},
{
"msg_contents": "On 10/20/2011 02:55 AM, kzsolt wrote:\n> \"try to have as few indexes as possible and do your inserts inside\n> transactions in batches, rather than one-by-one as individual statements. \"\n> That is the main problem. I use now few index as possible. Unfortunately the\n> one-by-one INSERT is nature of or system. To join (batch) inserts is require\n> spacial cache with inactivity timeout. But this timout are make more trouble\n> for our application. The flat file has same problem.\n\nWell, then you'll have to use an unlogged table (9.1 or newer only) to \ninsert into, then periodically copy rows from the unlogged table into \nthe main table using something like PgAgent to schedule the copy.\n\nAn unlogged table is a tiny bit more like MySQL's MyISAM tables in that \nit doesn't have any crash recovery features. It still supports \ntransactions, of course, and you won't find any way to remove \ntransaction support in PostgreSQL. One of the reasons MySQL has \nhistorically had so many bizarre behaviours, like (by default) writing \ninvalid data as NULL, inserting zeroes for invalid dates, etc is because \nMyISAM can't roll back transactions when it discovers a problem partway \nthrough, so it has to finish the job badly rather than error out and \nleave the job half-completed.\n\nIf you really need absolutely maximum insert performance, you should \njust use a flat file or a different database system. Relational \ndatabases like PostgreSQL are designed for reliable concurrency, crash \nsafety, fast querying, and data integrity, and they provide those at the \ncost of slower data insertion among other things.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 20 Oct 2011 11:44:36 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy contgnous load"
},
{
"msg_contents": "On Tue, Oct 18, 2011 at 5:09 AM, kzsolt <[email protected]> wrote:\n> Dear Anybody!\n>\n> I use pgr to store records. But the characterisitc of the record traffic are\n> special. For example 50 of them arrived in one sec contignously trough weeks\n> and aligned interally trough tables.\n\nWhat happens if the database has a hiccup and can't accept records for\na few seconds or minutes? Do the processes that insert the records\njust buffer them up, or drop them, or crash?\n\n\n> To absorb this traffic I put the pgr database to ramdisk (fast as possible).\n> But after more day work the pgr slowing down.\n> What is important think for this task I do not need any tranasction. So the\n> COMMIT and ROLLBACK feature is useless.\n> The question is how I minimize the rollback activity to free resoureces?\n\nIf you really don't need transactions, then you are incurring an awful\nlot of overhead by using a transactional database.\n\nIn any case, this seem like a case for synchronous_commit=off. If the\ndatabase crashes, you might be missing a few seconds of recent\ntransaction when it comes back up. But, if the records are still\nbeing generated while the database is down, you are losing those ones\nanyway, so losing a few more retroactively may not be a big deal.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 20 Oct 2011 08:55:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy contgnous load"
},
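For reference, synchronous_commit can be turned off globally in postgresql.conf or more selectively; the database name below is a placeholder:

    -- per database:
    ALTER DATABASE eventdb SET synchronous_commit = off;
    -- or only for the inserting session:
    SET synchronous_commit = off;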
{
"msg_contents": "So guys, lot of thank you for all of the explanation and ideas!\n\n\nJeff Janes wrote:\n> What happens if the database has a hiccup and can't accept records for\n> a few seconds or minutes?\n\nCraig Ringer wrote:\n> If you really need absolutely maximum insert performance, you should \n> just use a flat file or a different database system.\nThis need some explanation:\nJust for easy explanation our system constructed by Pmodules called PMs. The\ntransport between PMs is a special reliable protocol with elastic high\ncapacity buffers. This absorb the peaks of asynchrnous event storm.\nThe related (small) part of our system called A_PM. This A_PM accept\nasynchrnous event from many (can be more dozen) other PMs, format it and\nstore onto record of SQL table. \nAfter the record inserted all must be open for complex querys requested by 3\nor more PM. \nOthersides we need to provide common public access to this records (and to\nmany other functions). This is why we use SQL database server for. But the\nrequirement is the user can be select freely the vendor of database server\nfrom four database server set (one of is PGR). To implement this we have\ntwin interface. \n\nThe synchronous_commit=off and unlogged table are good idea. I try it. \nThe crash make mouch more trouble for our system than trouble generated by\nloss of 200-300 record...\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4922748.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 20 Oct 2011 13:10:11 -0700 (PDT)",
"msg_from": "kzsolt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy contgnous load"
},
{
"msg_contents": "Looks like I found more magic.\n\nMy table is: each record near 1kbyte, contain dozen col some text some\nnumeric, some of the numeric columns are indexed. The database located at\nramdisk (tmpfs) ((I hope)). The table is contignously filled with rows.\n\nIf the table has less than 4Mrec then looks like everythink is fine.\nBut near at 6Mrec the CPU load is go to very high and even the COUNT(*) need\n8sec executing time (1sec/Mrec). The insert is slowing down too. \nBut more stange if I try to search a record by indexed col then the server\nbring up it very quick!\n\nMy server config is:\n/\nmax_connections = 24\nshared_buffers = 256MB\nlog_destination = 'stderr' # Valid values are combinations of\nlogging_collector = true\nsilent_mode = on # Run server silently.\nlog_line_prefix = '%t %d %u '\ndatestyle = 'iso, ymd'\nlc_messages = 'hu_HU' # locale for system error message\nlc_monetary = 'hu_HU' # locale for monetary formatting\nlc_numeric = 'hu_HU' # locale for number formatting\nlc_time = 'hu_HU' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.hungarian'\nport = 9033\nunix_socket_directory = standard disk\nlog_directory = standard disk\nlog_filename = 'sqld.log'\neffective_cache_size = 8MB\ncheckpoint_segments = 16\nsynchronous_commit = off\n/\n\nAny idea how it possible to increase the performance?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4965371.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 4 Nov 2011 14:35:57 -0700 (PDT)",
"msg_from": "kzsolt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy contgnous load"
},
{
"msg_contents": "I try to found out more information. Looks like the COUNT(*) is not the\nstrongest part of pgr therfore I do a worakround. After this I have the\nfolwing result:\n\nBelow the 1Mrec the row insert time is ~23msec. Above the 7Mrec the insert\ntime is ~180msec. \n \nI belive I use the fastest index type (default). \n\nSo any idea to make postgres faster at higher number of records?\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4978893.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 9 Nov 2011 11:48:13 -0800 (PST)",
"msg_from": "kzsolt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy contgnous load"
},
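A common workaround for slow COUNT(*) on a large, append-mostly table is to read the planner's row estimate instead; it is only as fresh as the last VACUUM or ANALYZE, and the table name here is a placeholder:

    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'data' AND relkind = 'r';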
{
"msg_contents": "Hello\n\n2011/11/9 kzsolt <[email protected]>:\n> I try to found out more information. Looks like the COUNT(*) is not the\n> strongest part of pgr therfore I do a worakround. After this I have the\n> folwing result:\n>\n> Below the 1Mrec the row insert time is ~23msec. Above the 7Mrec the insert\n> time is ~180msec.\n>\n\n* use a copy statement\n* use a explicit transaction\n* if you can disable triggers (and RI)\n* if you cannot and use a RI, unsures a indexes on PK and FK\n\nRegards\n\nPavel Stehule\n\n\n> I belive I use the fastest index type (default).\n>\n> So any idea to make postgres faster at higher number of records?\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Heavy-contgnous-load-tp4913425p4978893.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Sun, 13 Nov 2011 09:57:31 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy contgnous load"
}
] |
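A short illustration of Pavel's last two points, with hypothetical object names: index the referencing (FK) columns, and switch user triggers off for the duration of a bulk load if the application allows it:

    -- deletes/updates on the referenced table must scan this table
    -- if the referencing column has no index:
    CREATE INDEX data_device_id_idx ON data (device_id);

    ALTER TABLE data DISABLE TRIGGER USER;
    -- ... bulk load ...
    ALTER TABLE data ENABLE TRIGGER USER;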
[
{
"msg_contents": "Robins Tharakan wrote:\n \n> I'll try to answer in-line.\n \nThanks; that's the preferred style on PostgreSQL lists.\n \n> On 10/17/2011 09:32 PM, Kevin Grittner wrote:\n>> First off, did you use pg_upgrade from an earlier major release?\n>> If so, be sure you've dealt with this issue:\n> Although I joined recently, I doubt whether pg_upgrade was used\n> here. And this doesn't look like the issue either. There are no\n> data loss issues and this seems primarily a planner specific bug.\n \nThe data loss doesn't happen until transaction ID wraparound -- so if\nyou had used pg_upgrade to get to where you are, and not used the\nrecovery techniques I pointed to, you could suddenly start losing\ndata at a time long after the conversion. Since you're on a version\nwhich came out before that was discovered I thought it would be\nfriendly to try to save you that trouble; but if you're sure you're\nnot in a vulnerable state, that's great.\n \n>> Since it's arguably in your best interest to update at least to\n>> 8.4.9 anyway, the easiest way to get your answer might be to do so\n>> and test it.\n \n> Frankly, its slightly difficult to just try out versions. DB>1Tb\n> and getting that kind of resources to just try out versions for a\n> query is not that simple. Hope you would understand.\n \nThat I don't understand. We have found that it takes no longer to\nupgrade to a new minor release on a 2.5 TB database cluster than on a\ntiny 300 MB cluster. (With pg_upgrade, it only takes five or ten\nminutes of down time to upgrade a new *major* release on a multi-TB\ndatabase, but that's not what we're talking about to get to 9.4.9.)\n \nWe build from source, and we include the minor release number in the\nprefix for the build, so we can have both old and new software\ninstalled side-by-side. The path for the client-side executables we\ndo through a symbolic link, so we can switch that painlessly. And we\nassign the prefix used for the server to an environment variable in\nour services script. So here is our process:\n \n - Build and install the new minor release.\n - Change the symlink to use it for clients (like pg_dump and psql).\n - Change the service script line that sets the prefix to point to\n the new minor release.\n - Run the service script with \"stop\" and then run the service script\n with \"start\". (Unless your service script does a restart by using\n stop and start, don't run it with \"restart\", because a PostgreSQL\n restart won't pick up the new executables.)\n \nThere is literally no more down time than it takes to stop and start\nthe database service. Our client software retries on a broken\nconnection, so we can even do this while users are running and they\njust get a clock for a few seconds; but we usually prefer not to\ncause even that much disruption, at least during normal business\nhours. We have enough hardware to load balance off of one machine at\na time to do this without interruption of service.\n \nThere are sometimes bugs fixed in a minor release which require\ncleanup of possibly damaged data, like what I mentioned above. You\nmay need to vacuum or reindex something to recover from the damage\ncaused by the now-fixed bug, but the alternative is to continue to\nrun with the damage. 
I don't understand why someone would knowingly\nchoose that.\n \nReally, it is worthwhile to keep up on minor releases.\n \nhttp://www.postgresql.org/support/versioning\n \nPerhaps the difference is that you feel I'm suggesting that you\nupgrade in order to see if performance improves. I'm not. I'm\nsuggesting that you upgrade to get the bug fixes and security fixes. \nAfter the upgrade, it would make sense to see if it also fixed your\nperformance problem.\n \n> I have the workaround implemented, but just wanted to be sure that\n> this is accommodated in a newer version.\n \nYou've already gotten feedback on that; I don't have anything to add\nthere.\n \n-Kevin\n",
"msg_date": "Tue, 18 Oct 2011 07:46:18 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan by Planner (Already resolved?)"
},
{
"msg_contents": "Thanks Kevin,\n\nThat's a pretty neat way to managing (at least) minor upgrades. Like I \nsaid, this place is new, and so although I'm quite positive about \nupgrading to the latest, I should probably take things one-at-a-time and \nbring in this idea of implementing regular updates sometime in the future.\n\nAs for the query, I tried the same query on an alternate machine, and \nthis is how EXPLAIN ANALYZE came up. Its much faster than the earlier \nslow query, but nowhere near the performance of the second query shown \nearlier. Do I have to live with that until this is implemented (if I am \nonly doing a minor version upgrade) or am I missing something else here?\n\nI've provided the EXPLAIN ANALYZE as well as the web-link for a pretty \noutput of the EXPLAIN ANALYZE for your review.\n\n\nORIGINAL QUERY (on PostgreSQL 8.4.9):\nhttp://explain.depesz.com/s/bTm\n\nEXPLAIN ANALYZE SELECT field_a FROM large_table_a JOIN large_table_b \nUSING (field_a) WHERE field_b IN (SELECT large_table_b.field_b FROM \nlarge_table_b WHERE field_a = 2673056) ;\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=132.97..194243.54 rows=156031 width=4) (actual \ntime=6.612..43179.524 rows=2120 loops=1)\n -> Nested Loop (cost=132.97..1107.63 rows=156031 width=4) (actual \ntime=6.576..29122.017 rows=6938 loops=1)\n -> HashAggregate (cost=132.97..133.96 rows=99 width=4) \n(actual time=6.543..12.726 rows=2173 loops=1)\n -> Index Scan using \"IX_large_table_b_SigId\" on \nlarge_table_b (cost=0.00..132.56 rows=164 width=4) (actual \ntime=0.029..3.425 rows=2173 loops=1)\n Index Cond: (field_a = 2673056)\n -> Index Scan using \"IX_large_table_b_field_b\" on \nlarge_table_b (cost=0.00..9.81 rows=2 width=8) (actual \ntime=6.732..13.384 rows=3 loops=2173)\n Index Cond: (public.large_table_b.field_b = \npublic.large_table_b.field_b)\n -> Index Scan using \"PK_large_table_a\" on large_table_a \n(cost=0.00..1.23 rows=1 width=4) (actual time=2.021..2.021 rows=0 \nloops=6938)\n Index Cond: (large_table_a.field_a = public.large_table_b.field_a)\n Total runtime: 43182.975 ms\n\n\n\n\nOPTIMIZED QUERY (on PostgreSQL 8.4.7):\nhttp://explain.depesz.com/s/emO\n\nEXPLAIN ANALYZE SELECT s1.field_a FROM large_table_a JOIN large_table_b \ns1 USING (field_a) JOIN large_table_b s2 ON s1.field_b = s2.field_b \nWHERE s2.field_a = 2673056;\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2356.98 rows=494 width=4) (actual \ntime=0.086..96.056 rows=2120 loops=1)\n -> Nested Loop (cost=0.00..1745.51 rows=494 width=4) (actual \ntime=0.051..48.900 rows=6938 loops=1)\n -> Index Scan using \"IX_large_table_b_SigId\" on large_table_b \ns2 (cost=0.00..132.56 rows=164 width=4) (actual time=0.028..3.411 \nrows=2173 loops=1)\n Index Cond: (field_a = 2673056)\n -> Index Scan using \"IX_large_table_b_field_b\" on \nlarge_table_b s1 (cost=0.00..9.81 rows=2 width=8) (actual \ntime=0.007..0.012 rows=3 loops=2173)\n Index Cond: (s1.field_b = s2.field_b)\n -> Index Scan using \"PK_large_table_a\" on large_table_a \n(cost=0.00..1.23 rows=1 width=4) (actual time=0.004..0.004 rows=0 \nloops=6938)\n Index Cond: (large_table_a.field_a = s1.field_a)\n Total runtime: 98.165 ms\n\n\n--\nRobins Tharakan\n\nOn 10/18/2011 06:16 PM, Kevin Grittner wrote:\n> We build from source, and we include 
the minor release number in the\n> prefix for the build, so we can have both old and new software\n> installed side-by-side. The path for the client-side executables we\n> do through a symbolic link, so we can switch that painlessly. And we\n> assign the prefix used for the server to an environment variable in\n> our services script. So here is our process:\n>\n> - Build and install the new minor release.\n> - Change the symlink to use it for clients (like pg_dump and psql).\n> - Change the service script line that sets the prefix to point to\n> the new minor release.\n> - Run the service script with \"stop\" and then run the service script\n> with \"start\". (Unless your service script does a restart by using\n> stop and start, don't run it with \"restart\", because a PostgreSQL\n> restart won't pick up the new executables.)\n>\n> There is literally no more down time than it takes to stop and start\n> the database service. Our client software retries on a broken\n> connection, so we can even do this while users are running and they\n> just get a clock for a few seconds; but we usually prefer not to\n> cause even that much disruption, at least during normal business\n> hours. We have enough hardware to load balance off of one machine at\n> a time to do this without interruption of service.\n>\n> There are sometimes bugs fixed in a minor release which require\n> cleanup of possibly damaged data, like what I mentioned above. You\n> may need to vacuum or reindex something to recover from the damage\n> caused by the now-fixed bug, but the alternative is to continue to\n> run with the damage. I don't understand why someone would knowingly\n> choose that.\n>\n> Really, it is worthwhile to keep up on minor releases.\n> -Kevin",
"msg_date": "Tue, 25 Oct 2011 16:23:34 +0530",
"msg_from": "Robins Tharakan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan by Planner (Already resolved?)"
},
{
"msg_contents": "Robins Tharakan <[email protected]> writes:\n> ORIGINAL QUERY (on PostgreSQL 8.4.9):\n> http://explain.depesz.com/s/bTm\n\n> EXPLAIN ANALYZE SELECT field_a FROM large_table_a JOIN large_table_b \n> USING (field_a) WHERE field_b IN (SELECT large_table_b.field_b FROM \n> large_table_b WHERE field_a = 2673056) ;\n\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=132.97..194243.54 rows=156031 width=4) (actual \n> time=6.612..43179.524 rows=2120 loops=1)\n> -> Nested Loop (cost=132.97..1107.63 rows=156031 width=4) (actual \n> time=6.576..29122.017 rows=6938 loops=1)\n> -> HashAggregate (cost=132.97..133.96 rows=99 width=4) \n> (actual time=6.543..12.726 rows=2173 loops=1)\n> -> Index Scan using \"IX_large_table_b_SigId\" on \n> large_table_b (cost=0.00..132.56 rows=164 width=4) (actual \n> time=0.029..3.425 rows=2173 loops=1)\n> Index Cond: (field_a = 2673056)\n> -> Index Scan using \"IX_large_table_b_field_b\" on \n> large_table_b (cost=0.00..9.81 rows=2 width=8) (actual \n> time=6.732..13.384 rows=3 loops=2173)\n> Index Cond: (public.large_table_b.field_b = \n> public.large_table_b.field_b)\n> -> Index Scan using \"PK_large_table_a\" on large_table_a \n> (cost=0.00..1.23 rows=1 width=4) (actual time=2.021..2.021 rows=0 \n> loops=6938)\n> Index Cond: (large_table_a.field_a = public.large_table_b.field_a)\n> Total runtime: 43182.975 ms\n\n\n> OPTIMIZED QUERY (on PostgreSQL 8.4.7):\n> http://explain.depesz.com/s/emO\n\n> EXPLAIN ANALYZE SELECT s1.field_a FROM large_table_a JOIN large_table_b \n> s1 USING (field_a) JOIN large_table_b s2 ON s1.field_b = s2.field_b \n> WHERE s2.field_a = 2673056;\n\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..2356.98 rows=494 width=4) (actual \n> time=0.086..96.056 rows=2120 loops=1)\n> -> Nested Loop (cost=0.00..1745.51 rows=494 width=4) (actual \n> time=0.051..48.900 rows=6938 loops=1)\n> -> Index Scan using \"IX_large_table_b_SigId\" on large_table_b \n> s2 (cost=0.00..132.56 rows=164 width=4) (actual time=0.028..3.411 \n> rows=2173 loops=1)\n> Index Cond: (field_a = 2673056)\n> -> Index Scan using \"IX_large_table_b_field_b\" on \n> large_table_b s1 (cost=0.00..9.81 rows=2 width=8) (actual \n> time=0.007..0.012 rows=3 loops=2173)\n> Index Cond: (s1.field_b = s2.field_b)\n> -> Index Scan using \"PK_large_table_a\" on large_table_a \n> (cost=0.00..1.23 rows=1 width=4) (actual time=0.004..0.004 rows=0 \n> loops=6938)\n> Index Cond: (large_table_a.field_a = s1.field_a)\n> Total runtime: 98.165 ms\n\n\nI suspect that you're just fooling yourself here, and the \"optimized\"\nquery is no such thing. Those plans are identical except for the\ninsertion of the HashAggregate step, which in itself adds less than\n10msec to the runtime, and we can see it's not eliminating any rows\neither. So why does the second one run so much faster? I can think\nof three theories:\n\n1. The tables are horrendously bloated on the first database, so that\nmany more pages have to be touched to get the same number of tuples.\nThis would likely indicate an improper autovacuum configuration.\n\n2. 
You failed to account for caching effects, ie the first example\nis being run \"cold\" and has to actually read everything from disk,\nwhereas the second example has everything it needs already in RAM.\nIn that case the speed differential is quite illusory.\n\n3. The HashAggregate would likely spit out the rows in a completely\ndifferent order than it received them. If scanning large_table_b in\nthe order of IX_large_table_b_SigId happens to yield field_b values\nthat are very well ordered, it's possible that locality of access in\nthe other indexscans would be enough better in the second plan to\naccount for the speedup. This seems the least likely theory, though.\n\nBTW, how come is it that \"SELECT large_table_b.field_b FROM \nlarge_table_b WHERE field_a = 2673056\" produces no duplicate field_b\nvalues? Is that just luck? Is there a unique constraint on the table\nthat implies it will happen?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Oct 2011 15:57:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan by Planner (Already resolved?) "
},
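If the bad rowcount estimates persist after a fresh ANALYZE, raising the per-column statistics target as Tom suggests looks roughly like this (using the anonymized names from the thread; 500 is an arbitrary example value):

    ALTER TABLE large_table_b ALTER COLUMN field_a SET STATISTICS 500;
    ALTER TABLE large_table_b ALTER COLUMN field_b SET STATISTICS 500;
    ANALYZE large_table_b;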
{
"msg_contents": "Thanks Tom!\n\nRegret the delay in reply, but two of the three guesses were spot-on and \nresolved the doubt. 8.4.9 does take care of this case very well.\n\nOn 10/27/2011 01:27 AM, Tom Lane wrote:\n> I suspect that you're just fooling yourself here, and the \"optimized\"\n> query is no such thing.\n:) I actually meant 'faster' query, but well...\n\n\n> 1. The tables are horrendously bloated on the first database, so that\n> many more pages have to be touched to get the same number of tuples.\n> This would likely indicate an improper autovacuum configuration.\nI believe you've nailed it pretty accurately. The tables are \nhorrendously bloated and I may need to tune AutoVacuum to be much more \naggressive than it is. I did see that HashAggregate makes only a minor \ndifference, but what didn't strike is that the slowness could be bloat.\n\n\n> 2. You failed to account for caching effects, ie the first example\n> is being run \"cold\" and has to actually read everything from disk,\n> whereas the second example has everything it needs already in RAM.\n> In that case the speed differential is quite illusory.\nOn hindsight, this was a miss. Should have warmed the caches before \nposting. Re-running this query multiple times, brought out the result in \n~100ms.\n\n> BTW, how come is it that \"SELECT large_table_b.field_b FROM\n> large_table_b WHERE field_a = 2673056\" produces no duplicate field_b\n> values? Is that just luck? Is there a unique constraint on the table\n> that implies it will happen?\nIts just luck. Sometimes the corresponding values genuinely don't exist \nin the other table, so that's ok.\n\n--\nRobins Tharakan",
"msg_date": "Sat, 29 Oct 2011 20:12:28 +0530",
"msg_from": "Robins Tharakan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan by Planner (Already resolved?)"
}
] |
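To check the bloat theory, the statistics views give a quick first look at dead tuples and autovacuum activity, and VACUUM VERBOSE reports how much space is reclaimable per table:

    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

    VACUUM VERBOSE ANALYZE large_table_b;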
[
{
"msg_contents": "Hi everybody,\nI googled a bit around and also checked the mailing lists but I still \ncan't make an idea. We plan to use postgres 9 and the Cluster Database \nReplica.\nMy colleagues are asking how many Cluster Databases (initdb) can I \ncreate and run on a single server. I mean, supposed my server has the \nresources, can I create 100 or even 200 Cluster Databases? Everyone with \nthe right configuration and in respect of the requisites?\nOr the postgres architecture doesn't provide similar numbers?\nWe are thinking to use the replica from near 200 databases around the \ninternet on a single db server.\nDoes anyone already did something like this?\n\nBTW, this is my first email to postgresql mailing list. If I'm doing \nsomething wrong do not hesitate to correct me :)\n\nThanks\nDavo\n",
"msg_date": "Wed, 19 Oct 2011 11:46:42 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How many Cluster database on a single server"
},
{
"msg_contents": "On 10/19/11 2:46 AM, [email protected] wrote:\n> Hi everybody,\n> I googled a bit around and also checked the mailing lists but I still can't make an idea. We plan to use postgres 9 and the Cluster Database Replica.\n> My colleagues are asking how many Cluster Databases (initdb) can I create and run on a single server. I mean, supposed my server has the resources, can I create 100 or even 200 Cluster Databases? Everyone with the right configuration and in respect of the requisites?\n> Or the postgres architecture doesn't provide similar numbers?\n> We are thinking to use the replica from near 200 databases around the internet on a single db server.\nYou don't need to do initdb on each one. Postgres can create many databases on a single server and manage them without difficulty.\n\nWe currently operate about 300 databases on a single server. Most are small, and one is an aggregate of all the small ones. I believe there are sites that have >1000 separate databases on one server.\n\nPostgres has a slightly different concept of a \"database\" than Oracle or MySQL, which is why your question about initdb is slightly off. You can indeed create several separate instances of Postgres (separate initdb for each), but the only reason you ever need to do that is if you're running different versions of Postgres (like 8.4 and 9.0) simultaneously.\n\nPostgres runs into problems when the total number of objects (tables, views, sequences, ...) across all databases gets very large, where \"very large\" is ill defined but is somewhere between a few hundred thousand and a million. We once had a rogue process that created 5 million tables, and we had to completely abandon the installation because of some sort of N^2 phenomenon that made it impossible to even use pg_dump to save and restore the system. So the advice is, \"don't do dumb stuff like that\" and you should be able to manage many databases.\n\nCraig\n\n",
"msg_date": "Wed, 19 Oct 2011 06:54:30 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How many Cluster database on a single server"
},
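A sketch of the single-cluster layout described above: one postmaster serving many databases, plus a quick per-database size check. The database names are placeholders:

    CREATE DATABASE site_001;
    CREATE DATABASE site_002;
    -- ... one database per remote site, all in the same cluster

    SELECT datname, pg_size_pretty(pg_database_size(datname))
    FROM pg_database
    ORDER BY datname;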
{
"msg_contents": "Hi Craig,\nthanks for your reply. I think I need to add some details on my \nquestion, like why we would need more than one Cluster Database. We are \nthinking to use the Streaming Replica feature to keep in sync a number \nof little DB servers around the net. The replica should happen on one or \nmore centralized servers. I didn't tested the replica personally bus as \nI can see, it syncs the whole Cluster DB. So, on the centralized \nserver(s), we will have perfect copies of the Cluster Databases.\nWe sure need to test this configuration but first of all I was wondering \nif there are known drawbacks.\nThanks again.\n\n\nOn 10/19/2011 03:54 PM, Craig James wrote:\n> On 10/19/11 2:46 AM, [email protected] wrote:\n>> Hi everybody,\n>> I googled a bit around and also checked the mailing lists but I still \n>> can't make an idea. We plan to use postgres 9 and the Cluster \n>> Database Replica.\n>> My colleagues are asking how many Cluster Databases (initdb) can I \n>> create and run on a single server. I mean, supposed my server has the \n>> resources, can I create 100 or even 200 Cluster Databases? Everyone \n>> with the right configuration and in respect of the requisites?\n>> Or the postgres architecture doesn't provide similar numbers?\n>> We are thinking to use the replica from near 200 databases around the \n>> internet on a single db server.\n> You don't need to do initdb on each one. Postgres can create many \n> databases on a single server and manage them without difficulty.\n>\n> We currently operate about 300 databases on a single server. Most are \n> small, and one is an aggregate of all the small ones. I believe there \n> are sites that have >1000 separate databases on one server.\n>\n> Postgres has a slightly different concept of a \"database\" than Oracle \n> or MySQL, which is why your question about initdb is slightly off. \n> You can indeed create several separate instances of Postgres (separate \n> initdb for each), but the only reason you ever need to do that is if \n> you're running different versions of Postgres (like 8.4 and 9.0) \n> simultaneously.\n>\n> Postgres runs into problems when the total number of objects (tables, \n> views, sequences, ...) across all databases gets very large, where \n> \"very large\" is ill defined but is somewhere between a few hundred \n> thousand and a million. We once had a rogue process that created 5 \n> million tables, and we had to completely abandon the installation \n> because of some sort of N^2 phenomenon that made it impossible to even \n> use pg_dump to save and restore the system. So the advice is, \"don't \n> do dumb stuff like that\" and you should be able to manage many databases.\n>\n> Craig\n>\n>\n\n",
"msg_date": "Wed, 19 Oct 2011 17:02:54 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How many Cluster database on a single server"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> wrote:\n \n> We are thinking to use the Streaming Replica feature to keep in\n> sync a number of little DB servers around the net. The replica\n> should happen on one or more centralized servers. I didn't tested\n> the replica personally bus as I can see, it syncs the whole\n> Cluster DB. So, on the centralized server(s), we will have perfect\n> copies of the Cluster Databases. We sure need to test this\n> configuration but first of all I was wondering if there are known\n> drawbacks.\n \nWe do something very much like this with about 100 standby database\nclusters on a single machine. We don't have any illusion that we\ncould switch to one of these for a normal production load and have\ngood performance with all of these competing for resources -- it's\nprimarily to confirm that the PITR backup process is working and\nstaying up to date, and to provide a quick source for a copy to a\nstandby production server.\n \nThe one thing I would strongly recommend is that you use a separate\nOS user as the owner of each cluster's data directory (and, of\ncourse, to run the cluster's service). We didn't initially do this,\nand had problems on recovery when the server crashed. If you search\nthe archives you can probably dig up all the details on why this is\nan issue and why separate users is a good solution; but really, this\nis important.\n \n-Kevin\n",
"msg_date": "Wed, 19 Oct 2011 10:13:42 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How many Cluster database on a single server"
},
{
"msg_contents": "On Wed, Oct 19, 2011 at 9:02 AM, [email protected]\n<[email protected]> wrote:\n> Hi Craig,\n> thanks for your reply. I think I need to add some details on my question,\n> like why we would need more than one Cluster Database. We are thinking to\n> use the Streaming Replica feature to keep in sync a number of little DB\n> servers around the net. The replica should happen on one or more centralized\n> servers. I didn't tested the replica personally bus as I can see, it syncs\n> the whole Cluster DB. So, on the centralized server(s), we will have perfect\n> copies of the Cluster Databases.\n> We sure need to test this configuration but first of all I was wondering if\n> there are known drawbacks.\n\nThe problem with having so many clusters on one machine is the shared\nmemory that each one needs. Even with a relatively small shared\nmemory segment of say 16MB, with 100 clusters you're going to be using\n1600MB of memory on that machine for shared memory.\n\nYou might be better off with one cluster and using slony to replicate\njust the parts that need replication.\n",
"msg_date": "Wed, 19 Oct 2011 09:44:39 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How many Cluster database on a single server"
},
{
"msg_contents": "On 10/19/2011 05:46 PM, [email protected] wrote:\n\n> My colleagues are asking how many Cluster Databases (initdb) can I\n> create and run on a single server. I mean, supposed my server has the\n> resources, can I create 100 or even 200 Cluster Databases?\n\nYep. It won't be fast, but it'll work.\n\nYou'll have two performance problems to deal with:\n\n- The memory, CPU and disk I/O overhead of all those extra postmasters, \nbgwriters, autovacuum daemons etc running for each cluster; and\n\n- having to split the available shared memory up between each cluster, \nso no single cluster gets very much shared memory to use for shared_buffers.\n\nIf you keep your shared_buffers low, it should work just fine, but it \nwon't perform as well as a single PostgreSQL cluster with lots of databases.\n\nIn the future I'm hoping someone'll be enthusiastic enough to / need to \nadd support split WAL logging or partial replication so this sort of \nthing isn't necessary. For now it does seem to be the best way to handle \ncases where different databases need different replication.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 20 Oct 2011 11:34:03 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How many Cluster database on a single server"
},
{
"msg_contents": "Thanks everybody for the suggestions. Now I have a better idea on which \ndirection proceed. When we will have some results we will post again to \nshare.\nThanks again!\n\n\nOn 10/19/2011 05:13 PM, Kevin Grittner wrote:\n> \"[email protected]\"<[email protected]> wrote:\n>\n>> We are thinking to use the Streaming Replica feature to keep in\n>> sync a number of little DB servers around the net. The replica\n>> should happen on one or more centralized servers. I didn't tested\n>> the replica personally bus as I can see, it syncs the whole\n>> Cluster DB. So, on the centralized server(s), we will have perfect\n>> copies of the Cluster Databases. We sure need to test this\n>> configuration but first of all I was wondering if there are known\n>> drawbacks.\n>\n> We do something very much like this with about 100 standby database\n> clusters on a single machine. We don't have any illusion that we\n> could switch to one of these for a normal production load and have\n> good performance with all of these competing for resources -- it's\n> primarily to confirm that the PITR backup process is working and\n> staying up to date, and to provide a quick source for a copy to a\n> standby production server.\n>\n> The one thing I would strongly recommend is that you use a separate\n> OS user as the owner of each cluster's data directory (and, of\n> course, to run the cluster's service). We didn't initially do this,\n> and had problems on recovery when the server crashed. If you search\n> the archives you can probably dig up all the details on why this is\n> an issue and why separate users is a good solution; but really, this\n> is important.\n>\n> -Kevin\n\n",
"msg_date": "Fri, 21 Oct 2011 08:57:48 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How many Cluster database on a single server"
}
] |
[
{
"msg_contents": "Hi,\n\nIn PostgreSQL, is there any performance difference between queries written\nusing \"explicit join notation\" vs \"implicit join notation\" in complex\nqueries?\n\nEXAMPLE: Simple \"explicit join notation\"\nSELECT *\nFROM employee INNER JOIN department\nON employee.DepartmentID = department.DepartmentID;\n\nEXAMPLE: Simple \"implicit join notation\"\nSELECT *\nFROM employee, department \nWHERE employee.DepartmentID = department.DepartmentID;\n\nRegards,\nGnanam\n\n\n",
"msg_date": "Wed, 19 Oct 2011 16:09:44 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inner Join - Explicit vs Implicit Join Performance"
},
{
"msg_contents": "Hello\n\nno, there is no difference - you can check it via EXPLAIN statement\n\nRegards\n\nPavel Stehule\n\n2011/10/19 Gnanakumar <[email protected]>:\n> Hi,\n>\n> In PostgreSQL, is there any performance difference between queries written\n> using \"explicit join notation\" vs \"implicit join notation\" in complex\n> queries?\n>\n> EXAMPLE: Simple \"explicit join notation\"\n> SELECT *\n> FROM employee INNER JOIN department\n> ON employee.DepartmentID = department.DepartmentID;\n>\n> EXAMPLE: Simple \"implicit join notation\"\n> SELECT *\n> FROM employee, department\n> WHERE employee.DepartmentID = department.DepartmentID;\n>\n> Regards,\n> Gnanam\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 19 Oct 2011 13:14:24 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner Join - Explicit vs Implicit Join Performance"
}
] |
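The two notations can be compared directly with EXPLAIN; for the example tables from the question, both forms should normally produce the same plan:

    EXPLAIN SELECT *
    FROM employee INNER JOIN department
    ON employee.DepartmentID = department.DepartmentID;

    EXPLAIN SELECT *
    FROM employee, department
    WHERE employee.DepartmentID = department.DepartmentID;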
[
{
"msg_contents": "Hi\n\nI'm a postgres novice so ....\n\nI have this fairly simple table\n-------------------------------------------------\n device integer not null,\n group integer not null,\n datum timestamp without time zone not null,\n val1 numeric(7,4) not null default 0.000,\n val2 numeric(7,4) not null default 0.000\n-------------------------------------------------\n\nThe device column is a foreign-key to the PK of my device table.\nand I have a UNIQUE INDEX on 3 columns device, group, datum\n\nThis is just a test database and I want to keep the \"datum\" values\n(timestamps that span approx 1 month) all bound to CURRENT_DATE().\n\nSo I thought I’d just run this once (via cron) every morning.\n BEGIN;\n DROP INDEX data_unique;\n UPDATE data SET datum = (data.datum + interval '24 hours');\n CREATE UNIQUE INDEX data_unique ON public.data USING BTREE\n(device, group, datum);\n COMMIT;\n\nBut\n1.\tit’s taking forever and\n2.\tI’m seeing that my disk is filling up real fast.\n\nAny suggestions?\n\nAlan\n",
"msg_date": "Wed, 19 Oct 2011 08:03:48 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "delete/recreate indexes"
},
{
"msg_contents": "On Wed, 2011-10-19 at 08:03 -0700, alan wrote:\n> So I thought I’d just run this once (via cron) every morning.\n> BEGIN;\n> DROP INDEX data_unique;\n> UPDATE data SET datum = (data.datum + interval '24 hours');\n> CREATE UNIQUE INDEX data_unique ON public.data USING BTREE\n> (device, group, datum);\n> COMMIT;\n> \n> But\n> 1.\tit’s taking forever and\n> 2.\tI’m seeing that my disk is filling up real fast.\n\nAn unrestricted update will end up rewriting the whole table. It's\nadvisable to run VACUUM afterward, so that the wasted space can be\nreclaimed. What version are you on? Do you have autovacuum enabled?\n\nAlso, to take a step back, why do you try to keep the timestamps\nchanging like that? Why not store the information you need in the record\n(e.g. insert time as well as the datum) and then compute the result you\nneed using a SELECT (or make it a view for convenience)? Fundamentally,\nthese records aren't changing, you are just trying to interpret them in\nthe context of the current day. That should be done using a SELECT, not\nan UPDATE.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Wed, 19 Oct 2011 19:51:09 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: delete/recreate indexes"
},
{
"msg_contents": "> An unrestricted update will end up rewriting the whole table. \r\n> It's advisable to run VACUUM afterward, so that the wasted \r\n> space can be reclaimed. What version are you on? Do you have \r\n> autovacuum enabled?\r\n> \r\n> Also, to take a step back, why do you try to keep the \r\n> timestamps changing like that? Why not store the information \r\n> you need in the record (e.g. insert time as well as the \r\n> datum) and then compute the result you need using a SELECT \r\n> (or make it a view for convenience)? Fundamentally, these \r\n> records aren't changing, you are just trying to interpret \r\n> them in the context of the current day. That should be done \r\n> using a SELECT, not an UPDATE.\r\n> \r\n\r\nI like Jeff's idea of redefining the problem. If you need the data to contain dates in the last 30 days, you might want to consider storing an interval, then using a view that includes a calculation using CURRENT_DATE().\r\n\r\nRegards,\r\nPaul Bort\r\nSystems Engineer\r\nTMW Systems, Inc.\r\[email protected]\r\n216 831 6606 x2233\r\n216 8313606 (fax)\r\n\r\n ",
"msg_date": "Thu, 20 Oct 2011 13:24:45 +0000",
"msg_from": "\"Bort, Paul\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: delete/recreate indexes"
},
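A rough sketch of the suggestion from Jeff and Paul: store a fixed offset instead of a moving timestamp, and let a view apply CURRENT_DATE at query time. The column and view names are made up:

    ALTER TABLE data ADD COLUMN day_offset interval;   -- e.g. '-3 days'

    CREATE VIEW data_relative AS
    SELECT device, "group",            -- quoted: GROUP is a reserved word
           CURRENT_DATE + day_offset AS datum,
           val1, val2
    FROM data;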
{
"msg_contents": "Thanks Jeff,\n\nOn Oct 20, 4:51 am, [email protected] (Jeff Davis) wrote:\n> Also, to take a step back, why do you try to keep the timestamps\n> changing like that? Why not store the information you need in the record\n> (e.g. insert time as well as the datum) and then compute the result you\n> need using a SELECT (or make it a view for convenience)? Fundamentally,\n> these records aren't changing, you are just trying to interpret them in\n> the context of the current day. That should be done using a SELECT, not\n> an UPDATE.\n\nWell this is not the way my \"production\" table is getting updated.\nThis was a developer's test DB so I thought the update statement would\nbe a\nquick way to just shift all the values.\n\nTo mimic how my \"production\" database is being updated I should be\ndoing\nthis once each morning:\n\n 1. delete the old entries older than 6 days (i.e.: my table holds\none week's data)\n 2. add new entries for yesterday\n\nI'm doing this via a perl script. For 1. I just do a\nDELETE FROM device WHERE datum < (CURRENT_DATE - interval ' 7 days' )\n\nFor 2. I tried this but I get an \"invalid input syntax for type\ntimestamp:\" error:\n\tmy $val1 = rand(100);\n\tmy $val2 = rand(100);\n\tmy $stmt = \"INSERT INTO data (device,group,datum,val1,val2)\nVALUES(?,?,?,?,?)\";\n my $insert = $dbh->prepare($stmt) or die $dbh->errstr;\n\tmy $timestamp = \"TO_TIMESTAMP(text(CURRENT_DATE - interval '1\nday'),'YYYY-MM-DD HH24:MI:SS')\";\n\t$insert->execute($device,$groupid,$timestamp,$val1,$val2));\n\nAlan\n",
"msg_date": "Mon, 24 Oct 2011 07:08:45 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: delete/recreate indexes"
}
] |
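The "invalid input syntax for type timestamp" error above comes from binding the TO_TIMESTAMP(...) expression as a string: a placeholder is always passed as a literal value, never interpreted as SQL. Putting the date arithmetic into the statement itself avoids that (DBI-style ? placeholders shown; "group" must be quoted because it is a reserved word):

    INSERT INTO data (device, "group", datum, val1, val2)
    VALUES (?, ?, CURRENT_DATE - interval '1 day', ?, ?);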
[
{
"msg_contents": "I recently ran a query against our production database and saw several\ndisused indexes. Is there a performance harm in having disused\nindexes out there?\n\nOf course, I will be checking our code base and with developers to\nensure that these indexes aren't being used programmatically to\nprevent redundant inserts and the like. ..\n",
"msg_date": "Wed, 19 Oct 2011 08:24:00 -0700 (PDT)",
"msg_from": "Elisa <[email protected]>",
"msg_from_op": true,
"msg_subject": "disused indexes and performance?"
},
{
"msg_contents": "On Wed, Oct 19, 2011 at 9:24 AM, Elisa <[email protected]> wrote:\n> I recently ran a query against our production database and saw several\n> disused indexes. Is there a performance harm in having disused\n> indexes out there?\n\nSure there is, they'll slow down writes and use more disk space.\n\n> Of course, I will be checking our code base and with developers to\n> ensure that these indexes aren't being used programmatically to\n> prevent redundant inserts and the like. ..\n\nUnique indexes are usually a small set compared to extra non-unique\nindexes. Also, some indexes may be seldomly used by make a very big\ndifference on the rare occasion they are used, for instance partial or\nfunctional or full test indexes.\n",
"msg_date": "Wed, 19 Oct 2011 20:01:42 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disused indexes and performance?"
}
] |
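The usual starting point for finding disused indexes is pg_stat_user_indexes; excluding unique indexes keeps the constraint-enforcing ones (Scott's caveat) out of the drop candidates:

    SELECT s.schemaname, s.relname, s.indexrelname, s.idx_scan
    FROM pg_stat_user_indexes s
    JOIN pg_index i ON i.indexrelid = s.indexrelid
    WHERE s.idx_scan = 0
      AND NOT i.indisunique
    ORDER BY s.relname, s.indexrelname;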
[
{
"msg_contents": "For example:\n\nTable A\n\n-id (PK)\n\n-name\n\n \n\nTable B\n\n-table_a_id (PK, FK)\n\n-address \n\n \n\nWhen I do an insert on table B, the database check if value for column\n\"table_a_id\" exists in table A\n\nBut, if I do an update of column \"address\" of table B, does the database\ncheck again?\n\n \n\nMy question is due to the nature of and update in postgres, that basically\nis a new version \"insert\".\n\n \n\nThanks\n\n\nFor example:Table A-id (PK)-name Table B-table_a_id (PK, FK)-address When I do an insert on table B, the database check if value for column “table_a_id” exists in table ABut, if I do an update of column “address” of table B, does the database check again? My question is due to the nature of and update in postgres, that basically is a new version “insert”. Thanks",
"msg_date": "Wed, 19 Oct 2011 12:51:08 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "does update of column with no relation imply a relation check of\n\tother column?"
},
{
"msg_contents": "On 19 Oct 2011, at 17:51, Anibal David Acosta wrote:\n\n> For example:\n> Table A\n> -id (PK)\n> -name\n> \n> Table B\n> -table_a_id (PK, FK)\n> -address\n> \n> When I do an insert on table B, the database check if value for column “table_a_id” exists in table A\n> But, if I do an update of column “address” of table B, does the database check again?\n> \n> My question is due to the nature of and update in postgres, that basically is a new version “insert”.\n\n\nIn short - I believe it does. No reason for it not to. \n\n\nOn 19 Oct 2011, at 17:51, Anibal David Acosta wrote:For example:Table A-id (PK)-name Table B-table_a_id (PK, FK)-address When I do an insert on table B, the database check if value for column “table_a_id” exists in table ABut, if I do an update of column “address” of table B, does the database check again? My question is due to the nature of and update in postgres, that basically is a new version “insert”.In short - I believe it does. No reason for it not to.",
"msg_date": "Wed, 19 Oct 2011 18:42:59 +0200",
"msg_from": "Greg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does update of column with no relation imply a relation check of\n\tother column?"
},
{
"msg_contents": "On Wed, Oct 19, 2011 at 12:42 PM, Greg Jaskiewicz <[email protected]> wrote:\n> For example:\n> Table A\n> -id (PK)\n> -name\n>\n> Table B\n> -table_a_id (PK, FK)\n> -address\n>\n> When I do an insert on table B, the database check if value for column\n> “table_a_id” exists in table A\n> But, if I do an update of column “address” of table B, does the database\n> check again?\n>\n> My question is due to the nature of and update in postgres, that basically\n> is a new version “insert”.\n>\n> In short - I believe it does. No reason for it not to.\n\nI just tested this, and it seems not.\n\nrhaas=# create table a (id serial primary key);\nNOTICE: CREATE TABLE will create implicit sequence \"a_id_seq\" for\nserial column \"a.id\"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"a_pkey\" for table \"a\"\nCREATE TABLE\nrhaas=# create table b (table_a_id integer primary key references a\n(id), address text);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"b_pkey\" for table \"b\"\nCREATE TABLE\nrhaas=# insert into a DEFAULT VALUES ;\nINSERT 0 1\nrhaas=# insert into b values (1);\nINSERT 0 1\n\nThen, in another session:\n\nrhaas=# begin;\nBEGIN\nrhaas=# lock a;\nLOCK TABLE\n\nBack to the first session:\n\nrhaas=# update b set address = 'cow';\nUPDATE 1\nrhaas=# select * from b;\n table_a_id | address\n------------+---------\n 1 | cow\n(1 row)\n\nrhaas=# update b set table_a_id = table_a_id + 1;\n<blocks>\n\nSo it seems that, when the fk field was unchanged, nothing was done\nthat required accessing table a; otherwise, the access exclusive lock\nheld by the other session would have blocked it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 31 Oct 2011 14:19:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does update of column with no relation imply a relation\n\tcheck of other column?"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Oct 19, 2011 at 12:42 PM, Greg Jaskiewicz <[email protected]> wrote:\n>> When I do an insert on table B, the database check if value for column\n>> �table_a_id� exists in table A\n>> But, if I do an update of column �address� of table B, does the database\n>> check again?\n\n> I just tested this, and it seems not.\n\nIt will not, unless you update the same row more than once in a single\ntransaction. If you do that, it no longer has enough information to be\nsure the referencing value hasn't changed in that transaction, so it\nwill do a check.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Oct 2011 15:55:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does update of column with no relation imply a relation check of\n\tother column?"
}
] |
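Per Tom's note, the shortcut only applies while a row has been updated at most once in the current transaction; a second update of the same row triggers the check again even though only the non-FK column changes (tables as in Robert's test):

    BEGIN;
    UPDATE b SET address = 'cow' WHERE table_a_id = 1;  -- FK column unchanged: no check
    UPDATE b SET address = 'pig' WHERE table_a_id = 1;  -- same row updated again in this
                                                        -- transaction: the FK is re-checked
    COMMIT;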
[
{
"msg_contents": "Hi there!\n\nWe have a dev machine running 9.0.1 (an i3 laptop, with a regular hard disk,\nwith 4GB of RAM, and a mostly untuned postgresql.conf file). The changed\nlines are:\n shared_buffers = 512MB\n temp_buffers = 48MB\n work_mem = 32MB\n maintenance_work_mem = 348MB\n checkpoint_segments = 10\n effective_cache_size = 512MB\n\nThe same database is loaded onto a production server running 9.1.1 (dual QC\nprocessors, RAID-10 SAS drives, 36GB of RAM), which replicates to a backup\nserver. This has a lot of changed properties:\n shared_buffers = 8500MB\n work_mem = 35MB\n maintenance_work_mem = 512MB\n wal_level = hot_standby\n checkpoint_segments = 50\n max_wal_senders = 3\n wal_keep_segments = 144\n random_page_cost = 1.0\n effective_cache_size = 16384MB\n effective_io_concurrency = 6\n\nThe same DB is loaded on both the production and the dev environment, and in\nall cases (about 5000 distinct different queries), the production\nenvironment is about 500x faster, except for one type of query (both\ndatabases were loaded from the same pg_dump on an 8.4.4 database):\n\n On the dev box, we have: http://explain.depesz.com/s/rwU - about 131\nseconds\n On the production box, we have: http://explain.depesz.com/s/3dt -\nabout .25 seconds\n\nFor the life of me, I don't understand why it would be slower. What can we\ndo to speed up this one query?\n\nBy the way, on 8.4.4, the query took about 84 seconds. I cannot understand\nwhy the 9.0 is so blazing fast, but 8.4.4 and 9.1.1 are slower. We've\nchecked the query results (they are identical) to make sure we're not\nmissing any data.\n\n\n-- \nAnthony\n\nHi there!We have a dev machine running 9.0.1 (an i3 laptop, with a regular hard disk, with 4GB of RAM, and a mostly untuned postgresql.conf file). The changed lines are: shared_buffers = 512MB\n temp_buffers = 48MB work_mem = 32MB maintenance_work_mem = 348MB checkpoint_segments = 10 effective_cache_size = 512MBThe same database is loaded onto a production server running 9.1.1 (dual QC processors, RAID-10 SAS drives, 36GB of RAM), which replicates to a backup server. This has a lot of changed properties:\n shared_buffers = 8500MB work_mem = 35MB maintenance_work_mem = 512MB wal_level = hot_standby checkpoint_segments = 50 max_wal_senders = 3 wal_keep_segments = 144\n random_page_cost = 1.0 effective_cache_size = 16384MB effective_io_concurrency = 6The same DB is loaded on both the production and the dev environment, and in all cases (about 5000 distinct different queries), the production environment is about 500x faster, except for one type of query (both databases were loaded from the same pg_dump on an 8.4.4 database):\n On the dev box, we have: http://explain.depesz.com/s/rwU - about 131 seconds On the production box, we have: http://explain.depesz.com/s/3dt - about .25 seconds\nFor the life of me, I don't understand why it would be slower. What can we do to speed up this one query?By the way, on 8.4.4, the query took about 84 seconds. I cannot understand why the 9.0 is so blazing fast, but 8.4.4 and 9.1.1 are slower. We've checked the query results (they are identical) to make sure we're not missing any data.\n-- Anthony",
"msg_date": "Fri, 21 Oct 2011 18:39:46 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.4.4, 9.0, and 9.1 Planner Differences"
},
{
"msg_contents": "Anthony Presley <[email protected]> writes:\n> We have a dev machine running 9.0.1 (an i3 laptop, with a regular hard disk,\n> with 4GB of RAM, and a mostly untuned postgresql.conf file). The changed\n> lines are:\n> shared_buffers = 512MB\n> temp_buffers = 48MB\n> work_mem = 32MB\n> maintenance_work_mem = 348MB\n> checkpoint_segments = 10\n> effective_cache_size = 512MB\n\n> The same database is loaded onto a production server running 9.1.1 (dual QC\n> processors, RAID-10 SAS drives, 36GB of RAM), which replicates to a backup\n> server. This has a lot of changed properties:\n> shared_buffers = 8500MB\n> work_mem = 35MB\n> maintenance_work_mem = 512MB\n> wal_level = hot_standby\n> checkpoint_segments = 50\n> max_wal_senders = 3\n> wal_keep_segments = 144\n> random_page_cost = 1.0\n> effective_cache_size = 16384MB\n> effective_io_concurrency = 6\n\nThat random_page_cost setting is going to have a huge effect on the\nplanner's choices, and the larger effective_cache_size setting will\nlikely affect plans too. I don't find it surprising in the least\nthat you're getting different plan choices ... and even less so when\nyour \"dev\" and \"production\" DBs aren't even the same major version.\nYou might want to think about making your dev environment more like\nyour production.\n\n> The same DB is loaded on both the production and the dev environment, and in\n> all cases (about 5000 distinct different queries), the production\n> environment is about 500x faster, except for one type of query (both\n> databases were loaded from the same pg_dump on an 8.4.4 database):\n\n> On the dev box, we have: http://explain.depesz.com/s/rwU - about 131\n> seconds\n> On the production box, we have: http://explain.depesz.com/s/3dt -\n> about .25 seconds\n\nDid you mislabel these? Because if you didn't, the numbers are right\nin line with what you say above. But anyway, the problem with the\nslower query appears to be poor rowcount estimates, leading the planner\nto use a nestloop join when it shouldn't. You haven't provided nearly\nenough context to let anyone guess why the estimates are off, other\nthan boilerplate suggestions like making sure the tables have been\nANALYZEd recently, and maybe increasing the statistics targets.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Oct 2011 11:58:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4.4, 9.0, and 9.1 Planner Differences "
},
{
"msg_contents": "On Sat, Oct 22, 2011 at 10:58 AM, Tom Lane <[email protected]> wrote:\n\n> Anthony Presley <[email protected]> writes:\n> > We have a dev machine running 9.0.1 (an i3 laptop, with a regular hard\n> disk,\n> > with 4GB of RAM, and a mostly untuned postgresql.conf file). The changed\n> > lines are:\n> > shared_buffers = 512MB\n> > temp_buffers = 48MB\n> > work_mem = 32MB\n> > maintenance_work_mem = 348MB\n> > checkpoint_segments = 10\n> > effective_cache_size = 512MB\n>\n> > The same database is loaded onto a production server running 9.1.1 (dual\n> QC\n> > processors, RAID-10 SAS drives, 36GB of RAM), which replicates to a\n> backup\n> > server. This has a lot of changed properties:\n> > shared_buffers = 8500MB\n> > work_mem = 35MB\n> > maintenance_work_mem = 512MB\n> > wal_level = hot_standby\n> > checkpoint_segments = 50\n> > max_wal_senders = 3\n> > wal_keep_segments = 144\n> > random_page_cost = 1.0\n> > effective_cache_size = 16384MB\n> > effective_io_concurrency = 6\n>\n> That random_page_cost setting is going to have a huge effect on the\n> planner's choices, and the larger effective_cache_size setting will\n> likely affect plans too. I don't find it surprising in the least\n> that you're getting different plan choices ... and even less so when\n> your \"dev\" and \"production\" DBs aren't even the same major version.\n> You might want to think about making your dev environment more like\n> your production.\n>\n\nTom - thanks for your input.\n\nUpgrading to 9.1.1 on the dev box is certainly next on our list ... I like\nto make sure that the dev team uses a MUCH slower box than the production\nserver, making sure that if the developers are making things fast for the\nmachines, it's really fast on the production box. For all of our queries\nexcept this one, this strategy is \"working\".\n\n> The same DB is loaded on both the production and the dev environment, and\n> in\n> > all cases (about 5000 distinct different queries), the production\n> > environment is about 500x faster, except for one type of query (both\n> > databases were loaded from the same pg_dump on an 8.4.4 database):\n>\n> > On the dev box, we have: http://explain.depesz.com/s/rwU - about\n> 131\n> > seconds\n> > On the production box, we have: http://explain.depesz.com/s/3dt -\n> > about .25 seconds\n>\n> Did you mislabel these? Because if you didn't, the numbers are right\n> in line with what you say above. But anyway, the problem with the\n> slower query appears to be poor rowcount estimates, leading the planner\n> to use a nestloop join when it shouldn't. You haven't provided nearly\n> enough context to let anyone guess why the estimates are off, other\n> than boilerplate suggestions like making sure the tables have been\n> ANALYZEd recently, and maybe increasing the statistics targets.\n>\n\nI *did* mis-label them. The 131 seconds is actually the production box.\n IE:\n production is ... http://explain.depesz.com/s/rwU\n\nThe .25 seconds is the development box. IE:\n development is ... http://explain.depesz.com/s/3dt\n\nI wasn't surprised that the plans are different. 
I was surprised that the\ndevelopment box *spanked* the production system.\n\nHere's the actual query:\nselect\n preference0_.*\nfrom\n preference preference0_, location location1_, employee employee2_\nwhere\n preference0_.employee_id=employee2_.id\n and preference0_.location_id=location1_.id\n and location1_.corporation_id=41197\n and employee2_.deleted='N'\n and preference0_.deleted='N'\n and\n (preference0_.id not in (\n select preference3_.id from preference preference3_, location\nlocation4_, employee employee5_\nwhere preference3_.employee_id=employee5_.id and\npreference3_.location_id=location4_.id\nand location4_.corporation_id=41197 and employee5_.deleted='N' and\npreference3_.deleted='N'\nand (preference3_.startDate>'2011-11-03 00:00:00' or\npreference3_.endDate<'2011-11-02 00:00:00'))\n ) and\n (preference0_.employee_id in\n (select employee6_.id from employee employee6_ inner join app_user\nuser7_ on employee6_.user_id=user7_.id\ninner join user_location userlocati8_ on user7_.id=userlocati8_.user_id,\nlocation location9_\nwhere userlocati8_.location_id=location9_.id and\nuserlocati8_.location_id=6800 and userlocati8_.deleted='N'\nand location9_.deleted='N' and employee6_.deleted='N')\n ) order by preference0_.date_created;\n\nI have tried setting the statistics on employee.user_id to be 100 and 1000,\nand the rest are the default (100).\n\nI've run both an \"ANALYZE\" and a \"VACUUM ANALYZE\" on the production system -\nboth \"generally\", and on each of the above tables (employee, app_user,\nlocation, preference).\n\nHere's an updated explain of the most recent attempt. About 5 minutes after\nI analyzed them:\n http://explain.depesz.com/s/G32\n\nWhat else would I need to provide?\n\n\n-- \nAnthony Presley\n\nOn Sat, Oct 22, 2011 at 10:58 AM, Tom Lane <[email protected]> wrote:\nAnthony Presley <[email protected]> writes:\n> We have a dev machine running 9.0.1 (an i3 laptop, with a regular hard disk,\n> with 4GB of RAM, and a mostly untuned postgresql.conf file). The changed\n> lines are:\n> shared_buffers = 512MB\n> temp_buffers = 48MB\n> work_mem = 32MB\n> maintenance_work_mem = 348MB\n> checkpoint_segments = 10\n> effective_cache_size = 512MB\n\n> The same database is loaded onto a production server running 9.1.1 (dual QC\n> processors, RAID-10 SAS drives, 36GB of RAM), which replicates to a backup\n> server. This has a lot of changed properties:\n> shared_buffers = 8500MB\n> work_mem = 35MB\n> maintenance_work_mem = 512MB\n> wal_level = hot_standby\n> checkpoint_segments = 50\n> max_wal_senders = 3\n> wal_keep_segments = 144\n> random_page_cost = 1.0\n> effective_cache_size = 16384MB\n> effective_io_concurrency = 6\n\nThat random_page_cost setting is going to have a huge effect on the\nplanner's choices, and the larger effective_cache_size setting will\nlikely affect plans too. I don't find it surprising in the least\nthat you're getting different plan choices ... and even less so when\nyour \"dev\" and \"production\" DBs aren't even the same major version.\nYou might want to think about making your dev environment more like\nyour production.Tom - thanks for your input.Upgrading to 9.1.1 on the dev box is certainly next on our list ... I like to make sure that the dev team uses a MUCH slower box than the production server, making sure that if the developers are making things fast for the machines, it's really fast on the production box. 
For all of our queries except this one, this strategy is \"working\".\n\n> The same DB is loaded on both the production and the dev environment, and in\n> all cases (about 5000 distinct different queries), the production\n> environment is about 500x faster, except for one type of query (both\n> databases were loaded from the same pg_dump on an 8.4.4 database):\n\n> On the dev box, we have: http://explain.depesz.com/s/rwU - about 131\n> seconds\n> On the production box, we have: http://explain.depesz.com/s/3dt -\n> about .25 seconds\n\nDid you mislabel these? Because if you didn't, the numbers are right\nin line with what you say above. But anyway, the problem with the\nslower query appears to be poor rowcount estimates, leading the planner\nto use a nestloop join when it shouldn't. You haven't provided nearly\nenough context to let anyone guess why the estimates are off, other\nthan boilerplate suggestions like making sure the tables have been\nANALYZEd recently, and maybe increasing the statistics targets.I *did* mis-label them. The 131 seconds is actually the production box. IE: production is ... http://explain.depesz.com/s/rwU\nThe .25 seconds is the development box. IE: development is ... http://explain.depesz.com/s/3dt\nI wasn't surprised that the plans are different. I was surprised that the development box *spanked* the production system.Here's the actual query:select preference0_.*\nfrom preference preference0_, location location1_, employee employee2_where preference0_.employee_id=employee2_.id and preference0_.location_id=location1_.id \n and location1_.corporation_id=41197 and employee2_.deleted='N' and preference0_.deleted='N' and (preference0_.id not in ( select preference3_.id from preference preference3_, location location4_, employee employee5_ \nwhere preference3_.employee_id=employee5_.id and preference3_.location_id=location4_.id and location4_.corporation_id=41197 and employee5_.deleted='N' and preference3_.deleted='N' \nand (preference3_.startDate>'2011-11-03 00:00:00' or preference3_.endDate<'2011-11-02 00:00:00')) ) and (preference0_.employee_id in (select employee6_.id from employee employee6_ inner join app_user user7_ on employee6_.user_id=user7_.id \ninner join user_location userlocati8_ on user7_.id=userlocati8_.user_id, location location9_ where userlocati8_.location_id=location9_.id and userlocati8_.location_id=6800 and userlocati8_.deleted='N' \nand location9_.deleted='N' and employee6_.deleted='N') ) order by preference0_.date_created;I have tried setting the statistics on employee.user_id to be 100 and 1000, and the rest are the default (100).\nI've run both an \"ANALYZE\" and a \"VACUUM ANALYZE\" on the production system - both \"generally\", and on each of the above tables (employee, app_user, location, preference).\nHere's an updated explain of the most recent attempt. About 5 minutes after I analyzed them: http://explain.depesz.com/s/G32\nWhat else would I need to provide?-- Anthony Presley",
"msg_date": "Sat, 22 Oct 2011 18:12:51 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.4.4, 9.0, and 9.1 Planner Differences"
},
{
"msg_contents": "Anthony Presley <[email protected]> writes:\n> I have tried setting the statistics on employee.user_id to be 100 and 1000,\n> and the rest are the default (100).\n\n> I've run both an \"ANALYZE\" and a \"VACUUM ANALYZE\" on the production system -\n> both \"generally\", and on each of the above tables (employee, app_user,\n> location, preference).\n\n> Here's an updated explain of the most recent attempt. About 5 minutes after\n> I analyzed them:\n> http://explain.depesz.com/s/G32\n\nLooks like the biggest estimation errors are on the location_id joins.\nMaybe you should be cranking up the stats targets on those columns.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Oct 2011 22:31:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4.4, 9.0, and 9.1 Planner Differences "
}
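A minimal sketch of the statistics-target bump suggested above (the column names are taken from the query earlier in the thread; adjust them to the real schema):

    ALTER TABLE preference ALTER COLUMN location_id SET STATISTICS 1000;
    ALTER TABLE user_location ALTER COLUMN location_id SET STATISTICS 1000;
    ANALYZE preference;
    ANALYZE user_location;

With a larger per-column target the planner keeps a bigger MCV list and histogram for those join columns, which normally tightens the row-count estimates that were pushing it toward the slow nestloop plan.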
] |
[
{
"msg_contents": "Hi there,,\nHow can use explain to estimate workload contains more than one query in the same time.\nsuch as\nexplain (q1,q2,q3)......i want the total cost for all queries in the workload using one explain ,,??\nregards..\n\nHi there,,How can use explain to estimate workload contains more than one query in the same time.such asexplain (q1,q2,q3)......i want the total cost for all queries in the workload using one explain ,,??regards..",
"msg_date": "Fri, 21 Oct 2011 18:11:41 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "explain workload"
},
{
"msg_contents": "Hi Radhya,\n\nMake multiple EXPLAIN requests, and add them up in your application, I \nguess?\n\n--\nRobins\nSr. PGDBA\nComodo India\n\nOn 10/22/2011 06:41 AM, Radhya sahal wrote:\n> such as\n> explain (q1,q2,q3)......i want the total cost for all queries in the\n> workload using one explain ,,??",
"msg_date": "Sun, 23 Oct 2011 16:08:19 +0530",
"msg_from": "Robins Tharakan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explain workload"
},
{
"msg_contents": "Robins Tharakan <[email protected]> writes:\n> Hi Radhya,\n> Make multiple EXPLAIN requests, and add them up in your application, I \n> guess?\n\nOr maybe contrib/auto_explain would help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Oct 2011 11:44:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explain workload "
}
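For reference, a minimal way to try contrib/auto_explain for a whole workload (a sketch; it assumes the contrib module is installed and that you have the rights to LOAD it):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;   -- log the plan of every statement
    SET auto_explain.log_analyze = true;     -- include actual run times
    -- now run q1, q2, q3 ... in this session and read the plans from the server log

Summing the logged times (or the top-node costs) afterwards gives a total for the workload without having to wrap all the queries in a single EXPLAIN.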
] |
[
{
"msg_contents": "Hi all\n\nI'd like to tune the following hstore-related query which selects all\nZoos from table osm_point:\n\nSELECT osm_id, name, tags\n FROM osm_point\n WHERE tags @> hstore('tourism','zoo')\n ORDER BY name;\n\n... given the following table and indexes definition:\n\nCREATE TABLE osm_point (\n osm_id integer,\n name text,\n tags hstore,\n way geometry\n)\n\nCREATE INDEX osm_point_index ON osm_point USING gist (way);\nCREATE INDEX osm_point_name_idx ON osm_point USING btree (name) WITH\n(FILLFACTOR=100);\nALTER TABLE osm_point CLUSTER ON osm_point_name_idx;\nCREATE INDEX osm_point_pkey ON osm_point USING btree (osm_id);\nCREATE INDEX osm_point_tags_idx ON osm_point USING gist (tags) WITH\n(FILLFACTOR=100);\n\n... and following statistics:\n* Live Tuples 9626138 (that's also what COUNT(*) returns)\n* Table Size 1029 MB\n* Toast Table Size 32 kB\n* Indexes Size 1381 MB (?)\n** osm_point_index 1029 MB\n** osm_point_name_idx 1029 MB\n** osm_point_pkey 1029 MB\n** osm_point_tags_idx 1029 MB\n\nPostgreSQL has version 9.0.4, runs on on Ubuntu Linux 10.04 LTS\n(64-Bit) with 1 vCPU and 1 GB vRAM.\nAdding more memory (say to total of 32 GB) would only postpone the problem.\nI already increased the PostgreSQL configuration of shared_buffers\n(using pgtune).\n\nNow EXPLAIN ANALYZE returns (if run several times):\nSort (cost=30819.51..30843.58 rows=9626 width=65) (actual\ntime=11.502..11.502 rows=19 loops=1)\n Sort Key: name\n Sort Method: quicksort Memory: 29kB\n -> Bitmap Heap Scan on osm_point (cost=313.21..30182.62 rows=9626\nwidth=65) (actual time=10.727..11.473 rows=19 loops=1)\n Recheck Cond: (tags @> 'tourism=>zoo'::hstore)\n -> Bitmap Index Scan on osm_point_tags_idx\n(cost=0.00..310.80 rows=9626 width=0) (actual time=10.399..10.399\nrows=591 loops=1)\n Index Cond: (tags @> 'tourism=>zoo'::hstore)\nTotal runtime: 11 ms\n\nFirst time the query lasts about 10 time longer (~ 1010 ms) - but I'd\nlike to get better results already in the first query.\n\n=> 1. When I add the \"actual time\" from EXPLAIN above, I get 11 + 10 +\n10ms which is three times greater than the 11ms reported. Why?\n=> 2. Why does the planner choose to sort first instead of sorting the\n(smaller) result query at the end the?\n=> 3. What could I do to speed up such queries (first time, i.e.\nwithout caching) besides simply adding more memory?\n\nYours, Stefan\n",
"msg_date": "Sun, 23 Oct 2011 01:33:53 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "hstore query: Any better idea than adding more memory?"
},
{
"msg_contents": "* Stefan Keller ([email protected]) wrote:\n> Adding more memory (say to total of 32 GB) would only postpone the problem.\n\nErm, seems like you're jumping to conclusions here...\n\n> First time the query lasts about 10 time longer (~ 1010 ms) - but I'd\n> like to get better results already in the first query.\n\nDo you mean first time after a database restart?\n\n> => 1. When I add the \"actual time\" from EXPLAIN above, I get 11 + 10 +\n> 10ms which is three times greater than the 11ms reported. Why?\n\nBecause they include the times from the nodes under them.\n\n> => 2. Why does the planner choose to sort first instead of sorting the\n> (smaller) result query at the end the?\n\nYou're reading the explain 'backwards' regarding time.. It *does* do\nthe sort last. Nodes which are indented feed the nodes above them, so\nthe bitmap index scan and recheck feed into the sort, hence the sort is\nactually done after. Can't really work any other way anyway, PG has to\nget the data before it can sort it..\n\n> => 3. What could I do to speed up such queries (first time, i.e.\n> without caching) besides simply adding more memory?\n\nThere didn't look like anything there that could really be done much\nfaster, at the plan level. It's not uncommon for people to\nintentionally get a box with more memory than the size of their\ndatabase, so everything is in memory.\n\nAt the end of the day, if the blocks aren't in memory then PG has to get\nthem from disk. If disk is slow, the query is going to be slow. Now,\nhopefully, you're hitting this table often enough with similar queries\nthat important, common, parts of the table and index are already in\nmemory, but there's no magic PG can perform to ensure that.\n\nIf there's a lot of updates/changes to this table, you might check if\nthere's a lot of bloat (check_postgres works great for this..).\nEliminating excessive bloat, if there is any, could help with all\naccesses to that table, of course, since it would reduce the amount of\ndata which would need to be.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 23 Oct 2011 19:25:08 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hstore query: Any better idea than adding more\n memory?"
},
{
"msg_contents": "Hi Stephen\n\nThanks for your answer and hints.\n\n2011/10/24 Stephen Frost <[email protected]> wrote:\n> * Stefan Keller ([email protected]) wrote:\n>> Adding more memory (say to total of 32 GB) would only postpone the problem.\n> Erm, seems like you're jumping to conclusions here...\n\nSorry. I actually only wanted to report here what's special in my\npostgresql.conf.\n\n>> First time the query lasts about 10 time longer (~ 1010 ms) - but I'd\n>> like to get better results already in the first query.\n>\n> Do you mean first time after a database restart?\n\nNo: I simply meant doing the query when one can assume that the query\nresult is not yet in the postgres' cache.\nYou can check that here online: http://labs.geometa.info/postgisterminal\n\n>> => 1. When I add the \"actual time\" from EXPLAIN above, I get 11 + 10 +\n>> 10ms which is three times greater than the 11ms reported. Why?\n>\n> Because they include the times from the nodes under them.\n>\n>> => 2. Why does the planner choose to sort first instead of sorting the\n>> (smaller) result query at the end the?\n>\n> You're reading the explain 'backwards' regarding time.. It *does* do\n> the sort last. Nodes which are indented feed the nodes above them, so\n> the bitmap index scan and recheck feed into the sort, hence the sort is\n> actually done after. Can't really work any other way anyway, PG has to\n> get the data before it can sort it..\n\nOh, thanks. I should have realized that.\n\nBut then what should the arrow (\"->\") wants to stand for?\nSort (cost=30819.51...\n -> Bitmap Heap Scan on osm_point (cost=313.21...\n -> Bitmap Index Scan on osm_point_tags_idx\n\nI would suggest that the inverse arrow would be more intuitive:\nSort (cost=30819.51...\n <- Bitmap Heap Scan on osm_point (cost=313.21...\n <- Bitmap Index Scan on osm_point_tags_idx\n\n>> => 3. What could I do to speed up such queries (first time, i.e.\n>> without caching) besides simply adding more memory?\n>\n> There didn't look like anything there that could really be done much\n> faster, at the plan level. It's not uncommon for people to\n> intentionally get a box with more memory than the size of their\n> database, so everything is in memory.\n>\n> At the end of the day, if the blocks aren't in memory then PG has to get\n> them from disk. If disk is slow, the query is going to be slow. Now,\n> hopefully, you're hitting this table often enough with similar queries\n> that important, common, parts of the table and index are already in\n> memory, but there's no magic PG can perform to ensure that.\n>\n> If there's a lot of updates/changes to this table, you might check if\n> there's a lot of bloat (check_postgres works great for this..).\n> Eliminating excessive bloat, if there is any, could help with all\n> accesses to that table, of course, since it would reduce the amount of\n> data which would need to be.\n\nThanks for the hint.\n\nBut there are only periodic updates (currently once a night) and these\nare actually done by 1. truncating the database and 2. bulk loading\nall the stuff, then 3. reindexing.\n\nIf one tries to completely fit the whole data into memory, then to me\nPostgreSQL features borrowed from in-memory databases become\ninteresting.\n\n=> Is there anything else than \"index-only scans\" (planned for 9.2?)\nwhich could be of interest here?\n\nStefan\n",
"msg_date": "Mon, 24 Oct 2011 02:19:57 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hstore query: Any better idea than adding more memory?"
},
{
"msg_contents": "* Stefan Keller ([email protected]) wrote:\n> >> Adding more memory (say to total of 32 GB) would only postpone the problem.\n> > Erm, seems like you're jumping to conclusions here...\n> \n> Sorry. I actually only wanted to report here what's special in my\n> postgresql.conf.\n\nMy comment was referring to \"postpone the problem\".\n\n> No: I simply meant doing the query when one can assume that the query\n> result is not yet in the postgres' cache.\n> You can check that here online: http://labs.geometa.info/postgisterminal\n\nIf it's not in PG's cache, and it's not in the OS's cache, then it's\ngotta come from disk. :/\n\n> But then what should the arrow (\"->\") wants to stand for?\n\nEh.. I wouldn't read the arrows as meaning all that much. :) They're\nthere as a visual aide only, aiui. Also, explain really shows the\n*plan* that PG ended up picking for this query, thinking about it that\nway might help.\n\n> I would suggest that the inverse arrow would be more intuitive:\n\nPerhaps, but don't get your hopes up about us breaking explain-reading\napplications by changing that. :)\n\n> But there are only periodic updates (currently once a night) and these\n> are actually done by 1. truncating the database and 2. bulk loading\n> all the stuff, then 3. reindexing.\n\nWell, that would certainly help avoid bloat. :)\n\n> If one tries to completely fit the whole data into memory, then to me\n> PostgreSQL features borrowed from in-memory databases become\n> interesting.\n\n... huh? I don't know of any system that's going to be able to make\nsure that all your queries perform like in-memory queries when you don't\nhave enough memory to actually hold it all..\n\n> => Is there anything else than \"index-only scans\" (planned for 9.2?)\n> which could be of interest here?\n\nindex-only scans may be able to help with this as it may be able to\nreduce the amount of disk i/o that has to be done, and reduce the amount\nof memory needed to get everything into memory, but if you don't have\nenough memory then you're still going to see a performance difference\nbetween querying data that's cached and data that has to come from disk.\n\nI don't know if index-only scans will, or will not, be able to help with\nthese specific queries. I suspect they won't be much help since the\ndata being returned has to be in the index. If I remember your query,\nyou were pulling out data which wasn't actaully in the index that was\nbeing used to filter the result set. Also, I don't know if we'll have\nindex-only scans for GIST/GIN indexes in 9.2 or if it won't be available\ntill a later release. AIUI, only btree indexes can perform index-only\nscans in the currently committed code.\n\nNow, we've also been discussing ways to have PG automatically\nre-populate shared buffers and possibly OS cache based on what was in\nmemory at the time of the last shut-down, but I'm not sure that would\nhelp your case either since you're rebuilding everything every night and\nthat's what's trashing your buffers (because everything ends up getting\nmoved around). You might actually want to consider if that's doing more\nharm than good for you. If you weren't doing that, then the cache\nwouldn't be getting destroyed every night..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 23 Oct 2011 20:41:41 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hstore query: Any better idea than adding more\n memory?"
}
] |
[
{
"msg_contents": "If I read the xact_commit field returned by \"Select * from pg_stat_database\"\nmultiple times, and then average the difference between consecutive values,\nwould this give an approx idea about the transactions per second in my\ndatabase?\n\nDoes this figure include the number of select statements being executed in\nthe db?\n\n\nWith regards\n\nAmitabh\n\nIf I read the xact_commit field returned by \"Select * from pg_stat_database\" multiple times, and then average the difference between consecutive values, would this give an approx idea about the transactions per second in my database? \nDoes this figure include the number of select statements being executed in the db? With regardsAmitabh",
"msg_date": "Mon, 24 Oct 2011 19:34:20 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Usage of pg_stat_database"
},
{
"msg_contents": "On Mon, 2011-10-24 at 19:34 +0530, Amitabh Kant wrote:\n> If I read the xact_commit field returned by \"Select * from\n> pg_stat_database\" multiple times, and then average the difference\n> between consecutive values, would this give an approx idea about the\n> transactions per second in my database? \n\nYes, approximately. It relies on the stats messages.\n\n> Does this figure include the number of select statements being\n> executed in the db? \n> \nYes, a quick test shows that select statements count as well. However,\nit seems that very simple selects, like \"SELECT 1\" might not send the\nstats messages right away, and they might end up getting lost.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Thu, 27 Oct 2011 22:54:51 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Usage of pg_stat_database"
},
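A rough sketch of the sampling described above, restricted to the current database:

    -- first sample
    SELECT now() AS ts, xact_commit
    FROM pg_stat_database
    WHERE datname = current_database();

    -- wait N seconds, take a second sample, then approximate TPS as
    -- (xact_commit_2 - xact_commit_1) / N

The counters are cumulative since the last statistics reset, so only the difference between two samples (divided by the elapsed time) is meaningful.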
{
"msg_contents": "Thanks Jeff. This should help in getting a fairly approximate values.\n\nAmitabh\n\nOn Fri, Oct 28, 2011 at 11:24 AM, Jeff Davis <[email protected]> wrote:\n\n> On Mon, 2011-10-24 at 19:34 +0530, Amitabh Kant wrote:\n> > If I read the xact_commit field returned by \"Select * from\n> > pg_stat_database\" multiple times, and then average the difference\n> > between consecutive values, would this give an approx idea about the\n> > transactions per second in my database?\n>\n> Yes, approximately. It relies on the stats messages.\n>\n> > Does this figure include the number of select statements being\n> > executed in the db?\n> >\n> Yes, a quick test shows that select statements count as well. However,\n> it seems that very simple selects, like \"SELECT 1\" might not send the\n> stats messages right away, and they might end up getting lost.\n>\n> Regards,\n> Jeff Davis\n>\n>\n>\n\nThanks Jeff. This should help in getting a fairly approximate values.\nAmitabhOn Fri, Oct 28, 2011 at 11:24 AM, Jeff Davis <[email protected]> wrote:\nOn Mon, 2011-10-24 at 19:34 +0530, Amitabh Kant wrote:\n> If I read the xact_commit field returned by \"Select * from\n> pg_stat_database\" multiple times, and then average the difference\n> between consecutive values, would this give an approx idea about the\n> transactions per second in my database?\n\nYes, approximately. It relies on the stats messages.\n\n> Does this figure include the number of select statements being\n> executed in the db?\n>\nYes, a quick test shows that select statements count as well. However,\nit seems that very simple selects, like \"SELECT 1\" might not send the\nstats messages right away, and they might end up getting lost.\n\nRegards,\n Jeff Davis",
"msg_date": "Fri, 28 Oct 2011 12:07:44 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Usage of pg_stat_database"
}
] |
[
{
"msg_contents": "Hello\n\nI need to choose between Intel 320 , Intel 510 and OCZ Vertex 3 SSD's for my\ndatabase server. From recent reading in the list and other places, I have\ncome to understand that OCZ Vertex 3 should not be used, Intel 510 uses a\nMarvel controller while Intel 320 had a nasty bug which has been rectified.\nSo the list narrows down to only 510 and 320, unless I have understood the\nOCZ Vertex reviews incorrectly.\n\nThe server would itself be built along these lines: Dual CPU Xeon 5620, 32\nor 48 GB RAM, 2 SAS 10K disk in RAID 1 for OS, 2 SAS 10K disk in RAID 1 for\npg_xlog and 4 SSD in RAID 10 for data directory (overkill??). OS would be\nFreeBSD 8.2 (I would be tuning the sysctl variables). PG version would be\n9.1 with replication set to another machine (Dual CPU Xeon 54xx, 32 GB RAM,\n6 15K SAS 146 GB: 4 in RAID 10 for data and 2 in RAID 1 for OS + pg_xlog).\nThe second machine hosts my current db , and there is not much of an issue\nwith the performance. We need better redundancy now(current was to take a\ndump/backup every 12 hours), so the new machine.\n\nMy database itself is not very big, approx 40 GB as of now, and would not\ngrow beyond 80 GB in the next year or two. There are some tables where\ninsert & updates are fairly frequent. From what I could gather, we are not\ndoing more than 300-400 tps at the moment, and the growth should not be very\nhigh in the short term.\n\nHope someone can give some pointers to which SSD I should go for at the\nmoment.\n\n\nAmitabh\n\nHelloI need to choose between Intel 320 , Intel 510 and OCZ Vertex 3 SSD's for my database server. From recent reading in the list and other places, I have come to understand that OCZ Vertex 3 should not be used, Intel 510 uses a Marvel controller while Intel 320 had a nasty bug which has been rectified. So the list narrows down to only 510 and 320, unless I have understood the OCZ Vertex reviews incorrectly.\nThe server would itself be built along these lines: Dual CPU Xeon 5620, 32 or 48 GB RAM, 2 SAS 10K disk in RAID 1 for OS, 2 SAS 10K disk in RAID 1 for pg_xlog and 4 SSD in RAID 10 for data directory (overkill??). OS would be FreeBSD 8.2 (I would be tuning the sysctl variables). PG version would be 9.1 with replication set to another machine (Dual CPU Xeon 54xx, 32 GB RAM, 6 15K SAS 146 GB: 4 in RAID 10 for data and 2 in RAID 1 for OS + pg_xlog). The second machine hosts my current db , and there is not much of an issue with the performance. We need better redundancy now(current was to take a dump/backup every 12 hours), so the new machine.\nMy database itself is not very big, approx 40 GB as of now, and would not grow beyond 80 GB in the next year or two. There are some tables where insert & updates are fairly frequent. From what I could gather, we are not doing more than 300-400 tps at the moment, and the growth should not be very high in the short term.\nHope someone can give some pointers to which SSD I should go for at the moment.Amitabh",
"msg_date": "Mon, 24 Oct 2011 19:39:37 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Choosing between Intel 320,\n\tIntel 510 or OCZ Vertex 3 SSD for db server"
},
{
"msg_contents": "\nA few quick thoughts:\n\n1. 320 would be the only SSD I'd trust from your short-list. It's the \nonly one with proper protection from unexpected power loss.\n2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. \nA single SSD should be sufficient (will get you several thousand TPS on \npgbench for your DB size).\n3. Consider not using the magnetic disks at all (saves on space, power \nand the cost of the RAID controller for them).\n4. Consider using Intel 710 series rather than 320 (pay for them with \nthe money saved from #3 above). Those devices have much, much higher \nspecified endurance than the 320s and since your DB is quite small you \nonly need to buy one of them.\n\nOn 10/24/2011 8:09 AM, Amitabh Kant wrote:\n> Hello\n>\n> I need to choose between Intel 320 , Intel 510 and OCZ Vertex 3 SSD's \n> for my database server. From recent reading in the list and other \n> places, I have come to understand that OCZ Vertex 3 should not be \n> used, Intel 510 uses a Marvel controller while Intel 320 had a nasty \n> bug which has been rectified. So the list narrows down to only 510 and \n> 320, unless I have understood the OCZ Vertex reviews incorrectly.\n>\n> The server would itself be built along these lines: Dual CPU Xeon \n> 5620, 32 or 48 GB RAM, 2 SAS 10K disk in RAID 1 for OS, 2 SAS 10K disk \n> in RAID 1 for pg_xlog and 4 SSD in RAID 10 for data directory \n> (overkill??). OS would be FreeBSD 8.2 (I would be tuning the sysctl \n> variables). PG version would be 9.1 with replication set to another \n> machine (Dual CPU Xeon 54xx, 32 GB RAM, 6 15K SAS 146 GB: 4 in RAID 10 \n> for data and 2 in RAID 1 for OS + pg_xlog). The second machine hosts \n> my current db , and there is not much of an issue with the \n> performance. We need better redundancy now(current was to take a \n> dump/backup every 12 hours), so the new machine.\n>\n> My database itself is not very big, approx 40 GB as of now, and would \n> not grow beyond 80 GB in the next year or two. There are some tables \n> where insert & updates are fairly frequent. From what I could gather, \n> we are not doing more than 300-400 tps at the moment, and the growth \n> should not be very high in the short term.\n>\n> Hope someone can give some pointers to which SSD I should go for at \n> the moment.\n>\n>\n> Amitabh\n>\n\n",
"msg_date": "Mon, 24 Oct 2011 10:53:34 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex\n\t3 SSD for db server"
},
{
"msg_contents": "On Mon, Oct 24, 2011 at 11:53 AM, David Boreham <[email protected]> wrote:\n>\n> A few quick thoughts:\n>\n> 1. 320 would be the only SSD I'd trust from your short-list. It's the only\n> one with proper protection from unexpected power loss.\n\nyeah.\n\n> 2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A\n> single SSD should be sufficient (will get you several thousand TPS on\n> pgbench for your DB size).\n\nAlso, raid controllers interfere with TRIM.\n\n> 3. Consider not using the magnetic disks at all (saves on space, power and\n> the cost of the RAID controller for them).\n\nAgree. If one SSD did not deliver the tps, I'd consider buying more\nand optimizing with jbod/tablespaces -- really doubt that's necessary\nhowever. Maybe a single large slow magnetic disk is a good idea for\nretaining backups though.\n\n> 4. Consider using Intel 710 series rather than 320 (pay for them with the\n> money saved from #3 above). Those devices have much, much higher specified\n> endurance than the 320s and since your DB is quite small you only need to\n> buy one of them.\n\n710 is good idea if and only if you are worried about write durability\n(in which case it's a great idea).\n\nmerlin\n",
"msg_date": "Mon, 24 Oct 2011 16:31:31 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex 3\n\tSSD for db server"
},
{
"msg_contents": "On Mon, Oct 24, 2011 at 6:31 PM, Merlin Moncure <[email protected]> wrote:\n>> 2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A\n>> single SSD should be sufficient (will get you several thousand TPS on\n>> pgbench for your DB size).\n>\n> Also, raid controllers interfere with TRIM.\n\nWhat about redundancy?\n\nHow do you swap an about-to-die SSD?\n\nSoftware RAID-1?\n",
"msg_date": "Mon, 24 Oct 2011 19:47:01 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex 3\n\tSSD for db server"
},
{
"msg_contents": "On 10/24/2011 3:31 PM, Merlin Moncure wrote:\n>> 4. Consider using Intel 710 series rather than 320 (pay for them with the\n>> > money saved from #3 above). Those devices have much, much higher specified\n>> > endurance than the 320s and since your DB is quite small you only need to\n>> > buy one of them.\n> 710 is good idea if and only if you are worried about write durability\n> (in which case it's a great idea).\n\nI disagree with this (that it is the only reason to select 710 series).\nThe write endurance (specified at least) is orders of magnitude higher.\nDoing 100's of TPS constantly it is possible to burn through the 320's \nendurance\nlifetime in a year or two.\n\n\n",
"msg_date": "Mon, 24 Oct 2011 17:18:18 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex\n\t3 SSD for db server"
},
{
"msg_contents": "On 10/24/2011 4:47 PM, Claudio Freire wrote:\n> What about redundancy?\n>\n> How do you swap an about-to-die SSD?\n>\n> Software RAID-1?\n\nThe approach we take is that we use 710 series devices which have \npredicted reliability similar to all the other components in the \nmachine, therefore the unit of replacement is the entire machine. We \ndon't use trays for example (which saves quite a bit on data center \nspace). If I were running short endurance devices such as 320 series I \nwould be interested in replacing the drives before the machine itself is \nlikely to fail, but I'd do so by migrating the data and load to another \nmachine for the replacement to be done offline. Note that there are \nother operations procedures that need to be done and can not be done \nwithout downtime (e.g. OS upgrade), so some kind of plan to deliver \nservice while a single machine is down for a while will be needed \nregardless of the storage device situation.\n\n\n\n\n",
"msg_date": "Mon, 24 Oct 2011 20:37:05 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex\n\t3 SSD for db server"
},
{
"msg_contents": "Am 24.10.2011 16:09, schrieb Amitabh Kant:\n\n> while Intel 320 had a nasty bug which has been rectified\n\nBe careful with that Intel SSD.\nThis one is still very buggy.\nTake a look at the Intel forums \nhttp://communities.intel.com/community/tech/solidstate?view=discussions#/ about \nusers who are complaining that they�ve lost all their data.\nEven the firmware upgrade didn�t completely resolved the issues.\n\nRegards\nThilo\n",
"msg_date": "Tue, 25 Oct 2011 09:48:20 +0200",
"msg_from": "Thilo Raufeisen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex\n\t3 SSD for db server"
},
{
"msg_contents": "On Mon, Oct 24, 2011 at 11:37 PM, David Boreham <[email protected]> wrote:\n>> What about redundancy?\n>>\n>> How do you swap an about-to-die SSD?\n>>\n>> Software RAID-1?\n>\n> The approach we take is that we use 710 series devices which have predicted\n> reliability similar to all the other components in the machine, therefore\n> the unit of replacement is the entire machine. We don't use trays for\n> example (which saves quite a bit on data center space). If I were running\n> short endurance devices such as 320 series I would be interested in\n> replacing the drives before the machine itself is likely to fail, but I'd do\n> so by migrating the data and load to another machine for the replacement to\n> be done offline. Note that there are other operations procedures that need\n> to be done and can not be done without downtime (e.g. OS upgrade), so some\n> kind of plan to deliver service while a single machine is down for a while\n> will be needed regardless of the storage device situation.\n\nInteresting.\n\nBut what about unexpected failures. Faulty electronics, stuff like that?\n\nI really don't think a production server can work without at least raid-1.\n",
"msg_date": "Tue, 25 Oct 2011 11:55:02 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex 3\n\tSSD for db server"
},
{
"msg_contents": "On 10/25/2011 8:55 AM, Claudio Freire wrote:\n> But what about unexpected failures. Faulty electronics, stuff like \n> that? I really don't think a production server can work without at \n> least raid-1. \n\nSame approach : a server either works or it does not. The transition \nbetween working and not working may be expected or not expected. The \nsolution is the same : use another machine to perform the work the now \nnot working machine was doing. The big benefit of this approach is that \nyou now do not need to worry about specific kinds of failure or \nindividual components, including storage.\n\nIf it helps, think of this architecture as raid-1, but with the whole \nmachine being the \"drive\" rather than individual storage devices.\n\n\n\n\n",
"msg_date": "Tue, 25 Oct 2011 09:00:38 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex\n\t3 SSD for db server"
},
{
"msg_contents": "On Mon, Oct 24, 2011 at 10:23 PM, David Boreham <[email protected]>wrote:\n\n>\n> A few quick thoughts:\n>\n> 1. 320 would be the only SSD I'd trust from your short-list. It's the only\n> one with proper protection from unexpected power loss.\n> 2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A\n> single SSD should be sufficient (will get you several thousand TPS on\n> pgbench for your DB size).\n> 3. Consider not using the magnetic disks at all (saves on space, power and\n> the cost of the RAID controller for them).\n> 4. Consider using Intel 710 series rather than 320 (pay for them with the\n> money saved from #3 above). Those devices have much, much higher specified\n> endurance than the 320s and since your DB is quite small you only need to\n> buy one of them.\n>\n>\n> On 10/24/2011 8:09 AM, Amitabh Kant wrote:\n>\n>> Hello\n>>\n>> I need to choose between Intel 320 , Intel 510 and OCZ Vertex 3 SSD's for\n>> my database server. From recent reading in the list and other places, I have\n>> come to understand that OCZ Vertex 3 should not be used, Intel 510 uses a\n>> Marvel controller while Intel 320 had a nasty bug which has been rectified.\n>> So the list narrows down to only 510 and 320, unless I have understood the\n>> OCZ Vertex reviews incorrectly.\n>>\n>> The server would itself be built along these lines: Dual CPU Xeon 5620, 32\n>> or 48 GB RAM, 2 SAS 10K disk in RAID 1 for OS, 2 SAS 10K disk in RAID 1 for\n>> pg_xlog and 4 SSD in RAID 10 for data directory (overkill??). OS would be\n>> FreeBSD 8.2 (I would be tuning the sysctl variables). PG version would be\n>> 9.1 with replication set to another machine (Dual CPU Xeon 54xx, 32 GB RAM,\n>> 6 15K SAS 146 GB: 4 in RAID 10 for data and 2 in RAID 1 for OS + pg_xlog).\n>> The second machine hosts my current db , and there is not much of an issue\n>> with the performance. We need better redundancy now(current was to take a\n>> dump/backup every 12 hours), so the new machine.\n>>\n>> My database itself is not very big, approx 40 GB as of now, and would not\n>> grow beyond 80 GB in the next year or two. There are some tables where\n>> insert & updates are fairly frequent. From what I could gather, we are not\n>> doing more than 300-400 tps at the moment, and the growth should not be very\n>> high in the short term.\n>>\n>> Hope someone can give some pointers to which SSD I should go for at the\n>> moment.\n>>\n>>\n>> Amitabh\n>>\n>>\nSadly, 710 is not that easily available around here at the moment.\n\n\nAmitabh\n\nOn Mon, Oct 24, 2011 at 10:23 PM, David Boreham <[email protected]> wrote:\n\nA few quick thoughts:\n\n1. 320 would be the only SSD I'd trust from your short-list. It's the only one with proper protection from unexpected power loss.\n2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A single SSD should be sufficient (will get you several thousand TPS on pgbench for your DB size).\n3. Consider not using the magnetic disks at all (saves on space, power and the cost of the RAID controller for them).\n4. Consider using Intel 710 series rather than 320 (pay for them with the money saved from #3 above). Those devices have much, much higher specified endurance than the 320s and since your DB is quite small you only need to buy one of them.\n\n\nOn 10/24/2011 8:09 AM, Amitabh Kant wrote:\n\nHello\n\nI need to choose between Intel 320 , Intel 510 and OCZ Vertex 3 SSD's for my database server. 
From recent reading in the list and other places, I have come to understand that OCZ Vertex 3 should not be used, Intel 510 uses a Marvel controller while Intel 320 had a nasty bug which has been rectified. So the list narrows down to only 510 and 320, unless I have understood the OCZ Vertex reviews incorrectly.\n\nThe server would itself be built along these lines: Dual CPU Xeon 5620, 32 or 48 GB RAM, 2 SAS 10K disk in RAID 1 for OS, 2 SAS 10K disk in RAID 1 for pg_xlog and 4 SSD in RAID 10 for data directory (overkill??). OS would be FreeBSD 8.2 (I would be tuning the sysctl variables). PG version would be 9.1 with replication set to another machine (Dual CPU Xeon 54xx, 32 GB RAM, 6 15K SAS 146 GB: 4 in RAID 10 for data and 2 in RAID 1 for OS + pg_xlog). The second machine hosts my current db , and there is not much of an issue with the performance. We need better redundancy now(current was to take a dump/backup every 12 hours), so the new machine.\n\nMy database itself is not very big, approx 40 GB as of now, and would not grow beyond 80 GB in the next year or two. There are some tables where insert & updates are fairly frequent. From what I could gather, we are not doing more than 300-400 tps at the moment, and the growth should not be very high in the short term.\n\nHope someone can give some pointers to which SSD I should go for at the moment.\n\n\nAmitabhSadly, 710 is not that easily available around here at the moment.Amitabh",
"msg_date": "Fri, 28 Oct 2011 12:10:10 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex 3\n\tSSD for db server"
},
{
"msg_contents": "On 10/28/2011 12:40 AM, Amitabh Kant wrote:\n>\n>\n> Sadly, 710 is not that easily available around here at the moment.\n>\n\nAll three sizes are in stock at newegg.com, if you have a way to export \nfrom the US to your location.\n\n\n",
"msg_date": "Fri, 28 Oct 2011 07:22:04 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choosing between Intel 320, Intel 510 or OCZ Vertex\n\t3 SSD for db server"
}
] |
[
{
"msg_contents": "Hi guys,\n\nI have a query that runs a lot slower (~5 minutes) when I run it with\nthe default enable_nestloop=true and enable_nestloop=false (~10 secs).\nThe actual query is available here http://pastie.org/2754424 . It is a\nreporting query with many joins as the database is mainly used for\ntransaction processing.\n\nExplain analyse result for both cases:\n\nMachine A nestloop=true - http://explain.depesz.com/s/nkj0 (~5 minutes)\nMachine A nestloop=false - http://explain.depesz.com/s/wBM (~10 secs)\n\nOn a different slightly slower machine (Machine B), copying the\ndatabase over and leaving the default enable_nestloop=true it takes\n~20 secs.\n\nMachine B nestloop=true - http://explain.depesz.com/s/dYO (~ 20secs)\n\nFor all the cases above I ensured that I did an ANALYZE before running\nthe queries. There were no other queries running in parallel.\nBoth machines are running PostgreSQL 8.4.6. Machine B is using the\ndefault configuration provided by the package while for Machine A we\napplied the changes suggested by pgtune - http://pastie.org/2755113.\n\nMachine A is running Ubuntu 10.04 32 bit while Machine B is running\nUbuntu 8.04 32 bit.\n\nMachine A spec -\nIntel(R) Xeon(R) CPU X3450 @ 2.67GHz (8 Cores)\n8GB RAM (2 x 4GB)\n4 x 300GB 15k SAS\n\nMachine B spec -\nIntel(R) Pentium(R) D CPU 2.80GHz x 2\n2GB RAM\n1 x 80GB SATA HDD\n\n1. For Machine A, what can I do to make the planner choose the faster\nplan without setting enable_nestloop=false ?\n\n2. From the research I have done it seems to be that the reason the\nplanner is choosing the unoptimal query is because of the huge\ndifference between the estimated and actual rows. How can I get this\nfigure closer ?\n\n3. If I should rewrite the query, what should I change ?\n\n4. Why is it that the planner seems to be doing the right thing for\nMachine B without setting enable_nestloop=false. What should I be\ncomparing in both the machines to understand the difference in choice\nthat the planner made ?\n\nI have tried reading through the manual section \"55.1. Row Estimation\nExamples\", \"14.2. Statistics Used by the Planner\". I am still trying\nto fully apply the information to my specific case above and hence any\nhelp or pointers would be greatly appreciated.\n\nIn a last ditch effort we also tried upgrading Machine A to\nPostgresSQL 9.1 and that did not rectify the issue. We have reverted\nthe upgrade for now.\n\nThank you for your time.\n\n\n-- \nMohan\n",
"msg_date": "Tue, 25 Oct 2011 17:09:42 +0800",
"msg_from": "Mohanaraj Gopala Krishnan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query running a lot faster with enable_nestloop=false"
},
{
"msg_contents": "Hi Mohanaraj,\n\nOne thing you should certainly try is to increase the \ndefault_statistics_target value from 50 up to say about 1000 for the \nlarger tables. Large tables tend to go off on estimates with smaller \nvalues here.\n\nI guess I am not helping here, but apart from your query, those \nestimates on Machine B seem odd, coz they shoot up from 10k to the order \nof billions without any big change in row-count. Beats me.\n\n--\nRobins Tharakan\n\n> 1. For Machine A, what can I do to make the planner choose the faster\n> plan without setting enable_nestloop=false ?\n>\n> 2. From the research I have done it seems to be that the reason the\n> planner is choosing the unoptimal query is because of the huge\n> difference between the estimated and actual rows. How can I get this\n> figure closer ?\n>\n> 3. If I should rewrite the query, what should I change ?\n>\n> 4. Why is it that the planner seems to be doing the right thing for\n> Machine B without setting enable_nestloop=false. What should I be\n> comparing in both the machines to understand the difference in choice\n> that the planner made ?\n>\n> I have tried reading through the manual section \"55.1. Row Estimation\n> Examples\", \"14.2. Statistics Used by the Planner\". I am still trying\n> to fully apply the information to my specific case above and hence any\n> help or pointers would be greatly appreciated.\n>\n> In a last ditch effort we also tried upgrading Machine A to\n> PostgresSQL 9.1 and that did not rectify the issue. We have reverted\n> the upgrade for now.\n>\n> Thank you for your time.",
"msg_date": "Tue, 25 Oct 2011 15:11:27 +0530",
"msg_from": "Robins Tharakan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query running a lot faster with enable_nestloop=false"
},
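If you want to test that without editing postgresql.conf, a quick sketch ('mydb' is a placeholder for the database name) is:

    ALTER DATABASE mydb SET default_statistics_target = 1000;
    -- reconnect so the new setting takes effect, then:
    ANALYZE;   -- rebuild statistics for all tables with the larger target

Per-column ALTER TABLE ... ALTER COLUMN ... SET STATISTICS is the finer-grained alternative if only a few join columns are the problem.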
{
"msg_contents": "On Tue, Oct 25, 2011 at 5:09 AM, Mohanaraj Gopala Krishnan\n<[email protected]> wrote:\n> I have a query that runs a lot slower (~5 minutes) when I run it with\n> the default enable_nestloop=true and enable_nestloop=false (~10 secs).\n> The actual query is available here http://pastie.org/2754424 . It is a\n> reporting query with many joins as the database is mainly used for\n> transaction processing.\n>\n> Explain analyse result for both cases:\n>\n> Machine A nestloop=true - http://explain.depesz.com/s/nkj0 (~5 minutes)\n> Machine A nestloop=false - http://explain.depesz.com/s/wBM (~10 secs)\n\nA good start might be to refactor this:\n\nSeq Scan on retailer_categories retailer_category_leaf_nodes\n(cost=0.00..18.52 rows=1 width=16) (actual time=0.016..0.392 rows=194\nloops=1)\n Filter: ((tree_right - tree_left) = 1)\n\nAnd this:\n\nSeq Scan on product_categories product_category_leaf_nodes\n(cost=0.00..148.22 rows=2 width=32) (actual time=0.031..1.109 rows=383\nloops=1)\n Filter: ((tree_right - tree_left) = 1)\n\nThe query planner really has no idea what selectivity to assign to\nthat condition, and so it's basically guessing, and it's way off. You\ncould probably improve the estimate a lot by adding a column that\nstores the values of tree_right - tree_left and is updated manually or\nby triggers as you insert and update data. Then you could just check\ntree_left_right_difference = 1, which should get a much more accurate\nestimate, and hopefully therefore a better plan.\n\nYou've also got a fairly large estimation error here:\n\nIndex Scan using invoices_create_date_idx on invoices (cost=0.00..8.28\nrows=1 width=4) (actual time=0.055..0.305 rows=109 loops=1)\n Index Cond: ((create_date >= '2011-09-15'::date) AND (create_date\n<= '2011-09-15'::date))\n Filter: (status = 128)\n\nApparently, status 128 is considerably more common among rows in that\ndate range than it is overall. Unfortunately, it's not so easy to fix\nthis kind of estimation error, unless you can refactor your schema to\navoid needing to filter on both create_date and status at the same\ntime.\n\nIt might be worth using temporary tables here - factor out sections of\nthe query that are referenced multiple times, like the join between\nsales_order_items and invoices, and create a temporary table. ANALYZE\nit, and then use it to run the main query.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Nov 2011 11:35:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query running a lot faster with enable_nestloop=false"
}
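A sketch of the precomputed-column idea for one of the tables (the column, function and trigger names here are invented; the same pattern applies to retailer_categories):

    ALTER TABLE product_categories ADD COLUMN tree_width integer;
    UPDATE product_categories SET tree_width = tree_right - tree_left;

    CREATE FUNCTION product_categories_set_width() RETURNS trigger AS $$
    BEGIN
        NEW.tree_width := NEW.tree_right - NEW.tree_left;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER product_categories_width
        BEFORE INSERT OR UPDATE ON product_categories
        FOR EACH ROW EXECUTE PROCEDURE product_categories_set_width();

    ANALYZE product_categories;

The report query can then filter on tree_width = 1, which the planner can estimate from ordinary column statistics instead of guessing at (tree_right - tree_left) = 1.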
] |
[
{
"msg_contents": "I'm new to postgres and was wondering how to use EXPLAIN ANALYZE ....\n\nCan I use the output from ANALYZE EXPLAIN to estimate or predict the\nactual time\nit would take for a given query to return?\n\nI ask because I'm writing a typical web app that allows the user to\nbuild and submit a query\nto my DB. Since I don't know how \"simple\" or \"complex\" the user-\ngenerated queries will be\nI thought it might be possible to use the ANALYZE EXPLAIN output to\nmake a \"guestimation\"\nabout the expected return time of the query.\n\nI'd like to use this in my web-app to determine whether to run the\nquery in real-time (user waits\nfor results) or queue up the query (and notify the user once the query\nis finished). E.g.:\n if (the Total runtime\" reported by explain analyze is > n ms) {\n tell the user that his request was submitted for processing, and\nnotify the user once resuilts are available\n } else {\n run the query and wait for the results in real time.\n }\n\nThanks,\nAlan\n",
"msg_date": "Tue, 25 Oct 2011 07:12:55 -0700 (PDT)",
"msg_from": "alan <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to use explain analyze"
},
{
"msg_contents": "Hello, Alan\n\nOn 2011.10.25 17:12, alan wrote:\n> I'm new to postgres and was wondering how to use EXPLAIN ANALYZE ....\n>\n> Can I use the output from ANALYZE EXPLAIN to estimate or predict the\n> actual time\n> it would take for a given query to return?\nExplain analyze executes the query, so you get the actual execution time \n(not always accurate as some extra job must be done while executing the \nquery to compute the rows and loops).\n> I ask because I'm writing a typical web app that allows the user to\n> build and submit a query\nBe carefull about that idea - especially if a user can write custom \nqueries like \"delete from important_table\"\n> Thanks,\n> Alan\n>\n\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n",
"msg_date": "Wed, 26 Oct 2011 10:48:33 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to use explain analyze"
}
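One common way to limit the risk mentioned above (a sketch, not a complete answer to letting users submit arbitrary SQL) is to run the EXPLAIN ANALYZE inside a transaction that is always rolled back, so even a data-modifying statement leaves nothing behind:

    BEGIN;
    EXPLAIN ANALYZE <user query here>;
    ROLLBACK;

The statement still executes in full (it takes the real run time and holds the usual locks), so this protects the data but not the server from an expensive or malicious query.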
] |
[
{
"msg_contents": "Hi all,\n i am having any problems with performance of queries that uses CTE, can the\njoin on a CTE use the index of the original table?, suppose two simple tables:\n\nCREATE TABLE employee\n(\n emp_id integer NOT NULL,\n name character varying NOT NULL,\n CONSTRAINT employee_pkey PRIMARY KEY (emp_id )\n);\n\nCREATE TABLE employee_telephone\n(\n emp_id integer NOT NULL,\n phone_type character varying NOT NULL,\n phone_number character varying NOT NULL,\n CONSTRAINT employee_telephone_pkey PRIMARY KEY (emp_id , phone_type ),\n CONSTRAINT employee_telephone_emp_id_fkey FOREIGN KEY (emp_id)\n REFERENCES employee (emp_id)\n);\n\nand this two queries, i know this particular case don't need either a CTE or\nsubquery it is only an example:\n\nWITH phones AS (SELECT emp_id,\n phone_number\n ORDER BY emp_id,\n phone_type)\nSELECT emp.emp_id,\n emp.name,\n array_to_string(array_agg(phones.phone_number)) AS phones\n\nFROM employee AS emp\n JOIN phones ON phones.emp_id = emp.emp_id\n\nVS\n\nSELECT emp.emp_id,\n emp.name,\n\t array_to_string(array_agg(phones.phone_number)) AS phones\n\nFROM employee AS emp\n JOIN (SELECT emp_id,\n phone_number\n ORDER BY emp_id,\n phone_type) AS phones ON phones.emp_id = emp.emp_id\n\nWhy the CTE it is slower in many cases? does the CTE don't use the index for the\njoin and the subquery do? if the CTE it is usually slower where should be used\ninstead of a subquery other than recursive CTE's?\n\nRegards,\nMiguel Angel.\n",
"msg_date": "Tue, 25 Oct 2011 18:22:42 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "CTE vs Subquery"
},
{
"msg_contents": "Linos <[email protected]> writes:\n> i am having any problems with performance of queries that uses CTE, can the\n> join on a CTE use the index of the original table?\n\nCTEs act as optimization fences. This is a feature, not a bug. Use\nthem when you want to isolate the evaluation of a subquery.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Oct 2011 12:43:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CTE vs Subquery "
},
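A small illustration of the fence Tom describes, using a hypothetical table t with an indexed id column (the names are assumptions). In the PostgreSQL versions discussed in this thread a CTE is always materialized, so a WHERE clause on the outer query is not pushed down into it:

    -- Plain subquery: the planner flattens it, pushes "id = 42" inside
    -- and can use the index on t.id.
    EXPLAIN SELECT * FROM (SELECT id, val FROM t) AS sub WHERE id = 42;

    -- CTE: c is evaluated on its own and only then filtered, which typically
    -- shows up as a CTE Scan on top of a full scan of t.
    EXPLAIN WITH c AS (SELECT id, val FROM t)
    SELECT * FROM c WHERE id = 42;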
{
"msg_contents": "El 25/10/11 18:43, Tom Lane escribi�:\n> Linos <[email protected]> writes:\n>> i am having any problems with performance of queries that uses CTE, can the\n>> join on a CTE use the index of the original table?\n> \n> CTEs act as optimization fences. This is a feature, not a bug. Use\n> them when you want to isolate the evaluation of a subquery.\n> \n> \t\t\tregards, tom lane\n> \n\nThe truth it is that complex queries seems more readable using them (maybe a\npersonal preference no doubt).\n\nDo have other popular databases the same behavior? SQL Server or Oracle for example?\n\nRegards,\nMiguel �ngel.\n",
"msg_date": "Tue, 25 Oct 2011 18:47:53 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CTE vs Subquery"
},
{
"msg_contents": "On Tue, Oct 25, 2011 at 11:47 AM, Linos <[email protected]> wrote:\n> El 25/10/11 18:43, Tom Lane escribió:\n>> Linos <[email protected]> writes:\n>>> i am having any problems with performance of queries that uses CTE, can the\n>>> join on a CTE use the index of the original table?\n>>\n>> CTEs act as optimization fences. This is a feature, not a bug. Use\n>> them when you want to isolate the evaluation of a subquery.\n>>\n>> regards, tom lane\n>>\n>\n> The truth it is that complex queries seems more readable using them (maybe a\n> personal preference no doubt).\n>\n> Do have other popular databases the same behavior? SQL Server or Oracle for example?\n\nIn my experience, SQL server also materializes them -- basically CTE\nis short hand for 'CREATE TEMP TABLE foo AS SELECT...' then joining to\nfoo. If you want join behavior, use a join (by the way IIRC SQL\nServer is a lot more restrictive about placement of ORDER BY).\n\nI like CTE current behavior -- the main place I find it awkward is in\nuse of recursive queries because the CTE fence forces me to abstract\nthe recursion behind a function, not a view since pushing the view\nqual down into the CTE is pretty horrible:\n\npostgres=# explain select foo.id, (with bar as (select id from foo f\nwhere f.id = foo.id) select * from bar) from foo where foo.id = 11;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using foo_idx on foo (cost=0.00..16.57 rows=1 width=4)\n Index Cond: (id = 11)\n SubPlan 2\n -> CTE Scan on bar (cost=8.28..8.30 rows=1 width=4)\n CTE bar\n -> Index Scan using foo_idx on foo f (cost=0.00..8.28\nrows=1 width=4)\n Index Cond: (id = $0)\n(7 rows)\n\nwhereas for function you can inject your qual inside the CTE pretty\neasily. this is a different problem than the one you're describing\nthough. for the most part, CTE execution fence is a very good thing,\nsince it enforces restrictions that other features can leverage, for\nexample 'data modifying with' queries (by far my all time favorite\npostgres enhancement).\n\nmerlin\n",
"msg_date": "Tue, 25 Oct 2011 12:11:23 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CTE vs Subquery"
},
{
"msg_contents": "El 25/10/11 19:11, Merlin Moncure escribi�:\n> On Tue, Oct 25, 2011 at 11:47 AM, Linos <[email protected]> wrote:\n>> El 25/10/11 18:43, Tom Lane escribi�:\n>>> Linos <[email protected]> writes:\n>>>> i am having any problems with performance of queries that uses CTE, can the\n>>>> join on a CTE use the index of the original table?\n>>>\n>>> CTEs act as optimization fences. This is a feature, not a bug. Use\n>>> them when you want to isolate the evaluation of a subquery.\n>>>\n>>> regards, tom lane\n>>>\n>>\n>> The truth it is that complex queries seems more readable using them (maybe a\n>> personal preference no doubt).\n>>\n>> Do have other popular databases the same behavior? SQL Server or Oracle for example?\n> \n> In my experience, SQL server also materializes them -- basically CTE\n> is short hand for 'CREATE TEMP TABLE foo AS SELECT...' then joining to\n> foo. If you want join behavior, use a join (by the way IIRC SQL\n> Server is a lot more restrictive about placement of ORDER BY).\n> \n> I like CTE current behavior -- the main place I find it awkward is in\n> use of recursive queries because the CTE fence forces me to abstract\n> the recursion behind a function, not a view since pushing the view\n> qual down into the CTE is pretty horrible:\n> \n> postgres=# explain select foo.id, (with bar as (select id from foo f\n> where f.id = foo.id) select * from bar) from foo where foo.id = 11;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using foo_idx on foo (cost=0.00..16.57 rows=1 width=4)\n> Index Cond: (id = 11)\n> SubPlan 2\n> -> CTE Scan on bar (cost=8.28..8.30 rows=1 width=4)\n> CTE bar\n> -> Index Scan using foo_idx on foo f (cost=0.00..8.28\n> rows=1 width=4)\n> Index Cond: (id = $0)\n> (7 rows)\n> \n> whereas for function you can inject your qual inside the CTE pretty\n> easily. this is a different problem than the one you're describing\n> though. for the most part, CTE execution fence is a very good thing,\n> since it enforces restrictions that other features can leverage, for\n> example 'data modifying with' queries (by far my all time favorite\n> postgres enhancement).\n> \n> merlin\n> \n\nok, i get the idea, but i still don't understand what Tom says about isolate\nevaluation, apart from the performance and the readability, if i am not using\nwritable CTE or recursive CTE, what it is the difference in evaluation (about\nbeing isolate) of a subquery vs CTE with the same text inside.\n\nI have been using this form lately:\n\nWITH inv (SELECT item_id,\n SUM(units) AS units\n FROM invoices),\n\nquo AS (SELECT item_id,\n SUM(units) AS units\n FROM quotes)\n\nSELECT items.item_id,\n CASE WHEN inv.units IS NOT NULL THEN inv.units ELSE 0 END AS\nunits_invoices,\n CASE WHEN quo.units IS NOT NULL THEN quo.units ELSE 0 END AS\nunits_quotes\n\nFROM items\n LEFT JOIN inv ON inv.item_id = items.item_id\n LEFT JOIN quo ON quo.item_id = items.item_id\n\nWell this is oversimplified because i use much more tables and filter based on\ndates, but you get the idea, it seems that this type of query should use\nsubqueries, no?\n\nRegards,\nMiguel Angel.\n",
"msg_date": "Wed, 26 Oct 2011 11:00:48 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CTE vs Subquery"
},
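For what it's worth, a sketch of the subquery form of the query above (using the same simplified table names from the post); it adds the GROUP BY that the aggregates need and replaces the CASE WHEN ... IS NOT NULL pattern with COALESCE:

    SELECT items.item_id,
           COALESCE(inv.units, 0) AS units_invoices,
           COALESCE(quo.units, 0) AS units_quotes
      FROM items
      LEFT JOIN (SELECT item_id, SUM(units) AS units
                   FROM invoices
                  GROUP BY item_id) AS inv ON inv.item_id = items.item_id
      LEFT JOIN (SELECT item_id, SUM(units) AS units
                   FROM quotes
                  GROUP BY item_id) AS quo ON quo.item_id = items.item_id;

Either way, date filters are best written inside the aggregating subqueries so they can make use of the underlying tables' indexes.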
{
"msg_contents": "On Wed, Oct 26, 2011 at 4:00 AM, Linos <[email protected]> wrote:\n> El 25/10/11 19:11, Merlin Moncure escribió:\n>> On Tue, Oct 25, 2011 at 11:47 AM, Linos <[email protected]> wrote:\n>>> El 25/10/11 18:43, Tom Lane escribió:\n>>>> Linos <[email protected]> writes:\n>>>>> i am having any problems with performance of queries that uses CTE, can the\n>>>>> join on a CTE use the index of the original table?\n>>>>\n>>>> CTEs act as optimization fences. This is a feature, not a bug. Use\n>>>> them when you want to isolate the evaluation of a subquery.\n>>>>\n>>>> regards, tom lane\n>>>>\n>>>\n>>> The truth it is that complex queries seems more readable using them (maybe a\n>>> personal preference no doubt).\n>>>\n>>> Do have other popular databases the same behavior? SQL Server or Oracle for example?\n>>\n>> In my experience, SQL server also materializes them -- basically CTE\n>> is short hand for 'CREATE TEMP TABLE foo AS SELECT...' then joining to\n>> foo. If you want join behavior, use a join (by the way IIRC SQL\n>> Server is a lot more restrictive about placement of ORDER BY).\n>>\n>> I like CTE current behavior -- the main place I find it awkward is in\n>> use of recursive queries because the CTE fence forces me to abstract\n>> the recursion behind a function, not a view since pushing the view\n>> qual down into the CTE is pretty horrible:\n>>\n>> postgres=# explain select foo.id, (with bar as (select id from foo f\n>> where f.id = foo.id) select * from bar) from foo where foo.id = 11;\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------\n>> Index Scan using foo_idx on foo (cost=0.00..16.57 rows=1 width=4)\n>> Index Cond: (id = 11)\n>> SubPlan 2\n>> -> CTE Scan on bar (cost=8.28..8.30 rows=1 width=4)\n>> CTE bar\n>> -> Index Scan using foo_idx on foo f (cost=0.00..8.28\n>> rows=1 width=4)\n>> Index Cond: (id = $0)\n>> (7 rows)\n>>\n>> whereas for function you can inject your qual inside the CTE pretty\n>> easily. this is a different problem than the one you're describing\n>> though. for the most part, CTE execution fence is a very good thing,\n>> since it enforces restrictions that other features can leverage, for\n>> example 'data modifying with' queries (by far my all time favorite\n>> postgres enhancement).\n>>\n>> merlin\n>>\n>\n> ok, i get the idea, but i still don't understand what Tom says about isolate\n> evaluation, apart from the performance and the readability, if i am not using\n> writable CTE or recursive CTE, what it is the difference in evaluation (about\n> being isolate) of a subquery vs CTE with the same text inside.\n>\n> I have been using this form lately:\n>\n> WITH inv (SELECT item_id,\n> SUM(units) AS units\n> FROM invoices),\n>\n> quo AS (SELECT item_id,\n> SUM(units) AS units\n> FROM quotes)\n>\n> SELECT items.item_id,\n> CASE WHEN inv.units IS NOT NULL THEN inv.units ELSE 0 END AS\n> units_invoices,\n> CASE WHEN quo.units IS NOT NULL THEN quo.units ELSE 0 END AS\n> units_quotes\n>\n> FROM items\n> LEFT JOIN inv ON inv.item_id = items.item_id\n> LEFT JOIN quo ON quo.item_id = items.item_id\n>\n> Well this is oversimplified because i use much more tables and filter based on\n> dates, but you get the idea, it seems that this type of query should use\n> subqueries, no?\n\nThink about a query like this:\nwith foo as\n(\n select id, volatile_func() from bar\n)\nselect * from baz join foo using (id) join bla using(id) limit 10;\n\nHow many times does volatile_func() get called? 
How many times in the\nJOIN version? The answers are different...\n\nOne of the key features of CTEs is controlling how/when query\noperations occur so you can do things like control side effects and\nforce query plans that the server would not otherwise choose (although\nthis is typically an unoptimization).\n\nmerlin\n",
"msg_date": "Wed, 26 Oct 2011 07:23:18 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CTE vs Subquery"
},
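A way to observe this empirically, assuming small test tables bar and baz that both have an integer id column (the function and table names follow Merlin's example and are otherwise assumptions); a sequence is used as a call counter:

    CREATE SEQUENCE call_counter;

    CREATE FUNCTION volatile_func() RETURNS bigint AS $$
        SELECT nextval('call_counter');   -- side effect: bumps the counter
    $$ LANGUAGE sql VOLATILE;

    -- CTE form: the function runs behind the optimization fence
    WITH foo AS (SELECT id, volatile_func() AS v FROM bar)
    SELECT * FROM baz JOIN foo USING (id) LIMIT 10;
    SELECT currval('call_counter');   -- calls so far (errors if none were made)

    -- Inline subquery form: the planner is free to flatten and reorder
    SELECT * FROM baz JOIN (SELECT id, volatile_func() AS v FROM bar) AS foo USING (id) LIMIT 10;
    SELECT currval('call_counter');   -- the difference from the previous reading
                                      -- is the number of calls made by this form

How many calls each form makes depends on the plan that gets chosen, which is exactly the kind of side-effect control the fence provides.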
{
"msg_contents": "El 26/10/11 14:23, Merlin Moncure escribi�:\n> On Wed, Oct 26, 2011 at 4:00 AM, Linos <[email protected]> wrote:\n>> El 25/10/11 19:11, Merlin Moncure escribi�:\n>>> On Tue, Oct 25, 2011 at 11:47 AM, Linos <[email protected]> wrote:\n>>>> El 25/10/11 18:43, Tom Lane escribi�:\n>>>>> Linos <[email protected]> writes:\n>>>>>> i am having any problems with performance of queries that uses CTE, can the\n>>>>>> join on a CTE use the index of the original table?\n>>>>>\n>>>>> CTEs act as optimization fences. This is a feature, not a bug. Use\n>>>>> them when you want to isolate the evaluation of a subquery.\n>>>>>\n>>>>> regards, tom lane\n>>>>>\n>>>>\n>>>> The truth it is that complex queries seems more readable using them (maybe a\n>>>> personal preference no doubt).\n>>>>\n>>>> Do have other popular databases the same behavior? SQL Server or Oracle for example?\n>>>\n>>> In my experience, SQL server also materializes them -- basically CTE\n>>> is short hand for 'CREATE TEMP TABLE foo AS SELECT...' then joining to\n>>> foo. If you want join behavior, use a join (by the way IIRC SQL\n>>> Server is a lot more restrictive about placement of ORDER BY).\n>>>\n>>> I like CTE current behavior -- the main place I find it awkward is in\n>>> use of recursive queries because the CTE fence forces me to abstract\n>>> the recursion behind a function, not a view since pushing the view\n>>> qual down into the CTE is pretty horrible:\n>>>\n>>> postgres=# explain select foo.id, (with bar as (select id from foo f\n>>> where f.id = foo.id) select * from bar) from foo where foo.id = 11;\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------------------------\n>>> Index Scan using foo_idx on foo (cost=0.00..16.57 rows=1 width=4)\n>>> Index Cond: (id = 11)\n>>> SubPlan 2\n>>> -> CTE Scan on bar (cost=8.28..8.30 rows=1 width=4)\n>>> CTE bar\n>>> -> Index Scan using foo_idx on foo f (cost=0.00..8.28\n>>> rows=1 width=4)\n>>> Index Cond: (id = $0)\n>>> (7 rows)\n>>>\n>>> whereas for function you can inject your qual inside the CTE pretty\n>>> easily. this is a different problem than the one you're describing\n>>> though. 
for the most part, CTE execution fence is a very good thing,\n>>> since it enforces restrictions that other features can leverage, for\n>>> example 'data modifying with' queries (by far my all time favorite\n>>> postgres enhancement).\n>>>\n>>> merlin\n>>>\n>>\n>> ok, i get the idea, but i still don't understand what Tom says about isolate\n>> evaluation, apart from the performance and the readability, if i am not using\n>> writable CTE or recursive CTE, what it is the difference in evaluation (about\n>> being isolate) of a subquery vs CTE with the same text inside.\n>>\n>> I have been using this form lately:\n>>\n>> WITH inv (SELECT item_id,\n>> SUM(units) AS units\n>> FROM invoices),\n>>\n>> quo AS (SELECT item_id,\n>> SUM(units) AS units\n>> FROM quotes)\n>>\n>> SELECT items.item_id,\n>> CASE WHEN inv.units IS NOT NULL THEN inv.units ELSE 0 END AS\n>> units_invoices,\n>> CASE WHEN quo.units IS NOT NULL THEN quo.units ELSE 0 END AS\n>> units_quotes\n>>\n>> FROM items\n>> LEFT JOIN inv ON inv.item_id = items.item_id\n>> LEFT JOIN quo ON quo.item_id = items.item_id\n>>\n>> Well this is oversimplified because i use much more tables and filter based on\n>> dates, but you get the idea, it seems that this type of query should use\n>> subqueries, no?\n> \n> Think about a query like this:\n> with foo as\n> (\n> select id, volatile_func() from bar\n> )\n> select * from baz join foo using (id) join bla using(id) limit 10;\n> \n> How many times does volatile_func() get called? How many times in the\n> JOIN version? The answers are different...\n> \n> One of the key features of CTEs is controlling how/when query\n> operations occur so you can do things like control side effects and\n> force query plans that the server would not otherwise choose (although\n> this is typically an unoptimization).\n> \n> merlin\n\nOk, i think i understand now the difference, thanks Merlin.\n\nRegards,\nMiguel �ngel.\n",
"msg_date": "Wed, 26 Oct 2011 14:55:36 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CTE vs Subquery"
}
] |
[
{
"msg_contents": "Ive got a lengthy query, that doesn't finish in reasonable time (i.e.\n10min+). I suspect, that the query optimizer miscalculates the number of\nrows for part of the query.\n\nThe suspicious subquery:\n\nSELECT\n\t\t\t\tsv1.sid as sid\n\t\t\tFROM stud_vera sv1 \n\t\t\tLEFT JOIN stud_vera AS sv2 \n\t\t\tON (\n\t\t\t\tsv1.sid=sv2.sid \n\t\t\t\tAND sv2.veraid IN ( 109 )\n\t\t\t\t) \n\t\t\tWHERE sv1.veraid IN ( 3 ) \n\t\t\tAND sv2.veraid IS NULL\n\nThe whole query:\n\nSELECT count(DISTINCT sid) AS Anzahl FROM (SELECT sid\n FROM stud\n WHERE (\n status IN (1,2)\n AND length(vname) > 1\n AND length(nname) > 1\n\t\t\t\t\tAND length(email) > 1\n \n )) AS stud INNER JOIN (SELECT DISTINCT\n sid,\n cast(created AS date) AS tag,\n cast(veradate AS DATE) -\ncast(stud_vera.created AS DATE) AS tage,\n cast(\n floor(\n (cast(veradate AS date) -\ncast(stud_vera.created AS date))/7\n )\n AS integer) AS woche,\n cast(extract(week from stud_vera.created) AS\ninteger) AS kalenderwoche,\n to_char(stud_vera.created, 'YYYY/MM') AS\nmonat,\n to_char(stud_vera.abgemeldet, 'YYYY/MM') AS\nabmeldemonat,\n CASE\n WHEN newsletterid &1 = 1 THEN 'Flag\n1'\n WHEN newsletterid &2 = 2 THEN 'Flag\n2'\n WHEN newsletterid &4 = 2 THEN 'Flag\n3'\n WHEN newsletterid &8 = 8 THEN 'Flag\n4'\n WHEN newsletterid &16 = 16 THEN\n'Flag 5'\n WHEN newsletterid &32 = 32 THEN\n'Flag 6'\n WHEN newsletterid &64 = 64 THEN\n'Flag 7'\n WHEN newsletterid &128 = 128 THEN\n'Flag 8'\n END AS newsletterid\n FROM stud_vera\n JOIN vera USING (veraid)\n WHERE\n\n\n stud_vera.status > 0\n AND abgemeldet is null\n\n\n\n\n AND veraid IN (\n\n 3\n\n )) AS vera USING (sid) INNER JOIN (SELECT\n sid,\n age(date_trunc('MONTH', now()), date_trunc('MONTH',\nbis)) || '' AS months\n FROM\n study\n WHERE status = 1\n\n AND\n\n age(date_trunc('MONTH', now()),\ndate_trunc('MONTH', bis)) < interval '60 months'\n\n\n AND\n\n\n age(date_trunc('MONTH', bis),\ndate_trunc('MONTH', now())) <= interval '-24 months') AS examen USING (sid)\nINNER JOIN (SELECT\n sv1.sid as sid\n FROM stud_vera sv1\n LEFT JOIN stud_vera AS sv2\n ON (\n sv1.sid=sv2.sid\n AND sv2.veraid IN ( 109 )\n )\n WHERE sv1.veraid IN ( 3 )\n AND sv2.veraid IS NULL) AS veraAusschluss USING\n(sid)\n\n\nAnd the explain analyze for the sub query: http://explain.depesz.com/s/8d2 \nAnd the explain for the whole query: http://explain.depesz.com/s/GGf\n(explain analyze doesn't finish in reasonable time)\n\nWhat strucks me, is that in the sub query row numbers for sv1 and sv2 are\ncalculated quite accurat. But the resulting 4 rows after the final join is\nfar from reality. Shouldn't this be as minimum the number of rows for sv1\nminus number of rows for sv2?\n\nIf the optimizer knew, that the number is much bigger, it probably wouldn't\nchoose the nested loop in the next step, which I suspect is the reason for\nthe performance issues.\n\nWe're using postgres 9.0.4. 
It might be interesting, that the same query\nruns smoothly on our test system with postgres 8.3.7.\n\nThe tables:\n\n Tabelle »public.stud_vera«\n Spalte | Typ |\nAttribute\n-------------------+-----------------------------+--------------------------\n------------------------------------\n svid | integer | not null Vorgabewert\nnextval('stud_vera_svid_seq'::regclass)\n sid | integer |\n veraid | integer |\n modified | timestamp without time zone | not null Vorgabewert\nnow()\n created | timestamp without time zone | not null Vorgabewert\nnow()\n verastep1 | timestamp without time zone |\n kontoinhaber | character varying(64) |\n kontonum | character varying(32) |\n blz | character varying(32) |\n bank | character varying(64) |\n betrag | numeric(5,2) |\n verastep2 | timestamp without time zone |\n deferred | smallint |\n verastep3 | timestamp without time zone |\n status | smallint | not null\n label | character varying(128) |\n kanalid | integer |\n deferredtxt | character varying(256) |\n comment | character varying(64) |\n label2 | character varying(128) |\n dstid | integer |\n abgemeldet | date |\n bsid | integer | Vorgabewert 1\n newsletterid | integer |\n abmeldenewsletter | integer | not null Vorgabewert 0\n kanalcomment | character varying(128) |\nIndexe:\n \"stud_vera_pkey\" PRIMARY KEY, btree (svid)\n \"stud_vera_sid_veraid_idx\" UNIQUE, btree (sid, veraid)\n \"stud_vera_sid_idx\" btree (sid)\n \"stud_vera_veraid_idx\" btree (veraid)\nFremdschlüssel-Constraints:\n \"$1\" FOREIGN KEY (sid) REFERENCES stud(sid) ON UPDATE CASCADE ON DELETE\nCASCADE\n \"$2\" FOREIGN KEY (veraid) REFERENCES vera(veraid) ON UPDATE CASCADE ON\nDELETE SET NULL\n \"stud_vera_dstid\" FOREIGN KEY (dstid) REFERENCES datenschutztext(dstid)\nON UPDATE CASCADE ON DELETE SET NULL\nFremdschlüsselverweise von:\n TABLE \"eingang\" CONSTRAINT \"eingang_svid_fkey\" FOREIGN KEY (svid)\nREFERENCES stud_vera(svid) ON UPDATE CASCADE ON DELETE CASCADE\n\n Tabelle »public.stud«\n Spalte | Typ |\nAttribute\n---------------+-----------------------------+------------------------------\n--------------------------\n sid | integer | not null Vorgabewert\nnextval('stud_sid_seq'::regclass)\n login | character varying(64) | not null\n passwd | character varying(32) |\n modified | timestamp without time zone | not null Vorgabewert now()\n created | timestamp without time zone | not null Vorgabewert now()\n lastlogin | timestamp without time zone |\n mow | smallint |\n titel | character varying(32) |\n vname | character varying(32) |\n nname | character varying(32) |\n birth | date |\n einstieg | date |\n blacksheep | integer |\n studstatusid | integer |\n status | smallint | not null\n studmodified | timestamp without time zone |\n adminmodified | timestamp without time zone |\n comment | character varying(128) |\n dstid | integer |\n linkid | integer |\n beesiteuserid | integer |\n ypdate | date |\n email | character varying(64) |\n flag | smallint |\nIndexe:\n \"stud_pkey\" PRIMARY KEY, btree (sid)\n \"stud_login_idx\" UNIQUE, btree (login)\n \"stud_login_lower\" btree (lower(login::text))\n \"stud_nname_idx\" btree (lower(nname::text))\n \"stud_sid_status_idx\" btree (sid, status)\n \"stud_vname_idx\" btree (lower(vname::text))\nCheck-Constraints:\n \"birth\" CHECK (birth >= '1900-01-01'::date AND birth <=\n'1999-12-31'::date)\nFremdschlüssel-Constraints:\n \"stud_dstid_fkey\" FOREIGN KEY (dstid) REFERENCES datenschutztext(dstid)\nON UPDATE CASCADE ON DELETE SET NULL\nFremdschlüsselverweise von:\n TABLE \"stud_vera\" CONSTRAINT 
\"$1\" FOREIGN KEY (sid) REFERENCES stud(sid)\nON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_wdwolle\" CONSTRAINT \"$1\" FOREIGN KEY (sid) REFERENCES\nstud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_staats\" CONSTRAINT \"$1\" FOREIGN KEY (sid) REFERENCES\nstud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"locking\" CONSTRAINT \"$1\" FOREIGN KEY (sid) REFERENCES stud(sid)\nON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_ad\" CONSTRAINT \"$1\" FOREIGN KEY (sid) REFERENCES stud(sid)\nON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"gutschein\" CONSTRAINT \"$2\" FOREIGN KEY (sid) REFERENCES stud(sid)\nON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"kontakt\" CONSTRAINT \"kontakt_sid_fkey\" FOREIGN KEY (sid)\nREFERENCES stud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_ae\" CONSTRAINT \"stud_ae_sid_fkey\" FOREIGN KEY (sid)\nREFERENCES stud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_berufsfeld\" CONSTRAINT \"stud_berufsfeld_fk_sid\" FOREIGN KEY\n(sid) REFERENCES stud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_einstiegsbereich\" CONSTRAINT\n\"stud_einstiegsbereich_fkey_sid\" FOREIGN KEY (sid) REFERENCES stud(sid) ON\nUPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_vakanzen\" CONSTRAINT \"stud_vakanzen_sid_fkey\" FOREIGN KEY\n(sid) REFERENCES stud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_vposition\" CONSTRAINT \"stud_vposition_sid_fkey\" FOREIGN KEY\n(sid) REFERENCES stud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"study\" CONSTRAINT \"study_sid_fkey\" FOREIGN KEY (sid) REFERENCES\nstud(sid) ON UPDATE CASCADE ON DELETE CASCADE\n\n Tabelle »public.vera«\n Spalte | Typ |\nAttribute\n--------------+-----------------------------+-------------------------------\n----------------------------\n veraid | integer | not null Vorgabewert\nnextval('vera_veraid_seq'::regclass)\n vera | character varying(64) |\n verakurz | character varying(32) |\n vera_e | character varying(8) |\n vera_e2 | character varying(8) |\n veratyp | smallint |\n veradate | date |\n veradauer | integer |\n veraort | character varying(32) |\n veraland | character varying(32) |\n veracomment | character varying(255) |\n active | smallint |\n status | smallint |\n landid | integer |\n spontandate | date |\n spontandate2 | date |\n dstid | integer |\n xmlconf | character varying(128) |\n verakurz2 | character varying(32) |\n closingdate | timestamp without time zone |\n url | character varying(128) |\n urltext | character varying(32) | Vorgabewert 'Zum\nEvent'::character varying\n etflag | integer |\nIndexe:\n \"vera_pkey\" PRIMARY KEY, btree (veraid)\n \"vera_verakurz_unique\" UNIQUE, btree (verakurz)\nFremdschlüssel-Constraints:\n \"vera_dstid\" FOREIGN KEY (dstid) REFERENCES datenschutztext(dstid) ON\nUPDATE CASCADE ON DELETE SET NULL\nFremdschlüsselverweise von:\n TABLE \"vera_reihe\" CONSTRAINT \"$1\" FOREIGN KEY (veraid) REFERENCES\nvera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"gutschein\" CONSTRAINT \"$1\" FOREIGN KEY (veraid) REFERENCES\nvera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"prod_vera\" CONSTRAINT \"$2\" FOREIGN KEY (veraid) REFERENCES\nvera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"stud_vera\" CONSTRAINT \"$2\" FOREIGN KEY (veraid) REFERENCES\nvera(veraid) ON UPDATE CASCADE ON DELETE SET NULL\n TABLE \"auswahlevent\" CONSTRAINT \"auswahlevent_veraid_fkey\" FOREIGN KEY\n(veraid) REFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"inside\" CONSTRAINT \"fk_inside_veraid\" FOREIGN KEY 
(veraid)\nREFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"fprofil\" CONSTRAINT \"fprofil_veraid_fkey\" FOREIGN KEY (veraid)\nREFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"mailversand\" CONSTRAINT \"mailversand_veraid_fkey\" FOREIGN KEY\n(veraid) REFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"mailvorlage\" CONSTRAINT \"mailvorlage_veraid_fkey\" FOREIGN KEY\n(veraid) REFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"raum\" CONSTRAINT \"raum_veraid_fkey\" FOREIGN KEY (veraid)\nREFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"schiene\" CONSTRAINT \"schiene_veraid_fkey\" FOREIGN KEY (veraid)\nREFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"vakanzen\" CONSTRAINT \"vakanzen_veraid_fkey\" FOREIGN KEY (veraid)\nREFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"vposition\" CONSTRAINT \"vposition_veraid_fkey\" FOREIGN KEY\n(veraid) REFERENCES vera(veraid) ON UPDATE CASCADE ON DELETE CASCADE\n\n Tabelle »public.study«\n Spalte | Typ |\nAttribute\n---------------------+--------------------------------+---------------------\n----------------------------------------\n studyid | integer | not null Vorgabewert\nnextval('study_studyid_seq'::regclass)\n sid | integer | not null\n modified | timestamp(0) without time zone | not null Vorgabewert\nnow()\n created | timestamp(0) without time zone | not null Vorgabewert\nnow()\n abschlusstypid | integer |\n uniid | integer |\n von | date |\n bis | date |\n unisonstige | character varying(128) |\n unilandid | integer |\n ausrichtungsonstige | character varying(64) |\n vertiefungsonstige | character varying(64) |\n qnoteid | integer |\n status | smallint | not null Vorgabewert\n1\nIndexe:\n \"study_pkey\" PRIMARY KEY, btree (studyid)\n \"study_sid_idx\" btree (sid)\nFremdschlüssel-Constraints:\n \"study_sid_fkey\" FOREIGN KEY (sid) REFERENCES stud(sid) ON UPDATE\nCASCADE ON DELETE CASCADE\nFremdschlüsselverweise von:\n TABLE \"study_ausrichtung\" CONSTRAINT \"study_ausrichtung_studyid_fkey\"\nFOREIGN KEY (studyid) REFERENCES study(studyid) ON UPDATE CASCADE ON DELETE\nCASCADE\n TABLE \"study_vertiefung\" CONSTRAINT \"study_vertiefung_fkey1\" FOREIGN KEY\n(studyid) REFERENCES study(studyid) ON UPDATE CASCADE ON DELETE CASCADE\n\nMany thanks\n\n-- \nJens Reufsteck\n\n\n",
"msg_date": "Wed, 26 Oct 2011 12:23:54 +0200",
"msg_from": "\"Jens Reufsteck\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anti join miscalculates row number?"
},
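As an aside, the suspicious subquery's LEFT JOIN ... IS NULL pattern can also be written with NOT EXISTS, which the planner likewise treats as an anti-join (same tables and columns as in the post); comparing the estimates of the two formulations can help narrow down where the row-count miscalculation happens:

    SELECT sv1.sid AS sid
      FROM stud_vera sv1
     WHERE sv1.veraid IN (3)
       AND NOT EXISTS (SELECT 1
                         FROM stud_vera sv2
                        WHERE sv2.sid = sv1.sid
                          AND sv2.veraid IN (109));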
{
"msg_contents": "\"Jens Reufsteck\" <[email protected]> writes:\n> I�ve got a lengthy query, that doesn't finish in reasonable time (i.e.\n> 10min+). I suspect, that the query optimizer miscalculates the number of\n> rows for part of the query.\n> ...\n> We're using postgres 9.0.4.\n\nTry 9.0.5. There was a bug fixed in this area.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Oct 2011 10:45:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anti join miscalculates row number? "
},
{
"msg_contents": "Hello, Jens\n\nOn 2011.10.26 13:23, Jens Reufsteck wrote:\n> I’ve got a lengthy query, that doesn't finish in reasonable time (i.e.\n> 10min+). I suspect, that the query optimizer miscalculates the number of\n> rows for part of the query.\n>\nI'm sorry for the silly question, but have you tried analyze on your \ndatabase or at least the tables involved in the query ?\n\n\n-- \nJulius Tuskenis\nHead of the programming department\nUAB nSoft\nmob. +37068233050\n",
"msg_date": "Thu, 27 Oct 2011 10:03:30 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anti join miscalculates row number?"
},
{
"msg_contents": "Just tested on 9.0.5, seems ok. Explain for the suspected sub query is now in line with Analyze.\n\nThanks\nJens\n\n\n\"Jens Reufsteck\" <[email protected]> writes:\n> I’ve got a lengthy query, that doesn't finish in reasonable time (i.e.\n> 10min+). I suspect, that the query optimizer miscalculates the number of\n> rows for part of the query.\n> ...\n> We're using postgres 9.0.4.\n\nTry 9.0.5. There was a bug fixed in this area.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Oct 2011 16:44:24 +0000",
"msg_from": "Jens Reufsteck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anti join miscalculates row number? "
}
] |
[
{
"msg_contents": "\nIs there any known problem with slow cursors in PostgreSQL 8.4.5?\n\nI have a following query, which is slow (on my database it takes 11 seconds to execute),\nprobably should be rewritten, but it doesn't matter here. The problem is, that in cursor,\neach fetch takes much longer (even few minutes!), while only the first one should be\nslow. Am I doing something wrong?\n\nExplain analyze: http://explain.depesz.com/s/TDw\n\nMicrosoft Windows XP [Wersja 5.1.2600]\n(C) Copyright 1985-2001 Microsoft Corp.\n\nd:\\Temp>psql dbupdater postgres\npsql (8.4.5)\nWARNING: Console code page (852) differs from Windows code page (1250)\n 8-bit characters might not work correctly. See psql reference\n page \"Notes for Windows users\" for details.\nType \"help\" for help.\n\ndbupdater=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 8.4.5, compiled by Visual C++ build 1400, 32-bit\n(1 row)\n\n\ndbupdater=# SELECT col.column_name AS nazwa_kolumny, kc.constraint_type,\nkc.fk_table_name, kc.fk_column_name\ndbupdater-# FROM information_schema.columns col\ndbupdater-# LEFT OUTER JOIN (SELECT kcu.column_name, tc.constraint_type, ccu.table_name\nAS fk_table_name, ccu.column_name AS fk_column_name\ndbupdater(# FROM information_schema.table_constraints tc,\ndbupdater(# information_schema.key_column_usage kcu,\ndbupdater(# information_schema.constraint_column_usage AS ccu\ndbupdater(# where tc.table_name = 'bdt_skarpa'\ndbupdater(# AND tc.table_schema = 'prod1'\ndbupdater(# AND tc.constraint_schema = tc.table_schema\ndbupdater(# AND tc.constraint_type IN ('PRIMARY KEY','FOREIGN KEY')\ndbupdater(# AND kcu.constraint_name = tc.constraint_name\ndbupdater(# AND kcu.constraint_schema = tc.constraint_schema\ndbupdater(# AND ccu.constraint_name = tc.constraint_name\ndbupdater(# AND ccu.constraint_schema = tc.table_schema\ndbupdater(# AND ccu.table_schema = tc.table_schema) AS kc ON col.column_name =\nkc.column_name\ndbupdater-# WHERE col.table_name = 'bdt_skarpa'\ndbupdater-# AND col.table_schema = 'prod1';\n nazwa_kolumny | constraint_type | fk_table_name |\nfk_column_name\n--------------------------------+-----------------+------------------------------+-------\n---------\n id | PRIMARY KEY | bdt_skarpa | id\n href | | |\n id_bufora_insert | | |\n id_bufora_update | | |\n id_techniczny_obiektu | | |\n iip_local_id | | |\n iip_name_space | | |\n iip_version_id | | |\n informacja_dodatkowa | | |\n kat_dokladnosci_geom_fk | FOREIGN KEY | bdt_sl_kat_dokladnosci | id\n omg_kat_istnienia_fk | FOREIGN KEY | omg_sl_kat_istnienia | id\n omg_koniec_zycia_obiektu | | |\n omg_rodzaj_repr_geom_fk | FOREIGN KEY | omg_sl_rodzaj_repr_geom | id\n omg_start_zycia_obiektu | | |\n omg_start_zycia_wersji_obiektu | | |\n omg_uwagi | | |\n omg_uzytkownik | | |\n omg_zrodlo_danych_atr_fk | FOREIGN KEY | omg_sl_zrodla_danych | id\n omg_zrodlo_danych_geom_fk | FOREIGN KEY | omg_sl_zrodla_danych | id\n omp_geometria | | |\n omp_koniec_obiekt | | |\n omp_koniec_wersja_obiekt | | |\n omp_nazwa | | |\n omp_referencja_fk | FOREIGN KEY | omp_powiazanie_obiektow_join | id\n omp_rodzaj_geometrii_id | FOREIGN KEY | omg_sl_rodzaj_geometrii | id\n omp_start_obiekt | | |\n omp_start_wersja_obiekt | | |\n(27 rows)\n\n\ndbupdater=# \\i cursor_test.sql\nCREATE FUNCTION\ndbupdater=# select cursor_test();\nNOTICE: begin 2011-10-26 14:23:40.56+02\nNOTICE: in loop id 2011-10-26 14:23:49.828+02\nNOTICE: in loop href 2011-10-26 14:26:36.466+02\nNOTICE: in loop id_bufora_insert 2011-10-26 
14:28:04.6+02\nNOTICE: in loop id_bufora_update 2011-10-26 14:29:33.108+02\nNOTICE: in loop id_techniczny_obiektu 2011-10-26 14:31:00.66+02\nNOTICE: in loop iip_local_id 2011-10-26 14:32:27.741+02\nNOTICE: in loop iip_name_space 2011-10-26 14:33:58.383+02\nNOTICE: in loop iip_version_id 2011-10-26 14:35:43.324+02\n...\n\ncreate or replace function cursor_test() returns void as\n$$\ndeclare\n cur cursor for SELECT col.column_name AS nazwa_kolumny, kc.constraint_type,\nkc.fk_table_name, kc.fk_column_name\nFROM information_schema.columns col\nLEFT OUTER JOIN (SELECT kcu.column_name, tc.constraint_type, ccu.table_name AS\nfk_table_name, ccu.column_name AS fk_column_name \nFROM information_schema.table_constraints tc,\ninformation_schema.key_column_usage kcu,\ninformation_schema.constraint_column_usage AS ccu \nwhere tc.table_name = 'bdt_skarpa'\nAND tc.table_schema = 'prod1'\nAND tc.constraint_schema = tc.table_schema\nAND tc.constraint_type IN ('PRIMARY KEY','FOREIGN KEY')\nAND kcu.constraint_name = tc.constraint_name\nAND kcu.constraint_schema = tc.constraint_schema\nAND ccu.constraint_name = tc.constraint_name\nAND ccu.constraint_schema = tc.table_schema\nAND ccu.table_schema = tc.table_schema) AS kc ON col.column_name = kc.column_name\nWHERE col.table_name = 'bdt_skarpa'\nAND col.table_schema = 'prod1';\n rec record;\nbegin\n raise notice 'begin %', clock_timestamp();\n for rec in cur loop\n raise notice 'in loop % %', rec.nazwa_kolumny, clock_timestamp();\n end loop;\n raise notice 'end %', clock_timestamp();\nend;\n$$ language plpgsql;\n\n-- \n____________________________________________________________________\nCezariusz Marek mob: +48 608 646 494\nhttp://www.comarch.com/ tel: +48 33 815 0734\n____________________________________________________________________\n\n\n",
"msg_date": "Wed, 26 Oct 2011 14:43:08 +0200",
"msg_from": "\"Cezariusz Marek\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow cursor"
},
{
"msg_contents": "Do you really need to query the catalogues ? That on its own is not a\ngood idea if you want something to run fast and frequently.\n\n\n-- \nGJ\n",
"msg_date": "Wed, 26 Oct 2011 13:49:17 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow cursor"
},
{
"msg_contents": "Gregg Jaskiewicz wrote:\n> Do you really need to query the catalogues ? That on its own is not a\n> good idea if you want something to run fast and frequently.\n\nI know, but I've used it just to show the problem with cursors. I have the same problem\nwith other slow queries, which execute within x seconds in psql or pgAdmin, but for loops\nin functions it takes at least x seconds to fetch each row. So please just imagine, that\nit's an other slow query.\n\n-- \n____________________________________________________________________\nCezariusz Marek mob: +48 608 646 494\nhttp://www.comarch.com/ tel: +48 33 815 0734\n____________________________________________________________________\n\n",
"msg_date": "Wed, 26 Oct 2011 14:56:51 +0200",
"msg_from": "\"Cezariusz Marek\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow cursor"
},
{
"msg_contents": "Hi,\n\nOn Wednesday 26 Oct 2011 14:43:08 Cezariusz Marek wrote:\n> Is there any known problem with slow cursors in PostgreSQL 8.4.5?\n> \n> I have a following query, which is slow (on my database it takes 11 seconds\n> to execute), probably should be rewritten, but it doesn't matter here. The\n> problem is, that in cursor, each fetch takes much longer (even few\n> minutes!), while only the first one should be slow. Am I doing something\n> wrong?\nDoes the problem persist if you play around with cursor_tuple_fraction?\n\nAndres\n",
"msg_date": "Wed, 26 Oct 2011 15:10:46 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow cursor"
}
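For reference, cursor_tuple_fraction (default 0.1) tells the planner what fraction of a cursor's rows it should expect to be fetched; raising it to 1.0 makes cursor plans match plain-query plans. A sketch of the experiment Andres suggests, re-using the function from the post:

    SET cursor_tuple_fraction = 1.0;
    SELECT cursor_test();        -- compare the NOTICE timestamps with the earlier run
    RESET cursor_tuple_fraction;

Whether this also changes the plan used inside the PL/pgSQL FOR loop is worth verifying on the query that is actually slow.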
] |
[
{
"msg_contents": "We are running into a significant performance issue with\n\"PostgreSQL 9.0.4 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real \n(Debian 4.4.5-8) 4.4.5, 64-bit\"\n(the version string pqAdmin displays).\n\nA fairly complex insert query on an empty destination table will run for \nan indefinite amount of time (we waited literally days for the query to \ncomplete). This does not happen every time we run the query but often. \nNow ordinarily I'd assume we did something wrong with our indices or \nquery, but the execution plan looks sane and, more tellingly, we have:\n- no CPU load\n- no network traffic\n- no disk I/O\n- no other load on the server except this single query\nand strace displaying a seemingly endless list of lseek calls.\n\nSo my assumption is that we are not running into bad Big-O() runtime \nbehavior but rather into some locking problem.\n\nAny ideas what might cause this? Workarounds we could try?\n\nthank you,\n\n S�ren\n",
"msg_date": "Wed, 26 Oct 2011 17:47:08 +0200",
"msg_from": "=?ISO-8859-1?Q?S=F6ren_Meyer-Eppler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "What does 'select * from pg_stat_activity' say, more precisely - the\n\"waiting\" column.\n",
"msg_date": "Thu, 27 Oct 2011 08:42:24 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
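For reference, a sketch of that check for the 9.0/9.1 series discussed in this thread (the column names changed in later releases, where procpid/current_query became pid/query):

    SELECT procpid, usename, waiting,
           now() - query_start AS runtime,
           current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY runtime DESC;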
{
"msg_contents": "On 2011-10-27 09:42, Gregg Jaskiewicz wrote:\n> What does 'select * from pg_stat_activity' say, more precisely - the\n> \"waiting\" column.\n\nWaiting is \"false\" for all rows. If I use pgadmin to lock at the server \nstatus all locks have been granted for hours.\n\nAlthough the process does in fact use CPU this time, strace still gives \na seemingly endless list of lseek calls.\n",
"msg_date": "Thu, 27 Oct 2011 14:03:46 +0200",
"msg_from": "=?UTF-8?B?U8O2cmVuIE1leWVyLUVwcGxlcg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "On Thu, Oct 27, 2011 at 4:42 AM, Gregg Jaskiewicz <[email protected]> wrote:\n> What does 'select * from pg_stat_activity' say, more precisely - the\n> \"waiting\" column.\n\nWhether that particular process is waiting for it to be granted some\nkind of database-level lock.\n",
"msg_date": "Thu, 27 Oct 2011 11:54:57 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "=?ISO-8859-1?Q?S=F6ren_Meyer-Eppler?= <[email protected]> writes:\n> A fairly complex insert query on an empty destination table will run for \n> an indefinite amount of time (we waited literally days for the query to \n> complete). This does not happen every time we run the query but often. \n> Now ordinarily I'd assume we did something wrong with our indices or \n> query, but the execution plan looks sane and, more tellingly, we have:\n> - no CPU load\n> - no network traffic\n> - no disk I/O\n> - no other load on the server except this single query\n> and strace displaying a seemingly endless list of lseek calls.\n\n> So my assumption is that we are not running into bad Big-O() runtime \n> behavior but rather into some locking problem.\n\nIf it were blocked on a lock, it wouldn't be doing lseeks().\n\nThe most obvious explanation for a backend that's doing repeated lseeks\nand nothing else is that it's repeatedly doing seqscans on a table\nthat's fully cached in shared buffers. I'd wonder about queries\nembedded in often-executed plpgsql functions, for instance. Can you\nidentify which table the lseeks are issued against?\n\n(Now, having said that, I don't see how that type of theory explains no\nCPU load. But you're really going to need to provide more info before\nanyone can explain it, and finding out what the lseeks are on would be\none good step.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Oct 2011 20:32:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek? "
},
{
"msg_contents": "On 10/26/11 8:47 AM, Sören Meyer-Eppler wrote:\n> A fairly complex insert query on an empty destination table will run for\n> an indefinite amount of time (we waited literally days for the query to\n> complete). This does not happen every time we run the query but often.\n> Now ordinarily I'd assume we did something wrong with our indices or\n> query, but the execution plan looks sane and, more tellingly, we have:\n> - no CPU load\n> - no network traffic\n> - no disk I/O\n> - no other load on the server except this single query\n> and strace displaying a seemingly endless list of lseek calls.\n\nHmmmm. If you were on Solaris or OSX, I'd say you'd hit one of their\nI/O bugs which can cause endless lseeks for individual disk pages.\nHowever, I've never seen that particular pattern on Linux (other I/O\nbugs, but not that one).\n\nQuestions:\n(1) is it *only* that query?\n(2) is there some reason you might have excessive disk fragmentation,\nlike running on a VM?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 27 Oct 2011 19:42:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "> (1) is it *only* that query?\n\nNo. There seem to be one or two others exhibiting similarly bad performance.\n\n> (2) is there some reason you might have excessive disk fragmentation,\n> like running on a VM?\n\nNo VM. The database is the only thing running on the server. Filesystem \nis XFS.\n",
"msg_date": "Mon, 31 Oct 2011 14:56:32 +0100",
"msg_from": "=?UTF-8?B?U8O2cmVuIE1leWVyLUVwcGxlcg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "> embedded in often-executed plpgsql functions, for instance. Can you\n> identify which table the lseeks are issued against?\n\nI wouldn't know how? I'm just using htop and \"s\" on the postgres process \nto find these...\n\n> (Now, having said that, I don't see how that type of theory explains no\n> CPU load.\n\nMy bad sorry. I was relaying information from the guy administering the \nserver. It turns out that \"no CPU load\" really meant: only one of the \ncores is being utilized. On a 16 core machine that looks like \"no load\" \nbut of course for the individual query still means 100%.\n\n> But you're really going to need to provide more info before\n> anyone can explain it, and finding out what the lseeks are on would be\n> one good step.)\n\nI have attached two of the offending execution plans. Anything obviously \nwrong with them?\n\nthank you for looking into it!\n\n S�ren",
"msg_date": "Mon, 31 Oct 2011 15:09:37 +0100",
"msg_from": "=?ISO-8859-1?Q?S=F6ren_Meyer-Eppler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "=?ISO-8859-1?Q?S=F6ren_Meyer-Eppler?= <[email protected]> writes:\n>> embedded in often-executed plpgsql functions, for instance. Can you\n>> identify which table the lseeks are issued against?\n\n> I wouldn't know how? I'm just using htop and \"s\" on the postgres process \n> to find these...\n\nNote the file number appearing in the lseeks, run \"lsof -p PID\" against\nthe backend process to discover the actual filename of that file, then\nlook for a match to the filename in pg_class.relfilenode.\n\n> I have attached two of the offending execution plans. Anything obviously \n> wrong with them?\n\nWhat makes you say these are \"offending execution plans\"? Both of them\nseem to be completing just fine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Oct 2011 11:28:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek? "
},
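A sketch of the mapping Tom describes, run in the same database the backend is connected to; 44859 here is only a placeholder for the number taken from the file name seen in lsof (e.g. base/16387/44859):

    SELECT relname, relkind
      FROM pg_class
     WHERE relfilenode = 44859;

    -- and the reverse direction, from a known relation to its file
    -- (available in 9.0 and later):
    SELECT pg_relation_filepath('some_table');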
{
"msg_contents": "> Note the file number appearing in the lseeks, run \"lsof -p PID\" against\n> the backend process to discover the actual filename of that file, then\n> look for a match to the filename in pg_class.relfilenode.\n\nWill do. I need to reproduce the error first which may take a while.\n\n>> I have attached two of the offending execution plans. Anything obviously\n>> wrong with them?\n>\n> What makes you say these are \"offending execution plans\"? Both of them\n> seem to be completing just fine.\n\nThat's exactly the point. The plan looks good, the execution times will \nusually be good. But sometimes, for no immediately obvious reasons, \nthey'll run for hours.\nI know these are the offending queries because this is what\n\"select now() - query_start, current_query from pg_stat_activity\"\nwill tell me.\n\nWe execute these queries from a Java program via JDBC \n(postgresql-9.1-901.jdbc4.jar). But since little data is being \ntransferred between the Java client and the database I hope that cannot \nbe the issue.\n\nAnything else I should look out for/log during the next test run?\n\n S�ren\n",
"msg_date": "Mon, 31 Oct 2011 17:17:15 +0100",
"msg_from": "=?ISO-8859-1?Q?S=F6ren_Meyer-Eppler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "Hi everyone,\n\nOn 28/10/2011 02:32, Tom Lane wrote:\n> =?ISO-8859-1?Q?S=F6ren_Meyer-Eppler?= <[email protected]> writes:\n>> A fairly complex insert query on an empty destination table will run for \n>> an indefinite amount of time (we waited literally days for the query to \n>> complete). This does not happen every time we run the query but often. \n>> Now ordinarily I'd assume we did something wrong with our indices or \n>> query, but the execution plan looks sane and, more tellingly, we have:\n>> - no CPU load\n>> - no network traffic\n>> - no disk I/O\n>> - no other load on the server except this single query\n>> and strace displaying a seemingly endless list of lseek calls.\n> \n>> So my assumption is that we are not running into bad Big-O() runtime \n>> behavior but rather into some locking problem.\n> \n> If it were blocked on a lock, it wouldn't be doing lseeks().\n> \n> The most obvious explanation for a backend that's doing repeated lseeks\n> and nothing else is that it's repeatedly doing seqscans on a table\n> that's fully cached in shared buffers. I'd wonder about queries\n> embedded in often-executed plpgsql functions, for instance. Can you\n> identify which table the lseeks are issued against?\n\nI'm resuming the old thread as we've recently been hit twice by a\nsimilar issue, both on 9.1.1 and 9.1.2. Details follow the quoted part.\n\nThe first time we've seen such behaviour was a few weeks ago, using\n9.1.1. I've decided to avoid posting as I messed up with gdb while\ntrying to gather more information. This time, I've kept the query\nrunning, just in case I need to provide more information.\n\nSo, here they are:\n\nLinux (ubuntu oneiric x64) on and EC2 instance (8 virtual cores, 68GB\nRAM). Storage is on EBS using xfs as filesystem. Currently using\nPostgreSQL 9.1.2 manually compiled.\n\nA rather long import process is stuck since ~12hrs on a query that\nnormally returns in a few seconds or minutes. Kust like to OP there's\nalmost no activity on the box, only a single postgres process at 100% CPU.\n\nTwo separate queries, (although similar) have triggered the issue, and\nthey are run one after another. Also, the import is also not always\nfailing, I think it happened twice in about 10 runs. We're expected to\ngo live in about two weeks so these were only test runs.\n\nThe explain looks like this:\nhttp://explain.depesz.com/s/TqD\n\nThe strace output:\n\n...\n2011-12-28 09:33:59.506037500 lseek(64, 0, SEEK_END) = 27623424\n2011-12-28 09:33:59.555167500 lseek(64, 0, SEEK_END) = 27623424\n2011-12-28 09:33:59.604315500 lseek(64, 0, SEEK_END) = 27623424\n2011-12-28 09:33:59.653482500 lseek(64, 0, SEEK_END) = 27623424\n2011-12-28 09:33:59.676128500 lseek(67, 0, SEEK_END) = 65511424\n2011-12-28 09:33:59.676287500 write(67,\n\"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n8192) = 8192\n...\n2011-12-28 09:34:04.935944500 lseek(64, 0, SEEK_END) = 27623424\n2011-12-28 09:34:04.945279500 lseek(67, 0, SEEK_END) = 65519616\n2011-12-28 09:34:04.945394500 write(67,\n\"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n8192) = 8192\n...\n\nso an lseek on fd 64 roughly every 50ms and lseek/write to fd 67 every\n5s. 
Lsof shows:\n\npostgres 1991 postgres 64u REG 252,0 27623424\n134250744 /data/postgresql/main/base/16387/44859\npostgres 1991 postgres 67u REG 252,0 65732608\n134250602 /data/postgresql/main/base/16387/44675\n\nwith 64 being the carts table and 67 cart_products, according to\npg_class' relfilenode. All the tables involved are regular tables but\nlog_conversion_item, which is unlogged.\n\nI'm also trying an EXPLAIN ANALYZE for the SELECT part, but it seems to\ntake a while too and is seemingly calling only gettimeofday.\n\nHere are the custom values in postgresql.conf:\n\nmax_connections = 200\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 48GB\nwork_mem = 128MB\nwal_buffers = 16MB\ncheckpoint_segments = 64\nshared_buffers = 8GB\nsynchronous_commit = off\nrandom_page_cost = 2\n\narchive_mode = off\nwal_level = hot_standby\nmax_wal_senders = 2\nwal_keep_segments = 256\n\nStreaming replication was accidentally left enabled during the run, but\nI'm pretty sure that I didn't set it up yet when we had the previous\nfailure.\n\nI'll be happy to provide more information. For now we're going to\ndisconnect the slave and use that for another test import run.\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Wed, 28 Dec 2011 10:57:10 +0100",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "Hi,\n\n> I'm also trying an EXPLAIN ANALYZE for the SELECT part, but it seems to\n> take a while too and is seemingly calling only gettimeofday.\n\nAn update on this. Apart from gettimeofday calls, which I filtered out\nwhen logging, I've seen about 80 lseeks recorded every 1h10m (each 1us\napart):\n\n2011-12-28 11:06:25.546661500 lseek(10, 0, SEEK_END) = 27623424\n2011-12-28 11:06:25.546662500 lseek(10, 0, SEEK_END) = 27623424\n2011-12-28 11:06:25.546663500 lseek(10, 0, SEEK_END) = 27623424\n...\n2011-12-28 12:16:56.144663500 lseek(10, 0, SEEK_END) = 27623424\n...\n2011-12-28 13:28:20.436549500 lseek(10, 0, SEEK_END) = 27623424\n...\n\nI've then decided to interrupt the EXPLAIN ANALYZE. But I suppose that\nthe database as it is now will likely allow to further debug what's\nhappening.\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Wed, 28 Dec 2011 15:20:07 +0100",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "Hi,\n\n> A rather long import process is stuck since ~12hrs on a query that\n> normally returns in a few seconds or minutes. Kust like to OP there's\n> almost no activity on the box, only a single postgres process at 100% CPU.\n> \n> Two separate queries, (although similar) have triggered the issue, and\n> they are run one after another. Also, the import is also not always\n> failing, I think it happened twice in about 10 runs. We're expected to\n> go live in about two weeks so these were only test runs.\n> \n> The explain looks like this:\n> http://explain.depesz.com/s/TqD\n\nThe query eventually completed in more than 18h. For comparison a normal\nrun doesn't take more than 1m for that specific step.\n\nDo you think that bad stats and suboptimal plan alone could explain such\na behaviour?\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Wed, 28 Dec 2011 19:02:36 +0100",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "On Wed, Dec 28, 2011 at 3:02 PM, Matteo Beccati <[email protected]> wrote:\n> The query eventually completed in more than 18h. For comparison a normal\n> run doesn't take more than 1m for that specific step.\n>\n> Do you think that bad stats and suboptimal plan alone could explain such\n> a behaviour?\n\nDid you get the explain analyze output?\n",
"msg_date": "Wed, 28 Dec 2011 15:07:34 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "On 28/12/2011 19:07, Claudio Freire wrote:\n> On Wed, Dec 28, 2011 at 3:02 PM, Matteo Beccati <[email protected]> wrote:\n>> The query eventually completed in more than 18h. For comparison a normal\n>> run doesn't take more than 1m for that specific step.\n>>\n>> Do you think that bad stats and suboptimal plan alone could explain such\n>> a behaviour?\n> \n> Did you get the explain analyze output?\n\nUnfortunately I stopped it as I thought it wasn't going to return\nanything meaningful. I've restarted the import process and it will break\nright before the problematic query. Let's see if I can get any more info\ntomorrow.\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Wed, 28 Dec 2011 19:41:53 +0100",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
},
{
"msg_contents": "On 28/12/2011 19:41, Matteo Beccati wrote:\n> On 28/12/2011 19:07, Claudio Freire wrote:\n>> On Wed, Dec 28, 2011 at 3:02 PM, Matteo Beccati <[email protected]> wrote:\n>>> The query eventually completed in more than 18h. For comparison a normal\n>>> run doesn't take more than 1m for that specific step.\n>>>\n>>> Do you think that bad stats and suboptimal plan alone could explain such\n>>> a behaviour?\n>>\n>> Did you get the explain analyze output?\n> \n> Unfortunately I stopped it as I thought it wasn't going to return\n> anything meaningful. I've restarted the import process and it will break\n> right before the problematic query. Let's see if I can get any more info\n> tomorrow.\n\nSo, I'm running again the EXPLAIN ANALYZE, although I don't expect it to\nreturn anytime soon.\n\nHowever I've discovered a few typos in the index creation. If we add it\nto the fact that row estimates are off for this specific query, I can\nunderstand that the chosen plan might have been way far from optimal\nwith some badly picked statistics.\n\nThis is the explain analyze of the query with proper indexes in place.\nAs you can see estimates are still off, even though run time is ~20s:\n\nhttp://explain.depesz.com/s/1UY\n\nFor comparison, here is the old explain output:\n\nhttp://explain.depesz.com/s/TqD\n\nThe case is closed and as Tom pointed out already the lseek-only\nactivity is due to the fact that the table is fully cached in the shared\nbuffers and a sequential scan inside a nested loop is consistent with it.\n\nSorry for the noise.\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Thu, 29 Dec 2011 10:03:28 +0100",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.0.4 blocking in lseek?"
}
] |
[
{
"msg_contents": "I've got a large mixed-used database, with the data warehouse side of things\nconsisting of several tables at hundreds of millions of rows, plus a number\nof tables with tens of millions. There is partitioning, but as the volume\nof data has risen, individual partitions have gotten quite large. Hardware\nis 2x4 core 2.0Ghz Xeon processors, 176GB of RAM, 4 drives in raid 10 for\nWAL logs and 16 or 20 spindles for data, also in RAID 10. Total database\nsize is currently 399GB - via pg_database_size(). It's also worth noting\nthat we switched from 8.4 to 9.0.4 only about a month ago, and we were not\nseeing this problem on 8.4.x. The database is growing, but not at some kind\nof exponential rate. full backup, compressed, on the old hardware was 6.3GB\nand took about 1:45:00 to be written. Recent backups are 8.3GB and taking 3\nor 4 hours. We were not seeing al queries stall out during the backups on\n8.4, so far as I am aware.\n\nThe time it takes for pg_dump to run has grown from 1 hour to 3 and even 4\nhours over the last 6 months, with more than half of that increase occurring\nsince we upgrade to 9.0.x. In the last several weeks (possibly since the\nupgrade to 9.0.4), we are seeing all connections getting used up (our main\napps use connection pools, but monitoring and some utilities are making\ndirect connections for each query, and some of them don't check for the\nprior query to complete before sending another, which slowly eats up\navailable connections). Even the connection pool apps cease functioning\nduring the backup, however, as all of the connections wind up in parse\nwaiting state. I also see lots of sockets in close wait state for what\nseems to be an indefinite period while the backup is running and all\nconnections are used up. I assume all of this is the result of pg_dump\nstarting a transaction or otherwise blocking other access. I can get\neverything using a pool, that's not a huge problem to solve, but that won't\nfix the fundamental problem of no queries being able to finish while the\nbackup is happening.\n\nI know I'm not the only one running a database of this size. How do others\nhandle backups? At the moment, I don't have replication happening. I can\nuse the old hardware to replicate to. It doesn't have quite the i/o\ncapacity and nowhere near as much RAM, but I wouldn't be looking to use it\nfor querying unless I lost the primary, and it is definitely capable of\nhandling the insert load, at least when the inserts are being done directly.\n I'm not sure if it is easier or harder for it to handle the same inserts\nvia streaming replication. My question is, what are the performance\nrepercussions of running a pg_dump backup off the replicated server. If it\nexperiences the same kind of lockup, will SR get so far behind that it can't\ncatch up? Is there some other preferred way to get a backup of a large db?\n\nAnd finally, is the lockout I'm experiencing actually the result of a bug or\nmisuse of pg_dump in some way?\n\nI've got a large mixed-used database, with the data warehouse side of things consisting of several tables at hundreds of millions of rows, plus a number of tables with tens of millions. There is partitioning, but as the volume of data has risen, individual partitions have gotten quite large. Hardware is 2x4 core 2.0Ghz Xeon processors, 176GB of RAM, 4 drives in raid 10 for WAL logs and 16 or 20 spindles for data, also in RAID 10. Total database size is currently 399GB - via pg_database_size(). 
It's also worth noting that we switched from 8.4 to 9.0.4 only about a month ago, and we were not seeing this problem on 8.4.x. The database is growing, but not at some kind of exponential rate. full backup, compressed, on the old hardware was 6.3GB and took about 1:45:00 to be written. Recent backups are 8.3GB and taking 3 or 4 hours. We were not seeing al queries stall out during the backups on 8.4, so far as I am aware.\nThe time it takes for pg_dump to run has grown from 1 hour to 3 and even 4 hours over the last 6 months, with more than half of that increase occurring since we upgrade to 9.0.x. In the last several weeks (possibly since the upgrade to 9.0.4), we are seeing all connections getting used up (our main apps use connection pools, but monitoring and some utilities are making direct connections for each query, and some of them don't check for the prior query to complete before sending another, which slowly eats up available connections). Even the connection pool apps cease functioning during the backup, however, as all of the connections wind up in parse waiting state. I also see lots of sockets in close wait state for what seems to be an indefinite period while the backup is running and all connections are used up. I assume all of this is the result of pg_dump starting a transaction or otherwise blocking other access. I can get everything using a pool, that's not a huge problem to solve, but that won't fix the fundamental problem of no queries being able to finish while the backup is happening.\nI know I'm not the only one running a database of this size. How do others handle backups? At the moment, I don't have replication happening. I can use the old hardware to replicate to. It doesn't have quite the i/o capacity and nowhere near as much RAM, but I wouldn't be looking to use it for querying unless I lost the primary, and it is definitely capable of handling the insert load, at least when the inserts are being done directly. I'm not sure if it is easier or harder for it to handle the same inserts via streaming replication. My question is, what are the performance repercussions of running a pg_dump backup off the replicated server. If it experiences the same kind of lockup, will SR get so far behind that it can't catch up? Is there some other preferred way to get a backup of a large db?\nAnd finally, is the lockout I'm experiencing actually the result of a bug or misuse of pg_dump in some way?",
"msg_date": "Thu, 27 Oct 2011 09:47:15 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "backups blocking everything"
},
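A minimal sketch of the kind of check that shows what the stalled sessions are actually waiting on while the dump runs. The column names are the 9.0 ones (procpid, current_query, waiting; later releases renamed them), and the pg_locks join is a generic pattern rather than something taken from this thread:

    SELECT a.procpid,
           a.waiting,
           a.query_start,
           a.current_query,
           l.locktype,
           l.mode
    FROM pg_stat_activity a
    LEFT JOIN pg_locks l ON l.pid = a.procpid AND NOT l.granted
    ORDER BY a.query_start;

Sessions showing waiting = true alongside an ungranted lock row are blocked on another transaction's lock; if nothing is lock-waiting, the stall is more likely I/O or memory pressure than anything pg_dump holds, since pg_dump only takes ACCESS SHARE locks on the tables it reads.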
{
"msg_contents": "On Thu, Oct 27, 2011 at 11:47 AM, Samuel Gendler\n<[email protected]> wrote:\n> I've got a large mixed-used database, with the data warehouse side of things\n> consisting of several tables at hundreds of millions of rows, plus a number\n> of tables with tens of millions. There is partitioning, but as the volume\n> of data has risen, individual partitions have gotten quite large. Hardware\n> is 2x4 core 2.0Ghz Xeon processors, 176GB of RAM, 4 drives in raid 10 for\n> WAL logs and 16 or 20 spindles for data, also in RAID 10. Total database\n> size is currently 399GB - via pg_database_size(). It's also worth noting\n> that we switched from 8.4 to 9.0.4 only about a month ago, and we were not\n> seeing this problem on 8.4.x. The database is growing, but not at some kind\n> of exponential rate. full backup, compressed, on the old hardware was 6.3GB\n> and took about 1:45:00 to be written. Recent backups are 8.3GB and taking 3\n> or 4 hours. We were not seeing al queries stall out during the backups on\n> 8.4, so far as I am aware.\n> The time it takes for pg_dump to run has grown from 1 hour to 3 and even 4\n> hours over the last 6 months, with more than half of that increase occurring\n> since we upgrade to 9.0.x. In the last several weeks (possibly since the\n> upgrade to 9.0.4), we are seeing all connections getting used up (our main\n> apps use connection pools, but monitoring and some utilities are making\n> direct connections for each query, and some of them don't check for the\n> prior query to complete before sending another, which slowly eats up\n> available connections). Even the connection pool apps cease functioning\n> during the backup, however, as all of the connections wind up in parse\n> waiting state. I also see lots of sockets in close wait state for what\n> seems to be an indefinite period while the backup is running and all\n> connections are used up. I assume all of this is the result of pg_dump\n> starting a transaction or otherwise blocking other access. I can get\n> everything using a pool, that's not a huge problem to solve, but that won't\n> fix the fundamental problem of no queries being able to finish while the\n> backup is happening.\n> I know I'm not the only one running a database of this size. How do others\n> handle backups? At the moment, I don't have replication happening. I can\n> use the old hardware to replicate to. It doesn't have quite the i/o\n> capacity and nowhere near as much RAM, but I wouldn't be looking to use it\n> for querying unless I lost the primary, and it is definitely capable of\n> handling the insert load, at least when the inserts are being done directly.\n> I'm not sure if it is easier or harder for it to handle the same inserts\n> via streaming replication. My question is, what are the performance\n> repercussions of running a pg_dump backup off the replicated server. If it\n> experiences the same kind of lockup, will SR get so far behind that it can't\n> catch up? Is there some other preferred way to get a backup of a large db?\n> And finally, is the lockout I'm experiencing actually the result of a bug or\n> misuse of pg_dump in some way?\n\nI can't speak to the slower backups on 9.0.x issue, but if I were you\nI'd be implementing hot standby and moving the backups to the standby\n(just be aware that pg_dump will effectively pause replication and\ncause WAL files to accumulate during the dump).\n\nmerlin\n",
"msg_date": "Thu, 27 Oct 2011 12:39:51 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: backups blocking everything"
},
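For reference, the 9.0-era settings involved in moving dumps to a standby look roughly like this. A sketch only: host names and sizes are placeholders, and wal_keep_segments has to cover the whole dump window, because a long pg_dump on the standby holds up replay:

    # primary postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 512          # enough retained WAL to outlast a long dump

    # standby postgresql.conf
    hot_standby = on
    max_standby_streaming_delay = -1 # let long dump queries finish; replay just falls behind

    # standby recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=replicator'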
{
"msg_contents": ">From: [email protected] [mailto:[email protected]] On Behalf Of Samuel Gendler\n>Sent: Thursday, October 27, 2011 12:47 PM\n>To: [email protected]\n>Subject: [PERFORM] backups blocking everything\n>\n>I've got a large mixed-used database, with the data warehouse side of things consisting of several tables at hundreds of millions of rows, plus a number of tables with tens of >millions. There is partitioning, but as the volume of data has risen, individual partitions have gotten quite large. Hardware is 2x4 core 2.0Ghz Xeon processors, 176GB of RAM, 4 drives in >raid 10 for WAL logs and 16 or 20 spindles for data, also in RAID 10. Total database size is currently 399GB - via pg_database_size(). It's also worth noting that we switched from 8.4 to >9.0.4 only about a month ago, and we were not seeing this problem on 8.4.x. The database is growing, but not at some kind of exponential rate. full backup, compressed, on the old hardware >was 6.3GB and took about 1:45:00 to be written. Recent backups are 8.3GB and taking 3 or 4 hours. We were not seeing al queries stall out during the backups on 8.4, so far as I am aware.\n>\n>The time it takes for pg_dump to run has grown from 1 hour to 3 and even 4 hours over the last 6 months, with more than half of that increase occurring since we upgrade to 9.0.x. In the >last several weeks (possibly since the upgrade to 9.0.4), we are seeing all connections getting used up (our main apps use connection pools, but monitoring and some utilities are making >direct connections for each query, and some of them don't check for the prior query to complete before sending another, which slowly eats up available connections). Even the connection >pool apps cease functioning during the backup, however, as all of the connections wind up in parse waiting state. I also see lots of sockets in close wait state for what seems to be an >indefinite period while the backup is running and all connections are used up. I assume all of this is the result of pg_dump starting a transaction or otherwise blocking other access. I >can get everything using a pool, that's not a huge problem to solve, but that won't fix the fundamental problem of no queries being able to finish while the backup is happening.\n\nWhat is the I/O utilization like during the dump? I've seen this situation in the past and it was caused be excessively bloated tables causing I/O starvation while they are getting dumped.\n\nBrad. \n",
"msg_date": "Thu, 27 Oct 2011 20:45:11 +0000",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: backups blocking everything"
},
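A rough way to sanity-check the bloat theory from the statistics collector (an approximation only; pgstattuple gives exact figures):

    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;

At the OS level, something like iostat -x 5 during the dump window shows whether the data volume or the volume receiving the dump is saturated (watch %util and await).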
{
"msg_contents": "On Thu, Oct 27, 2011 at 1:45 PM, Nicholson, Brad (Toronto, ON, CA) <\[email protected]> wrote:\n\n> >From: [email protected] [mailto:\n> [email protected]] On Behalf Of Samuel Gendler\n> >Sent: Thursday, October 27, 2011 12:47 PM\n> >To: [email protected]\n> >Subject: [PERFORM] backups blocking everything\n> >\n> >I've got a large mixed-used database, with the data warehouse side of\n> things consisting of several tables at hundreds of millions of rows, plus a\n> number of tables with tens of >millions. There is partitioning, but as the\n> volume of data has risen, individual partitions have gotten quite large.\n> Hardware is 2x4 core 2.0Ghz Xeon processors, 176GB of RAM, 4 drives in\n> >raid 10 for WAL logs and 16 or 20 spindles for data, also in RAID 10.\n> Total database size is currently 399GB - via pg_database_size(). It's also\n> worth noting that we switched from 8.4 to >9.0.4 only about a month ago, and\n> we were not seeing this problem on 8.4.x. The database is growing, but not\n> at some kind of exponential rate. full backup, compressed, on the old\n> hardware >was 6.3GB and took about 1:45:00 to be written. Recent backups\n> are 8.3GB and taking 3 or 4 hours. We were not seeing al queries stall out\n> during the backups on 8.4, so far as I am aware.\n> >\n> >The time it takes for pg_dump to run has grown from 1 hour to 3 and even 4\n> hours over the last 6 months, with more than half of that increase occurring\n> since we upgrade to 9.0.x. In the >last several weeks (possibly since the\n> upgrade to 9.0.4), we are seeing all connections getting used up (our main\n> apps use connection pools, but monitoring and some utilities are making\n> >direct connections for each query, and some of them don't check for the\n> prior query to complete before sending another, which slowly eats up\n> available connections). Even the connection >pool apps cease functioning\n> during the backup, however, as all of the connections wind up in parse\n> waiting state. I also see lots of sockets in close wait state for what\n> seems to be an >indefinite period while the backup is running and all\n> connections are used up. I assume all of this is the result of pg_dump\n> starting a transaction or otherwise blocking other access. I >can get\n> everything using a pool, that's not a huge problem to solve, but that won't\n> fix the fundamental problem of no queries being able to finish while the\n> backup is happening.\n>\n> What is the I/O utilization like during the dump? I've seen this situation\n> in the past and it was caused be excessively bloated tables causing I/O\n> starvation while they are getting dumped.\n>\n\nThere are definitely no bloated tables. The large tables are all\ninsert-only, and old data is aggregated up and then removed by dropping\nwhole partitions. There should be no bloat whatsoever. The OLTP side of\nthings is pretty minimal, and I can pg_dump those schemas in seconds, so\nthey aren't the problem, either. I don't know what the I/O utilization is\nduring the dump, offhand. I'll be doing a more thorough investigation\ntonight, though I suppose I could go look at the monitoring graphs if I\nweren't in the middle of 6 other things at the moment. 
the joys of startup\nlife.\n\nOn Thu, Oct 27, 2011 at 1:45 PM, Nicholson, Brad (Toronto, ON, CA) <[email protected]> wrote:\n>From: [email protected] [mailto:[email protected]] On Behalf Of Samuel Gendler\n\n>Sent: Thursday, October 27, 2011 12:47 PM\n>To: [email protected]\n>Subject: [PERFORM] backups blocking everything\n>\n>I've got a large mixed-used database, with the data warehouse side of things consisting of several tables at hundreds of millions of rows, plus a number of tables with tens of >millions. There is partitioning, but as the volume of data has risen, individual partitions have gotten quite large. Hardware is 2x4 core 2.0Ghz Xeon processors, 176GB of RAM, 4 drives in >raid 10 for WAL logs and 16 or 20 spindles for data, also in RAID 10. Total database size is currently 399GB - via pg_database_size(). It's also worth noting that we switched from 8.4 to >9.0.4 only about a month ago, and we were not seeing this problem on 8.4.x. The database is growing, but not at some kind of exponential rate. full backup, compressed, on the old hardware >was 6.3GB and took about 1:45:00 to be written. Recent backups are 8.3GB and taking 3 or 4 hours. We were not seeing al queries stall out during the backups on 8.4, so far as I am aware.\n\n>\n>The time it takes for pg_dump to run has grown from 1 hour to 3 and even 4 hours over the last 6 months, with more than half of that increase occurring since we upgrade to 9.0.x. In the >last several weeks (possibly since the upgrade to 9.0.4), we are seeing all connections getting used up (our main apps use connection pools, but monitoring and some utilities are making >direct connections for each query, and some of them don't check for the prior query to complete before sending another, which slowly eats up available connections). Even the connection >pool apps cease functioning during the backup, however, as all of the connections wind up in parse waiting state. I also see lots of sockets in close wait state for what seems to be an >indefinite period while the backup is running and all connections are used up. I assume all of this is the result of pg_dump starting a transaction or otherwise blocking other access. I >can get everything using a pool, that's not a huge problem to solve, but that won't fix the fundamental problem of no queries being able to finish while the backup is happening.\n\nWhat is the I/O utilization like during the dump? I've seen this situation in the past and it was caused be excessively bloated tables causing I/O starvation while they are getting dumped.\nThere are definitely no bloated tables. The large tables are all insert-only, and old data is aggregated up and then removed by dropping whole partitions. There should be no bloat whatsoever. The OLTP side of things is pretty minimal, and I can pg_dump those schemas in seconds, so they aren't the problem, either. I don't know what the I/O utilization is during the dump, offhand. I'll be doing a more thorough investigation tonight, though I suppose I could go look at the monitoring graphs if I weren't in the middle of 6 other things at the moment. the joys of startup life.",
"msg_date": "Thu, 27 Oct 2011 14:15:28 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: backups blocking everything"
},
{
"msg_contents": "On Thu, Oct 27, 2011 at 2:15 PM, Samuel Gendler\n<[email protected]>wrote:\n>\n>\n> There are definitely no bloated tables. The large tables are all\n> insert-only, and old data is aggregated up and then removed by dropping\n> whole partitions. There should be no bloat whatsoever. The OLTP side of\n> things is pretty minimal, and I can pg_dump those schemas in seconds, so\n> they aren't the problem, either. I don't know what the I/O utilization is\n> during the dump, offhand. I'll be doing a more thorough investigation\n> tonight, though I suppose I could go look at the monitoring graphs if I\n> weren't in the middle of 6 other things at the moment. the joys of startup\n> life.\n>\n>\nDoes pg_dump use work_mem, maintenance_work_mem, or both? I'm seeing a huge\nspike in swap-in during the period when I can't get into the db, then a\nlittle bit of swap out toward the end. We've got very little OLTP traffic -\nlike one or two users logged in and interacting with the system at a time,\nat most, so I've got work_mem set pretty high, as most of our reporting\nqueries do large aggregations that grind to a halt if they go to disk.\nBesides, we've got nearly 200GB of RAM. But it would seem that pg_dump is\nallocating a large number of work_mem (or maintenance_work_mem) segments.\n\n# show work_mem;\n work_mem\n----------\n 512MB\n(1 row)\n\n# show maintenance_work_mem;\n maintenance_work_mem\n----------------------\n 2GB\n\nTo be honest, I'm not entirely certain how to interpret some of the graphs\nI'm looking at in this context.\n\nhere are some pictures of what is going on. The db monitoring itself goes\naway when it eats all of the connections, but you can see what direction\nthey are headed and the values when it finally manages to get a connection\nagain at the end of the period. All of the other numbers are just host\nmonitoring, so they are continuous through the shutout.\n\nMemory usage on the host (shared buffers is set to 8GB):\n\nhttp://photos.smugmug.com/photos/i-sQ4hVCz/0/L/i-sQ4hVCz-L.png\n\nSwap Usage:\n\nhttp://photos.smugmug.com/photos/i-T25vcZ2/0/L/i-T25vcZ2-L.png\n\nSwap rate:\n\nhttp://photos.smugmug.com/photos/i-WDDcN9W/0/L/i-WDDcN9W-L.png\n\nCPU utilization:\n\nhttp://photos.smugmug.com/photos/i-4xkGqjB/0/L/i-4xkGqjB-L.png\n\nLoad Average:\n\nhttp://photos.smugmug.com/photos/i-p4n94X4/0/L/i-p4n94X4-L.png\n\ndisk IO for system disk (where the backup is being written to):\n\nhttp://photos.smugmug.com/photos/i-gbCxrnq/0/M/i-gbCxrnq-M.png\n\ndisk IO for WAL volume:\n\nhttp://photos.smugmug.com/photos/i-5wNwrDX/0/M/i-5wNwrDX-M.png\n\ndisk IO for data volume:\n\nhttp://photos.smugmug.com/photos/i-r7QGngG/0/M/i-r7QGngG-M.png\n\nVarious postgres monitors - the graph names are self explanatory:\n\nhttp://photos.smugmug.com/photos/i-23sTvLP/0/M/i-23sTvLP-M.png\nhttp://photos.smugmug.com/photos/i-73rphrf/0/M/i-73rphrf-M.png\nhttp://photos.smugmug.com/photos/i-rpKvrVJ/0/L/i-rpKvrVJ-L.png\nhttp://photos.smugmug.com/photos/i-QbNQFJM/0/L/i-QbNQFJM-L.png\n\nOn Thu, Oct 27, 2011 at 2:15 PM, Samuel Gendler <[email protected]> wrote:\n\nThere are definitely no bloated tables. The large tables are all insert-only, and old data is aggregated up and then removed by dropping whole partitions. There should be no bloat whatsoever. The OLTP side of things is pretty minimal, and I can pg_dump those schemas in seconds, so they aren't the problem, either. I don't know what the I/O utilization is during the dump, offhand. 
I'll be doing a more thorough investigation tonight, though I suppose I could go look at the monitoring graphs if I weren't in the middle of 6 other things at the moment. the joys of startup life.\n Does pg_dump use work_mem, maintenance_work_mem, or both? I'm seeing a huge spike in swap-in during the period when I can't get into the db, then a little bit of swap out toward the end. We've got very little OLTP traffic - like one or two users logged in and interacting with the system at a time, at most, so I've got work_mem set pretty high, as most of our reporting queries do large aggregations that grind to a halt if they go to disk. Besides, we've got nearly 200GB of RAM. But it would seem that pg_dump is allocating a large number of work_mem (or maintenance_work_mem) segments.\n# show work_mem; work_mem ---------- 512MB(1 row)# show maintenance_work_mem; maintenance_work_mem ----------------------\n 2GBTo be honest, I'm not entirely certain how to interpret some of the graphs I'm looking at in this context.here are some pictures of what is going on. The db monitoring itself goes away when it eats all of the connections, but you can see what direction they are headed and the values when it finally manages to get a connection again at the end of the period. All of the other numbers are just host monitoring, so they are continuous through the shutout.\nMemory usage on the host (shared buffers is set to 8GB):http://photos.smugmug.com/photos/i-sQ4hVCz/0/L/i-sQ4hVCz-L.png\nSwap Usage:http://photos.smugmug.com/photos/i-T25vcZ2/0/L/i-T25vcZ2-L.png\nSwap rate:http://photos.smugmug.com/photos/i-WDDcN9W/0/L/i-WDDcN9W-L.pngCPU utilization:\nhttp://photos.smugmug.com/photos/i-4xkGqjB/0/L/i-4xkGqjB-L.pngLoad Average:\nhttp://photos.smugmug.com/photos/i-p4n94X4/0/L/i-p4n94X4-L.pngdisk IO for system disk (where the backup is being written to):\nhttp://photos.smugmug.com/photos/i-gbCxrnq/0/M/i-gbCxrnq-M.pngdisk IO for WAL volume:\nhttp://photos.smugmug.com/photos/i-5wNwrDX/0/M/i-5wNwrDX-M.pngdisk IO for data volume:\nhttp://photos.smugmug.com/photos/i-r7QGngG/0/M/i-r7QGngG-M.pngVarious postgres monitors - the graph names are self explanatory:\nhttp://photos.smugmug.com/photos/i-23sTvLP/0/M/i-23sTvLP-M.pnghttp://photos.smugmug.com/photos/i-73rphrf/0/M/i-73rphrf-M.png\nhttp://photos.smugmug.com/photos/i-rpKvrVJ/0/L/i-rpKvrVJ-L.pnghttp://photos.smugmug.com/photos/i-QbNQFJM/0/L/i-QbNQFJM-L.png",
"msg_date": "Thu, 27 Oct 2011 16:29:25 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: backups blocking everything"
},
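One note on the memory question: pg_dump itself mostly issues COPY ... TO STDOUT, which does not perform large sorts, so the dump is unlikely to be what allocates work_mem; maintenance_work_mem matters mainly at restore time for index builds. If over-allocation by the reporting queries is the suspect, work_mem can also be capped per role or per database instead of globally. A sketch, with the role and database names invented here:

    -- keep the big sort budget only for the sessions that need it
    ALTER ROLE reporting_user SET work_mem = '512MB';

    -- give everything else, including monitoring connections, a modest default
    ALTER DATABASE proddb SET work_mem = '32MB';

    -- confirm what a given session actually sees
    SELECT name, setting, unit FROM pg_settings
    WHERE name IN ('work_mem', 'maintenance_work_mem', 'shared_buffers');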
{
"msg_contents": "On Thu, Oct 27, 2011 at 6:29 PM, Samuel Gendler\n<[email protected]> wrote:\n>\n> On Thu, Oct 27, 2011 at 2:15 PM, Samuel Gendler <[email protected]>\n> wrote:\n>>\n>> There are definitely no bloated tables. The large tables are all\n>> insert-only, and old data is aggregated up and then removed by dropping\n>> whole partitions. There should be no bloat whatsoever. The OLTP side of\n>> things is pretty minimal, and I can pg_dump those schemas in seconds, so\n>> they aren't the problem, either. I don't know what the I/O utilization is\n>> during the dump, offhand. I'll be doing a more thorough investigation\n>> tonight, though I suppose I could go look at the monitoring graphs if I\n>> weren't in the middle of 6 other things at the moment. the joys of startup\n>> life.\n>\n>\n> Does pg_dump use work_mem, maintenance_work_mem, or both? I'm seeing a huge\n> spike in swap-in during the period when I can't get into the db, then a\n> little bit of swap out toward the end. We've got very little OLTP traffic -\n> like one or two users logged in and interacting with the system at a time,\n> at most, so I've got work_mem set pretty high, as most of our reporting\n> queries do large aggregations that grind to a halt if they go to disk.\n> Besides, we've got nearly 200GB of RAM. But it would seem that pg_dump is\n> allocating a large number of work_mem (or maintenance_work_mem) segments.\n> # show work_mem;\n> work_mem\n> ----------\n> 512MB\n> (1 row)\n> # show maintenance_work_mem;\n> maintenance_work_mem\n> ----------------------\n> 2GB\n> To be honest, I'm not entirely certain how to interpret some of the graphs\n> I'm looking at in this context.\n> here are some pictures of what is going on. The db monitoring itself goes\n> away when it eats all of the connections, but you can see what direction\n> they are headed and the values when it finally manages to get a connection\n> again at the end of the period. All of the other numbers are just host\n> monitoring, so they are continuous through the shutout.\n> Memory usage on the host (shared buffers is set to 8GB):\n> http://photos.smugmug.com/photos/i-sQ4hVCz/0/L/i-sQ4hVCz-L.png\n> Swap Usage:\n> http://photos.smugmug.com/photos/i-T25vcZ2/0/L/i-T25vcZ2-L.png\n> Swap rate:\n> http://photos.smugmug.com/photos/i-WDDcN9W/0/L/i-WDDcN9W-L.png\n> CPU utilization:\n> http://photos.smugmug.com/photos/i-4xkGqjB/0/L/i-4xkGqjB-L.png\n> Load Average:\n> http://photos.smugmug.com/photos/i-p4n94X4/0/L/i-p4n94X4-L.png\n> disk IO for system disk (where the backup is being written to):\n> http://photos.smugmug.com/photos/i-gbCxrnq/0/M/i-gbCxrnq-M.png\n> disk IO for WAL volume:\n> http://photos.smugmug.com/photos/i-5wNwrDX/0/M/i-5wNwrDX-M.png\n> disk IO for data volume:\n> http://photos.smugmug.com/photos/i-r7QGngG/0/M/i-r7QGngG-M.png\n> Various postgres monitors - the graph names are self explanatory:\n> http://photos.smugmug.com/photos/i-23sTvLP/0/M/i-23sTvLP-M.png\n> http://photos.smugmug.com/photos/i-73rphrf/0/M/i-73rphrf-M.png\n> http://photos.smugmug.com/photos/i-rpKvrVJ/0/L/i-rpKvrVJ-L.png\n> http://photos.smugmug.com/photos/i-QbNQFJM/0/L/i-QbNQFJM-L.png\n\nhrm -- it doesn't look like you are i/o bound -- postgres is\ndefinitely the bottleneck. 
taking a dump off of production is\nthrowing something else out of whack which is affecting your other\nprocesses.\n\nband aid solutions might be:\n*) as noted above, implement hot standby and move dumps to the standby\n*) consider adding connection pooling so your system doesn't\naccumulate N processes during dump\n\na better diagnosis might involve:\n*) strace of one of your non-dump proceses to see where the blocking\nis happening\n*) profiling one of your user processes and compare good vs bad time\n\nIs there anything out of the ordinary about your application that's\nworth mentioning? using lots of subtransactions? prepared\ntransactions? tablespaces? huge amounts of tables? etc?\n\nHave you checked syslogs/dmesg/etc for out of the ordinary system events?\n\nmerlin\n",
"msg_date": "Fri, 28 Oct 2011 16:20:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: backups blocking everything"
},
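What that diagnosis typically looks like in practice, for anyone following along (the PID is a placeholder for one of the stalled backends):

    # which system calls a stuck backend is sitting in, with timings
    strace -p 12345 -tt -T

    # sample where its CPU time goes (perf is one option on Linux)
    perf top -p 12345

    # and the database-side view of the same backend
    psql -c "SELECT procpid, waiting, current_query FROM pg_stat_activity WHERE procpid = 12345;"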
{
"msg_contents": "On Fri, Oct 28, 2011 at 2:20 PM, Merlin Moncure <[email protected]> wrote:\n\n>\n> hrm -- it doesn't look like you are i/o bound -- postgres is\n> definitely the bottleneck. taking a dump off of production is\n> throwing something else out of whack which is affecting your other\n> processes.\n>\n> band aid solutions might be:\n> *) as noted above, implement hot standby and move dumps to the standby\n> *) consider adding connection pooling so your system doesn't\n> accumulate N processes during dump\n>\n>\nBoth are in the works. The 1st one is more involved, but I'm going to move\nour monitoring to a pool next week, so I at least stop getting locked out.\nWe'll be moving to a non-superuser user, as well. That's an artifact of the\nvery early days that we never got around to correcting.\n\n\n\n> a better diagnosis might involve:\n> *) strace of one of your non-dump proceses to see where the blocking\n> is happening\n> *) profiling one of your user processes and compare good vs bad time\n>\n\nThis only happens at a particularly anti-social time, so we're taking the\neasy way out up front and just killing various suspected processes each\nnight in order to narrow things down. It looks like it is actually an\ninteraction between a process that runs a bunch of fairly poorly architected\nqueries running on a machine set up with the wrong time zone, which was\ncausing it to run at exactly the same time as the backups. We fixed the\ntime zone problem last night and didn't have symptoms, so that's the\nfundamental problem, but the report generation process has a lot of room for\nimprovement, regardless. There's definitely lots of room for improvement,\nso it's now really about picking the resolutions that offer the most bang\nfor the buck. I think a hot standby for backups and report generation is\nthe biggest win, and I can work on tuning the report generation at a later\ndate.\n\n\n> Is there anything out of the ordinary about your application that's\n> worth mentioning? using lots of subtransactions? prepared\n> transactions? tablespaces? huge amounts of tables? etc?\n>\n\nNope. Pretty normal.\n\n\n>\n> Have you checked syslogs/dmesg/etc for out of the ordinary system events?\n>\n\nnothing.\n\nThanks for taking the time to go through my information and offer up\nsuggestions, everyone!\n\n--sam\n\nOn Fri, Oct 28, 2011 at 2:20 PM, Merlin Moncure <[email protected]> wrote:\n\nhrm -- it doesn't look like you are i/o bound -- postgres is\ndefinitely the bottleneck. taking a dump off of production is\nthrowing something else out of whack which is affecting your other\nprocesses.\n\nband aid solutions might be:\n*) as noted above, implement hot standby and move dumps to the standby\n*) consider adding connection pooling so your system doesn't\naccumulate N processes during dump\nBoth are in the works. The 1st one is more involved, but I'm going to move our monitoring to a pool next week, so I at least stop getting locked out. We'll be moving to a non-superuser user, as well. That's an artifact of the very early days that we never got around to correcting.\n \na better diagnosis might involve:\n*) strace of one of your non-dump proceses to see where the blocking\nis happening\n*) profiling one of your user processes and compare good vs bad timeThis only happens at a particularly anti-social time, so we're taking the easy way out up front and just killing various suspected processes each night in order to narrow things down. 
It looks like it is actually an interaction between a process that runs a bunch of fairly poorly architected queries running on a machine set up with the wrong time zone, which was causing it to run at exactly the same time as the backups. We fixed the time zone problem last night and didn't have symptoms, so that's the fundamental problem, but the report generation process has a lot of room for improvement, regardless. There's definitely lots of room for improvement, so it's now really about picking the resolutions that offer the most bang for the buck. I think a hot standby for backups and report generation is the biggest win, and I can work on tuning the report generation at a later date.\n\n\nIs there anything out of the ordinary about your application that's\nworth mentioning? using lots of subtransactions? prepared\ntransactions? tablespaces? huge amounts of tables? etc?Nope. Pretty normal. \n\nHave you checked syslogs/dmesg/etc for out of the ordinary system events?nothing.Thanks for taking the time to go through my information and offer up suggestions, everyone!\n--sam",
"msg_date": "Fri, 28 Oct 2011 16:51:07 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: backups blocking everything"
}
] |
[
{
"msg_contents": "Hi All ;\n\nI have code that drops a table, re-create's it (based on a long set of \njoins) and then re-creates the indexes.\n\nIt runs via psql in about 10 seconds. I took the code and simply \nwrapped it into a plpgsql function and the function version takes almost \n60 seconds to run.\n\nI always thought that functions should run faster than psql... am I wrong?\n\nThanks in advance\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Thu, 27 Oct 2011 22:54:47 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "function slower than the same code in an sql file"
},
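One low-effort way to see where the time actually goes inside a function is the auto_explain contrib module, which can log the plans of statements a function executes. A session-level sketch (LOAD needs superuser here, and the final step stands in for whatever invokes the slow function):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- log the plan of every statement
    SET auto_explain.log_analyze = on;            -- with actual timings (adds overhead)
    SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
    -- now run the statement that calls the function and compare the logged plans
    -- with what EXPLAIN shows for the same SQL run directly in psql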
{
"msg_contents": "CS DBA <[email protected]> writes:\n> I have code that drops a table, re-create's it (based on a long set of \n> joins) and then re-creates the indexes.\n\n> It runs via psql in about 10 seconds. I took the code and simply \n> wrapped it into a plpgsql function and the function version takes almost \n> 60 seconds to run.\n\n> I always thought that functions should run faster than psql... am I wrong?\n\nDid you really just put the identical queries into a function, or did\nyou parameterize them with values passed to the function?\n\nParameterized queries are often slower due to the planner not knowing\nthe specific constant values that are used in the actual calls. There's\nsome work done for 9.2 to improve that, but in existing releases you\ntypically have to construct dynamic queries and EXECUTE them if you run\ninto this problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 Oct 2011 01:10:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function slower than the same code in an sql file "
},
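What "construct dynamic queries and EXECUTE them" looks like in PL/pgSQL, as a generic sketch rather than the poster's actual function (table and column names are invented). The query text is re-planned at each call, so the planner sees the real constant instead of a parameter placeholder:

    CREATE OR REPLACE FUNCTION count_active(p_account integer)
    RETURNS bigint AS $$
    DECLARE
        v_count bigint;
    BEGIN
        EXECUTE 'SELECT count(*) FROM big_table WHERE account_id = $1 AND active'
           INTO v_count
          USING p_account;
        RETURN v_count;
    END;
    $$ LANGUAGE plpgsql;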
{
"msg_contents": "Hello\n\nplpgsql uses a cached prepared plans for queries - where optimizations\nis based on expected values - not on real values. This feature can do\nperformance problems some times. When you have these problems, then\nyou have to use a dynamic SQL instead. This generate plans for only\none usage and then there optimization can be more exact (but it repeat\na plan generation)\n\nhttp://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\nRegards\n\nPavel Stehule\n\n2011/10/28 CS DBA <[email protected]>:\n> Hi All ;\n>\n> I have code that drops a table, re-create's it (based on a long set of\n> joins) and then re-creates the indexes.\n>\n> It runs via psql in about 10 seconds. I took the code and simply wrapped it\n> into a plpgsql function and the function version takes almost 60 seconds to\n> run.\n>\n> I always thought that functions should run faster than psql... am I wrong?\n>\n> Thanks in advance\n>\n> --\n> ---------------------------------------------\n> Kevin Kempter - Constent State\n> A PostgreSQL Professional Services Company\n> www.consistentstate.com\n> ---------------------------------------------\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Fri, 28 Oct 2011 07:10:18 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function slower than the same code in an sql file"
},
{
"msg_contents": "On 10/27/2011 11:10 PM, Tom Lane wrote:\n> CS DBA<[email protected]> writes:\n>> I have code that drops a table, re-create's it (based on a long set of\n>> joins) and then re-creates the indexes.\n>> It runs via psql in about 10 seconds. I took the code and simply\n>> wrapped it into a plpgsql function and the function version takes almost\n>> 60 seconds to run.\n>> I always thought that functions should run faster than psql... am I wrong?\n> Did you really just put the identical queries into a function, or did\n> you parameterize them with values passed to the function?\n>\n> Parameterized queries are often slower due to the planner not knowing\n> the specific constant values that are used in the actual calls. There's\n> some work done for 9.2 to improve that, but in existing releases you\n> typically have to construct dynamic queries and EXECUTE them if you run\n> into this problem.\n>\n> \t\t\tregards, tom lane\n\n\nNo parameters, one of them looks like this:\n\n\nCREATE or REPLACE FUNCTION refresh_xyz_view_m() RETURNS TRIGGER AS $$\nBEGIN\n\nDROP TABLE xyz_view_m ;\nCREATE TABLE xyz_view_m AS\nSELECT\n pp.id, pp.name, pp.description, pp.tariff_url, ppe.account_id, \npp.active, ppe.time_zone\nFROM\n tab1 pp, enrollment ppe\nWHERE\n ((pp.id = ppe.pp_id) AND pp.active);\n\ncreate index xyz_m_id_idx on xyx_view_m(id);\n\n\nanalyze xyz_view_m;\n\nRETURN NULL;\nEND\n$$\nLANGUAGE plpgsql;\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n\n\n\n\n\n\n\n On 10/27/2011 11:10 PM, Tom Lane wrote:\n \nCS DBA <[email protected]> writes:\n\n\nI have code that drops a table, re-create's it (based on a long set of \njoins) and then re-creates the indexes.\n\n\n\n\n\nIt runs via psql in about 10 seconds. I took the code and simply \nwrapped it into a plpgsql function and the function version takes almost \n60 seconds to run.\n\n\n\n\n\nI always thought that functions should run faster than psql... am I wrong?\n\n\n\nDid you really just put the identical queries into a function, or did\nyou parameterize them with values passed to the function?\n\nParameterized queries are often slower due to the planner not knowing\nthe specific constant values that are used in the actual calls. There's\nsome work done for 9.2 to improve that, but in existing releases you\ntypically have to construct dynamic queries and EXECUTE them if you run\ninto this problem.\n\n\t\t\tregards, tom lane\n\n\n\n\n No parameters, one of them looks like this:\n\n\n CREATE or REPLACE FUNCTION refresh_xyz_view_m() RETURNS TRIGGER \n AS\n $$ \n \n BEGIN \n \n \n \n DROP TABLE xyz_view_m\n ; \n CREATE TABLE xyz_view_m\n AS \n SELECT \n \n pp.id, pp.name, pp.description, pp.tariff_url,\n ppe.account_id, pp.active,\n ppe.time_zone \n \n FROM \n \n tab1 pp, enrollment ppe \n WHERE \n \n ((pp.id = ppe.pp_id) AND pp.active); \n \n \n \n create index xyz_m_id_idx on xyx_view_m(id); \n \n \n \n analyze\n xyz_view_m; \n \n \n RETURN\n NULL; \n \n END \n \n $$ \n \n LANGUAGE plpgsql;\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State \nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------",
"msg_date": "Fri, 28 Oct 2011 07:39:40 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function slower than the same code in an sql file"
},
{
"msg_contents": "On Fri, Oct 28, 2011 at 9:39 AM, CS DBA <[email protected]> wrote:\n> No parameters, one of them looks like this:\n>\n> [ code snippet ]\n\nIt's hard to believe this is the real code, because SELECT without\nINTO will bomb out inside a PL/pgsql function, won't it?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Nov 2011 10:42:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function slower than the same code in an sql file"
},
{
"msg_contents": "El 03/11/11 11:42, Robert Haas escribi�:\n> On Fri, Oct 28, 2011 at 9:39 AM, CS DBA<[email protected]> wrote:\n>> No parameters, one of them looks like this:\n>>\n>> [ code snippet ]\n> It's hard to believe this is the real code, because SELECT without\n> INTO will bomb out inside a PL/pgsql function, won't it?\n>\nBut he's using CREATE TABLE xyz_view_m AS\n\nSo it seems correct to me\n\nRegards\n\nRodrigo\n\n\n\n\n\n\n El 03/11/11 11:42, Robert Haas escribió:\n \nOn Fri, Oct 28, 2011 at 9:39 AM, CS DBA <[email protected]> wrote:\n\n\nNo parameters, one of them looks like this:\n\n[ code snippet ]\n\n\n\nIt's hard to believe this is the real code, because SELECT without\nINTO will bomb out inside a PL/pgsql function, won't it?\n\n\n\n But he's using CREATE TABLE xyz_view_m AS\n\n So it seems correct to me\n\n Regards\n\n Rodrigo",
"msg_date": "Thu, 03 Nov 2011 12:31:59 -0300",
"msg_from": "Rodrigo Gonzalez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function slower than the same code in an sql file"
},
{
"msg_contents": "On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez\n<[email protected]> wrote:\n> El 03/11/11 11:42, Robert Haas escribió:\n>\n> On Fri, Oct 28, 2011 at 9:39 AM, CS DBA <[email protected]> wrote:\n>\n> No parameters, one of them looks like this:\n>\n> [ code snippet ]\n>\n> It's hard to believe this is the real code, because SELECT without\n> INTO will bomb out inside a PL/pgsql function, won't it?\n>\n> But he's using CREATE TABLE xyz_view_m AS\n>\n> So it seems correct to me\n\nOh, right, I missed that.\n\nThat seems pretty mysterious then. But is it possible the function is\ngetting called more times than it should? I notice that it's set up\nas a trigger; is it FOR EACH ROW when it should be a statement-level\ntrigger or something like that? Maybe run EXPLAIN ANALYZE on the\nquery that's invoking the trigger to get some more detail on what's\ngoing on?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Nov 2011 11:40:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function slower than the same code in an sql file"
},
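For reference, the distinction Robert is asking about is set in the trigger definition, not in the function; a statement-level version would look roughly like this (the triggering table name is a guess, since it is not shown in the thread):

    -- refresh once per statement, no matter how many rows it touched
    CREATE TRIGGER refresh_xyz_view_m_trg
        AFTER INSERT OR UPDATE OR DELETE ON enrollment
        FOR EACH STATEMENT
        EXECUTE PROCEDURE refresh_xyz_view_m();

With FOR EACH ROW, a bulk load of N rows would drop and rebuild the table N times, which alone could explain a 10-second statement turning into a minute.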
{
"msg_contents": "On 11/03/2011 09:40 AM, Robert Haas wrote:\n> On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez\n> <[email protected]> wrote:\n>> El 03/11/11 11:42, Robert Haas escribi�:\n>>\n>> On Fri, Oct 28, 2011 at 9:39 AM, CS DBA<[email protected]> wrote:\n>>\n>> No parameters, one of them looks like this:\n>>\n>> [ code snippet ]\n>>\n>> It's hard to believe this is the real code, because SELECT without\n>> INTO will bomb out inside a PL/pgsql function, won't it?\n>>\n>> But he's using CREATE TABLE xyz_view_m AS\n>>\n>> So it seems correct to me\n> Oh, right, I missed that.\n>\n> That seems pretty mysterious then. But is it possible the function is\n> getting called more times than it should? I notice that it's set up\n> as a trigger; is it FOR EACH ROW when it should be a statement-level\n> trigger or something like that? Maybe run EXPLAIN ANALYZE on the\n> query that's invoking the trigger to get some more detail on what's\n> going on?\n\nI'll give it a shot ...\n\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Thu, 03 Nov 2011 16:42:13 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function slower than the same code in an sql file"
}
] |
[
{
"msg_contents": "I have Quadcore server with 8GB RAM\n\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU E5607 @ 2.27GHz\nstepping : 2\ncpu MHz : 1197.000\ncache size : 8192 KB\n\n\nMemTotal: 8148636 kB\nMemFree: 4989116 kB\nBuffers: 8464 kB\nCached: 2565456 kB\nSwapCached: 81196 kB\nActive: 2003796 kB\nInactive: 843896 kB\nActive(anon): 1826176 kB\nInactive(anon): 405964 kB\nActive(file): 177620 kB\nInactive(file): 437932 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 16779260 kB\nSwapFree: 16303356 kB\nDirty: 1400 kB\nWriteback: 0 kB\nAnonPages: 208260 kB\nMapped: 1092008 kB\nShmem: 1958368 kB\nSlab: 224964 kB\nSReclaimable: 60136 kB\nSUnreclaim: 164828 kB\nKernelStack: 2864 kB\nPageTables: 35684 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 20853576 kB\nCommitted_AS: 3672176 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 303292 kB\nVmallocChunk: 34359429308 kB\nHardwareCorrupted: 0 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 6144 kB\nDirectMap2M: 2082816 kB\nDirectMap1G: 6291456 kB\n\nMy database size is\n\npg_size_pretty\n----------------\n 21 GB\n\ni have one table which has data more than 160500460 rows almost.......and i\nhave partioned with yearwise in different schemas\n\n stk_source\n Table \"_100410.stk_source\"\n Column | Type |\nModifiers | Storage | Description\n-----------------------+-----------+-----------------------------------------------------+----------+-------------\n source_id | integer | not null default\nnextval('source_id_seq'::regclass) | plain |\n stock_id | integer\n| | plain |\n source_detail | integer[]\n| | extended |\n transaction_reference | integer\n| | plain |\n is_user_set | boolean | default\nfalse | plain |\nTriggers:\n insert_stk_source_trigger BEFORE INSERT ON stk_source FOR EACH ROW\nEXECUTE PROCEDURE stk_source_insert_trigger()\nChild tables: _100410_200809.stk_source,\n _100410_200910.stk_source,\n _100410_201011.stk_source,\n _100410_201112.stk_source\nHas OIDs: yes\n\nAlso have indexes\n\nss_source_id_pk\" PRIMARY KEY, btree (source_id)\n\"stk_source_stock_id_idx\" btree (stock_id)\n\n\nFirst two years data is very less so no issues\n\nand next two years table size is 2GB & 10 GB respectively.\n\nEXPLAIN select * from stk_source ;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------\n Result (cost=0.00..6575755.39 rows=163132513 width=42)\n -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080 width=45)\n -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179\nwidth=42)\n -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794\nwidth=42)\n -> Seq Scan on stk_source (cost=0.00..6469658.80 rows=160500460\nwidth=42)\n\n\nbecause of this table my total database performance got affected i want to\noptimize the settings by reading the below blogs i have changed some\nconfigurations but no use still sytem is slow\nhttp://comments.gmane.org/gmane.comp.db.postgresql.performance/29561\n\nActually we are using one *PHP* application in that we have used *Postgresql\n9.0.3* database.The server is accessing 40 -50 users daily....so want to\nhave more performance....my config details are below....\n\nCould any one help how to tune the settings for better performance???\n\nThanks in advance..........\n\n# - Memory -\n\n*shared_buffers = 2GB * # min 128kB\n # (change requires\nrestart)\n#temp_buffers = 8MB # 
min 800kB\n*max_prepared_transactions = 0 * # zero disables the feature\n # (change requires\nrestart)\n\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n# It is not advisable to set max_prepared_transactions nonzero unless you\n# actively intend to use prepared transactions.\n\n\n*work_mem = 48MB * # min 64kB\n*maintenance_work_mem = 256MB* # min 1MB\n*max_stack_depth = 6MB * # min 100kB\n\n\n# - Planner Cost Constants -\n\n*seq_page_cost = 1.0 * # measured on an arbitrary scale\n*random_page_cost = 3.0* # same scale as above\n*cpu_tuple_cost = 0.03 * # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\ne*ffective_cache_size = 4GB*\n------------------------------------------------------------------------\n*free -t -m*\n total used free shared buffers cached\nMem: 7957 3111 4845 0 10 2670\n-/+ buffers/cache: 430 7527\nSwap: 16385 458 15927\nTotal: 24343 3570 20773\n\n*ipcs -l*\n\n------ Shared Memory Limits --------\nmax number of segments = 4096\nmax seg size (kbytes) = 18014398509481983\nmax total shared memory (kbytes) = 4611686018427386880\nmin seg size (bytes) = 1\n\n------ Semaphore Limits --------\nmax number of arrays = 1024\nmax semaphores per array = 250\nmax semaphores system wide = 256000\nmax ops per semop call = 32\nsemaphore max value = 32767\n\n------ Messages Limits --------\nmax queues system wide = 3977\nmax size of message (bytes) = 65536\ndefault max size of queue (bytes) = 65536\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\nI have Quadcore server with 8GB RAM\n\n\nvendor_id : GenuineIntel\n\ncpu family : 6\n\nmodel : 44\n\nmodel name : Intel(R) Xeon(R) CPU E5607 @ 2.27GHz\n\nstepping : 2\n\ncpu MHz : 1197.000\n\ncache size : 8192 KB\n\n\n\nMemTotal: 8148636 kB\n\nMemFree: 4989116 kB\n\nBuffers: 8464 kB\n\nCached: 2565456 kB\n\nSwapCached: 81196 kB\n\nActive: 2003796 kB\n\nInactive: 843896 kB\n\nActive(anon): 1826176 kB\n\nInactive(anon): 405964 kB\n\nActive(file): 177620 kB\n\nInactive(file): 437932 kB\n\nUnevictable: 0 kB\n\nMlocked: 0 kB\n\nSwapTotal: 16779260 kB\n\nSwapFree: 16303356 kB\n\nDirty: 1400 kB\n\nWriteback: 0 kB\n\nAnonPages: 208260 kB\n\nMapped: 1092008 kB\n\nShmem: 1958368 kB\n\nSlab: 224964 kB\n\nSReclaimable: 60136 kB\n\nSUnreclaim: 164828 kB\n\nKernelStack: 2864 kB\n\nPageTables: 35684 kB\n\nNFS_Unstable: 0 kB\n\nBounce: 0 kB\n\nWritebackTmp: 0 kB\n\nCommitLimit: 20853576 kB\n\nCommitted_AS: 3672176 kB\n\nVmallocTotal: 34359738367 kB\n\nVmallocUsed: 303292 kB\n\nVmallocChunk: 34359429308 kB\n\nHardwareCorrupted: 0 kB\n\nHugePages_Total: 0\n\nHugePages_Free: 0\n\nHugePages_Rsvd: 0\n\nHugePages_Surp: 0\n\nHugepagesize: 2048 kB\n\nDirectMap4k: 6144 kB\n\nDirectMap2M: 2082816 kB\n\nDirectMap1G: 6291456 kB\n\nMy database size is\n\npg_size_pretty \n----------------\n 21 GB\n\ni have one table which has data more than 160500460 rows almost.......and i have partioned with yearwise in different schemas\n\n stk_source\n Table \"_100410.stk_source\"\n Column | Type | Modifiers | Storage | Description \n-----------------------+-----------+-----------------------------------------------------+----------+-------------\n source_id | integer | not null default nextval('source_id_seq'::regclass) | plain | \n stock_id | integer | | plain | \n source_detail | integer[] | | extended | \n transaction_reference | integer | | plain | \n is_user_set | boolean | default false | 
plain | \nTriggers:\n insert_stk_source_trigger BEFORE INSERT ON stk_source FOR EACH ROW EXECUTE PROCEDURE stk_source_insert_trigger()\nChild tables: _100410_200809.stk_source,\n _100410_200910.stk_source,\n _100410_201011.stk_source,\n _100410_201112.stk_source\nHas OIDs: yesAlso have indexesss_source_id_pk\" PRIMARY KEY, btree (source_id)\"stk_source_stock_id_idx\" btree (stock_id)\n\nFirst two years data is very less so no issues\n\nand next two years table size is 2GB & 10 GB respectively.\n\nEXPLAIN select * from stk_source ;\n QUERY PLAN \n-------------------------------------------------------------------------------------\n Result (cost=0.00..6575755.39 rows=163132513 width=42)\n -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080 width=45)\n -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179 width=42)\n -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794 width=42)\n -> Seq Scan on stk_source (cost=0.00..6469658.80 rows=160500460 width=42)\n\n\nbecause of this table my total database performance got affected i want \nto optimize the settings by reading the below blogs i have changed some \nconfigurations but no use still sytem is slow\nhttp://comments.gmane.org/gmane.comp.db.postgresql.performance/29561\n \nActually we are using one PHP application in that we have used Postgresql 9.0.3 database.The server is accessing 40 -50 users \ndaily....so want to have more performance....my config details are \nbelow....\n\nCould any one help how to tune the settings for better performance???\n\nThanks in advance..........\n\n# - Memory -\n\nshared_buffers = 2GB # min 128kB\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\nmax_prepared_transactions = 0 # zero disables the feature\n # (change requires restart)\n\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n# It is not advisable to set max_prepared_transactions nonzero unless you\n# actively intend to use prepared transactions.\n\n\nwork_mem = 48MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nmax_stack_depth = 6MB # min 100kB\n\n\n# - Planner Cost Constants -\n\nseq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 3.0 # same scale as above\ncpu_tuple_cost = 0.03 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 4GB\n------------------------------------------------------------------------free -t -m\n total used free shared buffers cached\nMem: 7957 3111 4845 0 10 2670\n-/+ buffers/cache: 430 7527\nSwap: 16385 458 15927\nTotal: 24343 3570 20773\n\nipcs -l\n\n------ Shared Memory Limits --------\nmax number of segments = 4096\nmax seg size (kbytes) = 18014398509481983\nmax total shared memory (kbytes) = 4611686018427386880\nmin seg size (bytes) = 1\n\n------ Semaphore Limits --------\nmax number of arrays = 1024\nmax semaphores per array = 250\nmax semaphores system wide = 256000\nmax ops per semop call = 32\nsemaphore max value = 32767\n\n------ Messages Limits --------\nmax queues system wide = 3977\nmax size of message (bytes) = 65536\ndefault max size of queue (bytes) = 65536\n\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Fri, 28 Oct 2011 12:32:03 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Problem with postgresql 9.03, 8GB RAM, Quadcore Processor\n\tServer--Need help!!!!!!!"
},
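The EXPLAIN above shows every yearly child being scanned, which is unavoidable for an unfiltered SELECT *. For the planner to skip partitions, queries need a predicate on the partitioning column and each child needs a CHECK constraint describing its slice, with constraint_exclusion enabled. A sketch only, since the real partition key is not shown and the column name here is invented:

    # postgresql.conf (partition is the default setting)
    constraint_exclusion = partition

    -- on each child, e.g. the 2011-12 schema
    ALTER TABLE _100410_201112.stk_source
        ADD CONSTRAINT stk_source_201112_chk
        CHECK (fin_year = '2011-12');   -- hypothetical partition-key column

Queries against the parent then have to carry the same predicate (WHERE fin_year = '2011-12') for the other children to be excluded from the plan.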
{
"msg_contents": "what sort of queries you are running against it ? the select * from..\nis not really (hopefully) a query you are running from your php app.\n",
"msg_date": "Fri, 28 Oct 2011 08:58:26 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "Actually we are using various views and functions to get the info for\nreporting purpose in that views or functions we have used or joined the\nabove table mentioned.\n\nI thought of will get reply from any one from the lists so only i put anyway\ni will continue with only pgsql-performance mailing lists.\n\nRegards\nHashim\n\nOn Fri, Oct 28, 2011 at 1:28 PM, Gregg Jaskiewicz <[email protected]> wrote:\n\n> what sort of queries you are running against it ? the select * from..\n> is not really (hopefully) a query you are running from your php app.\n>\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\nActually we are using various views and functions to get the info for reporting purpose in that views or functions we have used or joined the above table mentioned.I thought of will get reply from any one from the lists so only i put anyway i will continue with only pgsql-performance mailing lists.\nRegardsHashimOn Fri, Oct 28, 2011 at 1:28 PM, Gregg Jaskiewicz <[email protected]> wrote:\n\nwhat sort of queries you are running against it ? the select * from..\nis not really (hopefully) a query you are running from your php app.\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Fri, 28 Oct 2011 13:51:22 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "On 28 October 2011 09:02, Mohamed Hashim <[email protected]> wrote:\n> EXPLAIN select * from stk_source ;\n> QUERY\n> PLAN\n> -------------------------------------------------------------------------------------\n> Result (cost=0.00..6575755.39 rows=163132513 width=42)\n> -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n> -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080 width=45)\n> -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179\n> width=42)\n> -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794\n> width=42)\n> -> Seq Scan on stk_source (cost=0.00..6469658.80 rows=160500460\n> width=42)\n\nThat plan gives you the best possible performance given your query.\nYour example probably doesn't fit the problem you're investigating.\n\n-- \nIf you can't see the forest for the trees,\nCut the trees and you'll see there is no forest.\n",
"msg_date": "Fri, 28 Oct 2011 13:37:09 +0200",
"msg_from": "Alban Hertroys <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "On 28 October 2011 13:37, Alban Hertroys <[email protected]> wrote:\n> On 28 October 2011 09:02, Mohamed Hashim <[email protected]> wrote:\n\nPlease don't cross-post to mailing lists for multiple projects.\n\n-- \nIf you can't see the forest for the trees,\nCut the trees and you'll see there is no forest.\n",
"msg_date": "Fri, 28 Oct 2011 14:08:10 +0200",
"msg_from": "Alban Hertroys <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "Thanks Alban & Gregg.\n\n\ni will describe little more about that table\n\n\n - We are using PHP application with Apache server & Postgresql 9.0.3 in a\n dedicated server.\n - stk_source table is mainly used to track the transactions from parent\n to child\n\n Table \"_100410.stk_source\"\n Column | Type |\nModifiers\n-----------------------+-----------+-----------------------------------------------------\n source_id | integer | not null default\nnextval('source_id_seq'::regclass)\n stock_id | integer |\n source_detail | integer[] |\n transaction_reference | integer |\n is_user_set | boolean | default false\n\n\nWe store transaction_type and transaction_id in source_detail column which\nis an interger array for each transactions\n\nWe use various functions to get the info based on transaction type\n\nFor eg:\n\nIn function to get the batch details we have used as\n\nFOR batch_id_rec in select distinct(batch_id) from order_status_batches osb\njoin batch_status_stock bss on osb.status_id=bss.batch_status_id where\nstock_id in (select source_detail[2] from stk_source where stock_id IN\n(SELECT\nstd_i.stock_id\n\n FROM order_details_shipments\nods\n\n JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id AND\nods.order_id=sps.order_id AND ods.item_id=sps.item_id\n JOIN stock_transaction_detail_106 std ON\nstd.transaction_id=sps.transaction_id\n JOIN stock_transaction_detail_106 std_i ON std.stock_id =\nstd_i.stock_id AND std_i.transaction_type = 'i'::bpchar\n WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP\n\n...............................\n\n................................\n\n......................................\n\nSimilarly we have used in php pages and views\n\nSELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS\ndate,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS\nrate,\nFROM acc_bill_items_106 abi\n JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND\nabd.bill_status='act'\n JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id and\nss.source_detail[1]=1\n JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id\n JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id\n JOIN master_product_106_table mp ON mp.product_id= sd106.product_id\n JOIN receipt_item_price_106_table rip ON rip.receipt_item_id=abi.item_id\n WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;\n\nSo where ever we have JOIN or used in functions the performance is very low\nsome times query returns results takes more than 45 mints.\n\nNormally if we fetch Select * from some_table..........it returns very fast\nbecause it has less records.\n\nBut when i put Select * from stk_source or to find the actual_cost\n\nEXPLAIN ANALYZE SELECT * FROM stk_source;\n\ni couln't able to retrieve the planner details waited for more than 50 to 60\nmints\n\nso question is in spite of having good server with high configuration and\nalso changed the postgresql configuration settings then why the system is\ncrawling?\n\n\n*What are the other parameters have to look out or what are the other config\nsettings to be change to have the best performance??*\n\nKindly help to sort out this problem......\n\n\nThanks in advance..................!!!!!!\n\nRegards\nHashim\n\n\n\n\n\n\n\nOn Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys <[email protected]> wrote:\n\n> On 28 October 2011 09:02, Mohamed Hashim <[email protected]> wrote:\n> > EXPLAIN select * from stk_source ;\n> > QUERY\n> > PLAN\n> >\n> 
-------------------------------------------------------------------------------------\n> > Result (cost=0.00..6575755.39 rows=163132513 width=42)\n> > -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n> > -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080\n> width=45)\n> > -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179\n> > width=42)\n> > -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794\n> > width=42)\n> > -> Seq Scan on stk_source (cost=0.00..6469658.80\n> rows=160500460\n> > width=42)\n>\n> That plan gives you the best possible performance given your query.\n> Your example probably doesn't fit the problem you're investigating.\n>\n> --\n> If you can't see the forest for the trees,\n> Cut the trees and you'll see there is no forest.\n>\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Sat, 29 Oct 2011 09:40:12 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "Any idea or suggestions how to improve my database best\nperformance.................???\n\nRegards\nHashim\n\nOn Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim <[email protected]> wrote:\n\n> Thanks Alban & Gregg.\n>\n>\n> i will describe little more about that table\n>\n>\n> - We are using PHP application with Apache server & Postgresql 9.0.3\n> in a dedicated server.\n> - stk_source table is mainly used to track the transactions from\n> parent to child\n>\n> Table \"_100410.stk_source\"\n> Column | Type |\n> Modifiers\n> -----------------------+-----------+-----------------------------------------------------\n>\n> source_id | integer | not null default\n> nextval('source_id_seq'::regclass)\n> stock_id | integer |\n> source_detail | integer[] |\n> transaction_reference | integer |\n> is_user_set | boolean | default false\n>\n>\n> We store transaction_type and transaction_id in source_detail column which\n> is an interger array for each transactions\n>\n> We use various functions to get the info based on transaction type\n>\n> For eg:\n>\n> In function to get the batch details we have used as\n>\n> FOR batch_id_rec in select distinct(batch_id) from order_status_batches\n> osb join batch_status_stock bss on osb.status_id=bss.batch_status_id where\n> stock_id in (select source_detail[2] from stk_source where stock_id IN\n> (SELECT\n> std_i.stock_id\n>\n> FROM order_details_shipments\n> ods\n>\n> JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id AND\n> ods.order_id=sps.order_id AND ods.item_id=sps.item_id\n> JOIN stock_transaction_detail_106 std ON\n> std.transaction_id=sps.transaction_id\n> JOIN stock_transaction_detail_106 std_i ON std.stock_id =\n> std_i.stock_id AND std_i.transaction_type = 'i'::bpchar\n> WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP\n>\n> ...............................\n>\n> ................................\n>\n> ......................................\n>\n> Similarly we have used in php pages and views\n>\n> SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS\n> date,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS\n> rate,\n> FROM acc_bill_items_106 abi\n> JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND\n> abd.bill_status='act'\n> JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id and\n> ss.source_detail[1]=1\n> JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id\n> JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id\n> JOIN master_product_106_table mp ON mp.product_id= sd106.product_id\n> JOIN receipt_item_price_106_table rip ON\n> rip.receipt_item_id=abi.item_id\n> WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;\n>\n> So where ever we have JOIN or used in functions the performance is very\n> low some times query returns results takes more than 45 mints.\n>\n> Normally if we fetch Select * from some_table..........it returns very\n> fast because it has less records.\n>\n> But when i put Select * from stk_source or to find the actual_cost\n>\n> EXPLAIN ANALYZE SELECT * FROM stk_source;\n>\n> i couln't able to retrieve the planner details waited for more than 50 to\n> 60 mints\n>\n> so question is in spite of having good server with high configuration and\n> also changed the postgresql configuration settings then why the system is\n> crawling?\n>\n>\n> *What are the other parameters have to look out or what are the other\n> config settings to be change to have the best performance??*\n>\n> Kindly help to sort out this 
problem......\n>\n>\n> Thanks in advance..................!!!!!!\n>\n> Regards\n> Hashim\n>\n>\n>\n>\n>\n>\n>\n>\n> On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys <[email protected]>wrote:\n>\n>> On 28 October 2011 09:02, Mohamed Hashim <[email protected]> wrote:\n>> > EXPLAIN select * from stk_source ;\n>> > QUERY\n>> > PLAN\n>> >\n>> -------------------------------------------------------------------------------------\n>> > Result (cost=0.00..6575755.39 rows=163132513 width=42)\n>> > -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n>> > -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080\n>> width=45)\n>> > -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179\n>> > width=42)\n>> > -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794\n>> > width=42)\n>> > -> Seq Scan on stk_source (cost=0.00..6469658.80\n>> rows=160500460\n>> > width=42)\n>>\n>> That plan gives you the best possible performance given your query.\n>> Your example probably doesn't fit the problem you're investigating.\n>>\n>> --\n>> If you can't see the forest for the trees,\n>> Cut the trees and you'll see there is no forest.\n>>\n>\n>\n>\n> --\n> Regards\n> Mohamed Hashim.N\n> Mobile:09894587678\n>\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\nAny idea or suggestions how to improve my database best performance.................???RegardsHashimOn Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim <[email protected]> wrote:\nThanks Alban & Gregg.i will describe little more about that table\nWe are using PHP application with Apache server & Postgresql 9.0.3 in a dedicated server.\nstk_source table is mainly used to track the transactions from parent to child Table \"_100410.stk_source\" Column | Type | Modifiers \n\n-----------------------+-----------+----------------------------------------------------- source_id | integer | not null default nextval('source_id_seq'::regclass) stock_id | integer | \n\n\n source_detail | integer[] | transaction_reference | integer | is_user_set | boolean | default falseWe store transaction_type and transaction_id in source_detail column which is an interger array for each transactions\nWe use various functions to get the info based on transaction type For eg:In function to get the batch details we have used asFOR batch_id_rec in select distinct(batch_id) from order_status_batches osb join batch_status_stock bss on osb.status_id=bss.batch_status_id where stock_id in (select source_detail[2] from stk_source where stock_id IN (SELECT std_i.stock_id \n\n\n FROM order_details_shipments ods JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id AND ods.order_id=sps.order_id AND ods.item_id=sps.item_id \n\n\n JOIN stock_transaction_detail_106 std ON std.transaction_id=sps.transaction_id JOIN stock_transaction_detail_106 std_i ON std.stock_id = std_i.stock_id AND std_i.transaction_type = 'i'::bpchar\n\n\n WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP.....................................................................................................Similarly we have used in php pages and views\nSELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS rate,FROM acc_bill_items_106 abi JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND abd.bill_status='act'\n\n\n JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id and ss.source_detail[1]=1 JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id JOIN stock_details_106_table sd106 ON 
sd106.stock_id=std.stock_id\n\n\n JOIN master_product_106_table mp ON mp.product_id= sd106.product_id JOIN receipt_item_price_106_table rip ON rip.receipt_item_id=abi.item_id WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;\nSo where ever we have JOIN or used in functions the performance is very low some times query returns results takes more than 45 mints.Normally if we fetch Select * from some_table..........it returns very fast because it has less records.\nBut when i put Select * from stk_source or to find the actual_costEXPLAIN ANALYZE SELECT * FROM stk_source;i couln't able to retrieve the planner details waited for more than 50 to 60 mints\n\n\nso question is in spite of having good server with high configuration and also changed the postgresql configuration settings then why the system is crawling?What are the other parameters have to look out or what are the other config settings to be change to have the best performance??\nKindly help to sort out this problem......Thanks in advance..................!!!!!!RegardsHashim \n\nOn Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys <[email protected]> wrote:\nOn 28 October 2011 09:02, Mohamed Hashim <[email protected]> wrote:\n\n\n\n> EXPLAIN select * from stk_source ;\n> QUERY\n> PLAN\n> -------------------------------------------------------------------------------------\n> Result (cost=0.00..6575755.39 rows=163132513 width=42)\n> -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n> -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080 width=45)\n> -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179\n> width=42)\n> -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794\n> width=42)\n> -> Seq Scan on stk_source (cost=0.00..6469658.80 rows=160500460\n> width=42)\n\nThat plan gives you the best possible performance given your query.\nYour example probably doesn't fit the problem you're investigating.\n\n--\nIf you can't see the forest for the trees,\nCut the trees and you'll see there is no forest.\n-- RegardsMohamed Hashim.N\nMobile:09894587678\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Tue, 1 Nov 2011 08:33:51 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "Hi Hashim,\n\nAfter I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of \nproblems with queries with many joins. Queries that used to take 1ms \nsuddenly take half a minute for no apparent reason.\n\nI have 72GB which I think makes the planner go bonkers and be too eager \ndoing a seq scan. I tried to compensate with ridiculously low \ncpu_index_tuple_cost but it had little effect.\n\nIf I were you, I would try to remove some of the joined tables and see \nwhat happens. When does it start to run very slowly? How does the plan \nlook right before it's super slow?\n\n\nOne workaround I've done is if something looking like this....\n\nselect\n ...\nfrom\n table_linking_massive_table tlmt\n ,massive_table mt\n ,some_table1 st1\n ,some_table2 st2\n ,some_table3 st3\n ,some_table4 st4\nwhere\n tlmt.group_id = 123223 AND\n mt.id = tmlt.massive_table AND\n st1.massive_table = mt.id AND\n st2.massive_table = mt.id AND\n st3.massive_table = mt.id AND\n st4.massive_table = mt.id\n\n...suddenly gets slow, it has helped to rewrite it as....\n\nselect\n ...\nfrom\n (\n select\n ...\n from\n table_linking_massive_table tlmt\n ,massive_table mt\n where\n tlmt.group_id = 123223 AND\n mt.id = tmlt.massive_table AND\n ) as mt\n ,some_table1 st1\n ,some_table2 st2\n ,some_table3 st3\n ,some_table4 st4\nwhere\n tlmt.group_id = 123223 AND\n mt.id = tmlt.massive_table AND\n st1.massive_table = mt.id AND\n st2.massive_table = mt.id AND\n st3.massive_table = mt.id AND\n st4.massive_table = mt.id\n\nThis seems to force Postgres to evaluate the mt subselect first and not \nget ideas about how to join. It was a few years ago since I used Oracle \nbut if I remember correctly Oracle looked at the order of the things in \nthe where section. In this example Oracle would be encourage to use tlmt \nas base table and take it from there. It doesn't seem to me that \nPostgres cares about this order. Not caring would possibly be more \nforgiving with automatically generated sql but it also implies the \nplanner always makes the best decisions which it obviously is not. 
I \nmight be talking rubbish here, these are my empirical observations.\n\nI'm sure you'll get better answers, but this is what I've done.\n\nI assume you have done your analyze & indexing correctly etc.\n\nBest regards,\nMarcus\n\nOn 11/1/11 4:03 , Mohamed Hashim wrote:\n> Any idea or suggestions how to improve my database best \n> performance.................???\n>\n> Regards\n> Hashim\n>\n> On Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Thanks Alban & Gregg.\n>\n>\n> i will describe little more about that table\n>\n> * We are using PHP application with Apache server & Postgresql\n> 9.0.3 in a dedicated server.\n> * stk_source table is mainly used to track the transactions\n> from parent to child\n>\n> Table \"_100410.stk_source\"\n> Column | Type | Modifiers\n> -----------------------+-----------+-----------------------------------------------------\n>\n>\n> source_id | integer | not null default\n> nextval('source_id_seq'::regclass)\n> stock_id | integer |\n> source_detail | integer[] |\n> transaction_reference | integer |\n> is_user_set | boolean | default false\n>\n>\n> We store transaction_type and transaction_id in source_detail\n> column which is an interger array for each transactions\n>\n> We use various functions to get the info based on transaction type\n>\n> For eg:\n>\n> In function to get the batch details we have used as\n>\n> FOR batch_id_rec in select distinct(batch_id) from\n> order_status_batches osb join batch_status_stock bss on\n> osb.status_id=bss.batch_status_id where stock_id in (select\n> source_detail[2] from stk_source where stock_id IN (SELECT\n> std_i.stock_id\n> FROM order_details_shipments ods\n> JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id\n> AND ods.order_id=sps.order_id AND ods.item_id=sps.item_id\n> JOIN stock_transaction_detail_106 std ON\n> std.transaction_id=sps.transaction_id\n> JOIN stock_transaction_detail_106 std_i ON std.stock_id =\n> std_i.stock_id AND std_i.transaction_type = 'i'::bpchar\n> WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP\n>\n> ...............................\n>\n> ................................\n>\n> ......................................\n>\n> Similarly we have used in php pages and views\n>\n> SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy')\n> AS date,mp.product_desc as\n> product_desc,std.quantity,std.area,rip.price AS rate,\n> FROM acc_bill_items_106 abi\n> JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id\n> AND abd.bill_status='act'\n> JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id\n> and ss.source_detail[1]=1\n> JOIN stock_transaction_detail_106_table std ON\n> std.stock_id=ss.stock_id\n> JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id\n> JOIN master_product_106_table mp ON mp.product_id=\n> sd106.product_id\n> JOIN receipt_item_price_106_table rip ON\n> rip.receipt_item_id=abi.item_id\n> WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;\n>\n> So where ever we have JOIN or used in functions the performance is\n> very low some times query returns results takes more than 45 mints.\n>\n> Normally if we fetch Select * from some_table..........it returns\n> very fast because it has less records.\n>\n> But when i put Select * from stk_source or to find the actual_cost\n>\n> EXPLAIN ANALYZE SELECT * FROM stk_source;\n>\n> i couln't able to retrieve the planner details waited for more\n> than 50 to 60 mints\n>\n> so question is in spite of having good server with high\n> 
configuration and also changed the postgresql configuration\n> settings then why the system is crawling?\n>\n>\n> *What are the other parameters have to look out or what are the\n> other config settings to be change to have the best performance??*\n>\n> Kindly help to sort out this problem......\n>\n>\n> Thanks in advance..................!!!!!!\n>\n> Regards\n> Hashim\n>\n>\n>\n>\n>\n>\n>\n>\n> On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> On 28 October 2011 09:02, Mohamed Hashim <[email protected]\n> <mailto:[email protected]>> wrote:\n> > EXPLAIN select * from stk_source ;\n> > QUERY\n> > PLAN\n> >\n> -------------------------------------------------------------------------------------\n> > Result (cost=0.00..6575755.39 rows=163132513 width=42)\n> > -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n> > -> Seq Scan on stk_source (cost=0.00..42.40\n> rows=1080 width=45)\n> > -> Seq Scan on stk_source (cost=0.00..20928.37\n> rows=519179\n> > width=42)\n> > -> Seq Scan on stk_source (cost=0.00..85125.82\n> rows=2111794\n> > width=42)\n> > -> Seq Scan on stk_source (cost=0.00..6469658.80\n> rows=160500460\n> > width=42)\n>\n> That plan gives you the best possible performance given your\n> query.\n> Your example probably doesn't fit the problem you're\n> investigating.\n>\n> --\n> If you can't see the forest for the trees,\n> Cut the trees and you'll see there is no forest.\n>\n>\n>\n>\n> -- \n> Regards\n> Mohamed Hashim.N\n> Mobile:09894587678\n>\n>\n>\n>\n> -- \n> Regards\n> Mohamed Hashim.N\n> Mobile:09894587678\n\n\n\n\n\n\n\n\nHi Hashim,\n\nAfter I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of\nproblems with queries with many joins. Queries that used to take 1ms\nsuddenly take half a minute for no apparent reason.\n\nI have 72GB which I think makes the planner go bonkers and be too eager\ndoing a seq scan. I tried to compensate with ridiculously low\ncpu_index_tuple_cost but it had little effect.\n\nIf I were you, I would try to remove some of the joined tables and see\nwhat happens. When does it start to run very slowly? How does the plan\nlook right before it's super slow?\n\n\nOne workaround I've done is if something looking like this....\n\nselect\n ...\nfrom\n table_linking_massive_table tlmt\n ,massive_table mt\n ,some_table1 st1\n ,some_table2 st2\n ,some_table3 st3\n ,some_table4 st4\nwhere\n tlmt.group_id = 123223 AND\n mt.id = tmlt.massive_table AND\n st1.massive_table = mt.id AND\n st2.massive_table = mt.id AND\n st3.massive_table = mt.id AND\n st4.massive_table = mt.id \n \n...suddenly gets slow, it has helped to rewrite it as....\n\nselect\n ...\nfrom\n ( \n select\n ...\n from\n table_linking_massive_table tlmt\n ,massive_table mt\n where\n tlmt.group_id = 123223 AND\n mt.id = tmlt.massive_table AND\n ) as mt\n ,some_table1 st1\n ,some_table2 st2\n ,some_table3 st3\n ,some_table4 st4\nwhere\n tlmt.group_id = 123223 AND\n mt.id = tmlt.massive_table AND\n st1.massive_table = mt.id AND\n st2.massive_table = mt.id AND\n st3.massive_table = mt.id AND\n st4.massive_table = mt.id \n\nThis seems to force Postgres to evaluate the mt subselect first and not\nget ideas about how to join. It was a few years ago since I used Oracle\nbut if I remember correctly Oracle looked at the order of the things in\nthe where section. In this example Oracle would be encourage to use\ntlmt as base table and take it from there. It doesn't seem to me that\nPostgres cares about this order. 
Not caring would possibly be more\nforgiving with automatically generated sql but it also implies the\nplanner always makes the best decisions which it obviously is not. I\nmight be talking rubbish here, these are my empirical observations.\n\nI'm sure you'll get better answers, but this is what I've done.\n\nI assume you have done your analyze & indexing correctly etc.\n\nBest regards,\nMarcus\n\nOn 11/1/11 4:03 , Mohamed Hashim wrote:\nAny idea or\nsuggestions how to improve my database best\nperformance.................???\n\nRegards\nHashim\n\nOn Sat, Oct 29, 2011 at 9:40 AM, Mohamed\nHashim <[email protected]>\nwrote:\nThanks Alban & Gregg.\n\n\ni will describe little more about that table\n\n\n\nWe are using PHP application with Apache server &\nPostgresql 9.0.3 in a dedicated server.\nstk_source table is mainly used to track the transactions\nfrom parent to child\n\n Table\n\"_100410.stk_source\"\n Column | Type | \nModifiers \n\n-----------------------+-----------+-----------------------------------------------------\n \n source_id | integer | not null default\nnextval('source_id_seq'::regclass)\n\n stock_id | integer | \n source_detail | integer[] | \n transaction_reference | integer | \n is_user_set | boolean | default false\n\n\nWe store transaction_type and transaction_id in source_detail column\nwhich is an interger array for each transactions\n\nWe use various functions to get the info based on transaction type \n\nFor eg:\n\nIn function to get the batch details we have used as\n\nFOR batch_id_rec in select distinct(batch_id) from order_status_batches\nosb join batch_status_stock bss on osb.status_id=bss.batch_status_id\nwhere stock_id in (select source_detail[2] from stk_source where\nstock_id IN (SELECT\nstd_i.stock_id \n \n FROM order_details_shipments\nods \n \n JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id AND\nods.order_id=sps.order_id AND ods.item_id=sps.item_id \n JOIN stock_transaction_detail_106 std ON\nstd.transaction_id=sps.transaction_id\n JOIN stock_transaction_detail_106 std_i ON std.stock_id =\nstd_i.stock_id AND std_i.transaction_type = 'i'::bpchar\n WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP\n\n...............................\n\n................................\n\n......................................\n\nSimilarly we have used in php pages and views\n\nSELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS\ndate,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS\nrate,\nFROM acc_bill_items_106 abi\n JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND\nabd.bill_status='act'\n JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id and\nss.source_detail[1]=1\n JOIN stock_transaction_detail_106_table std ON\nstd.stock_id=ss.stock_id\n JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id\n JOIN master_product_106_table mp ON mp.product_id= sd106.product_id\n JOIN receipt_item_price_106_table rip ON\nrip.receipt_item_id=abi.item_id\n WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;\n\nSo where ever we have JOIN or used in functions the performance is very\nlow some times query returns results takes more than 45 mints.\n\nNormally if we fetch Select * from some_table..........it returns very\nfast because it has less records.\n\nBut when i put Select * from stk_source or to find the actual_cost\n\nEXPLAIN ANALYZE SELECT * FROM stk_source;\n\ni couln't able to retrieve the planner details waited for more than 50\nto 60 mints\n\nso question is in spite of having good 
server with high configuration\nand also changed the postgresql configuration settings then why the\nsystem is crawling?\n\n\nWhat are the other parameters have to look out or what are the\nother config settings to be change to have the best performance??\n\nKindly help to sort out this problem......\n\n\nThanks in advance..................!!!!!!\n\nRegards\nHashim\n\n\n\n\n\n\n \n\n\nOn Fri, Oct 28, 2011 at 5:07 PM, Alban\nHertroys <[email protected]>\nwrote:\n\nOn 28 October 2011 09:02, Mohamed Hashim <[email protected]> wrote:\n> EXPLAIN select * from stk_source ;\n> QUERY\n> PLAN\n>\n-------------------------------------------------------------------------------------\n> Result (cost=0.00..6575755.39 rows=163132513 width=42)\n> -> Append (cost=0.00..6575755.39 rows=163132513 width=42)\n> -> Seq Scan on stk_source (cost=0.00..42.40\nrows=1080 width=45)\n> -> Seq Scan on stk_source (cost=0.00..20928.37\nrows=519179\n> width=42)\n> -> Seq Scan on stk_source (cost=0.00..85125.82\nrows=2111794\n> width=42)\n> -> Seq Scan on stk_source (cost=0.00..6469658.80\nrows=160500460\n> width=42)\n\n\nThat plan gives you the best possible performance given your query.\nYour example probably doesn't fit the problem you're investigating.\n\n--\nIf you can't see the forest for the trees,\nCut the trees and you'll see there is no forest.\n\n\n\n\n\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\n\n\n\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Tue, 01 Nov 2011 10:57:46 +0100",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore\n\tProcessor Server--Need help!!!!!!!"
},
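A note on Marcus's workaround above: one common way to make the inner join behave as its own unit is an OFFSET 0 fence, which keeps PostgreSQL from pulling the subquery back up into the outer join search. The sketch below reuses Marcus's placeholder table and column names, so it is illustrative only, not a tested statement against a real schema:

-- Sketch using Marcus's placeholder schema; OFFSET 0 stops the subquery from being flattened
-- back into the outer query, so the tlmt/mt join is planned and executed first.
SELECT mt.id, st1.id AS st1_id
FROM (
    SELECT mt.id
    FROM table_linking_massive_table tlmt
    JOIN massive_table mt ON mt.id = tlmt.massive_table
    WHERE tlmt.group_id = 123223
    OFFSET 0
) AS mt
JOIN some_table1 st1 ON st1.massive_table = mt.id
JOIN some_table2 st2 ON st2.massive_table = mt.id;

Whether this helps depends on why the planner chose a bad join order in the first place, which is what the follow-up messages below dig into.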
{
"msg_contents": "On Tue, Nov 01, 2011 at 08:33:51AM +0530, Mohamed Hashim wrote:\n> Any idea or suggestions how to improve my database best\n> performance.................???\n> \n> Regards\n> Hashim\n> \nHi Hashim,\n\nIgnoring the description of your tables, you should probably try\nupdating to the latest release 9.0.5. You are two point releases\nback and they really, really, really fix bugs in each release or\nthey do not bother releasing.\n\nRegards,\nKen\n",
"msg_date": "Tue, 1 Nov 2011 07:53:42 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "On 1 Listopad 2011, 10:57, Marcus Engene wrote:\n> Hi Hashim,\n>\n> One workaround I've done is if something looking like this....\n>\n> select\n> ...\n> from\n> table_linking_massive_table tlmt\n> ,massive_table mt\n> ,some_table1 st1\n> ,some_table2 st2\n> ,some_table3 st3\n> ,some_table4 st4\n> where\n> tlmt.group_id = 123223 AND\n> mt.id = tmlt.massive_table AND\n> st1.massive_table = mt.id AND\n> st2.massive_table = mt.id AND\n> st3.massive_table = mt.id AND\n> st4.massive_table = mt.id\n>\n> ...suddenly gets slow, it has helped to rewrite it as....\n>\n> select\n> ...\n> from\n> (\n> select\n> ...\n> from\n> table_linking_massive_table tlmt\n> ,massive_table mt\n> where\n> tlmt.group_id = 123223 AND\n> mt.id = tmlt.massive_table AND\n> ) as mt\n> ,some_table1 st1\n> ,some_table2 st2\n> ,some_table3 st3\n> ,some_table4 st4\n> where\n> tlmt.group_id = 123223 AND\n> mt.id = tmlt.massive_table AND\n> st1.massive_table = mt.id AND\n> st2.massive_table = mt.id AND\n> st3.massive_table = mt.id AND\n> st4.massive_table = mt.id\n>\n\nCan you please post EXPLAIN ANALYZE of those queries? It's difficult to\nsee what's wrong when we don't know the plan (and the actual stats\ngathered during execution). Use explain.depesz.com to post the output.\n\nTomas\n\n",
"msg_date": "Tue, 1 Nov 2011 14:13:34 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
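For reference, a minimal way to capture the plan Tomas is asking for from psql, so the output can be pasted into explain.depesz.com (the output file path is only an example):

\o /tmp/slow_query_plan.txt
EXPLAIN (ANALYZE, BUFFERS) <the slow query goes here>;
\o

The BUFFERS option is available from 9.0 onward and shows how much of the runtime is spent reading blocks from disk versus shared buffers, which helps tell a planning problem apart from an I/O problem.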
{
"msg_contents": "Marcus Engene <[email protected]> writes:\n> After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of \n> problems with queries with many joins. Queries that used to take 1ms \n> suddenly take half a minute for no apparent reason.\n\nCould we see a concrete test case, rather than hand waving? If there's\nreally a problem in 9.0, it's impossible to fix it on so little detail.\n\n> One workaround I've done is if something looking like this....\n\nThe only way that should make a difference is if the total number\nof tables in the query exceeds from_collapse_limit (or maybe\njoin_collapse_limit, depending on exactly how you wrote the query).\nPerhaps you'd been running with nonstandard values of those settings\nin 8.x, and forgot to transfer them into the new DB?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Nov 2011 09:43:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,\n\tQuadcore Processor Server--Need help!!!!!!!"
},
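To make the check Tom suggests concrete, the planner limits can be compared between the old and new clusters directly from psql; the numbers in the comments are the stock defaults, not recommendations:

SHOW from_collapse_limit;   -- default 8
SHOW join_collapse_limit;   -- default 8
SHOW geqo_threshold;        -- default 12; GEQO takes over above this many relations

-- If the 8.x cluster ran with larger values, they can be restored per session
-- for testing, or permanently in postgresql.conf:
SET from_collapse_limit = 12;
SET join_collapse_limit = 12;

If the values match on both clusters, the difference in plans has to come from somewhere else, which is why the actual EXPLAIN ANALYZE output is needed.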
{
"msg_contents": "Dear All\n\nThanks for your suggestions & replies.\n\nThe below are the sample query which i put for particular one bill_id\n\nEXPLAIN ANALYZE SELECT abd.bill_no as\nbill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as\nproduct_desc,std.quantity,std.area,rip.price AS rate\nFROM acc_bill_items_106 abi\n JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id\n JOIN stk_source ss ON ss.source_detail[1]=1 and\nss.source_detail[2]=abi.item_id\n JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id\n JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id\n JOIN master_product_106 mp ON mp.product_id= sd106.product_id\n JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id\n WHERE abi.bill_id=12680;\n\n\n\nQUERY\nPLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..7230339.59 rows=54369 width=39) (actual\ntime=158156.895..158157.206 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7149579.10 rows=8192 width=32) (actual\ntime=158156.863..158157.172 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7119922.60 rows=8192 width=27)\n(actual time=158156.855..158157.164 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7086865.70 rows=8192 width=19)\n(actual time=158156.835..158157.143 rows=1 loops=1)\n Join Filter: (abi.item_id = ss.source_detail[2])\n -> Nested Loop (cost=0.00..604.54 rows=2 width=23)\n(actual time=2.782..2.786 rows=1 loops=1)\n -> Index Scan using acc_bill_details_106_pkey\non acc_bill_details_106 abd (cost=0.00..6.29 rows=1 width=12) (actual\ntime=0.010..0.012 rows=1 loops=1)\n Index Cond: (bill_id = 12680)\n -> Nested Loop (cost=0.00..598.19 rows=2\nwidth=19) (actual time=2.770..2.772 rows=1 loops=1)\n Join Filter: (abi.item_id =\nrip.receipt_item_id)\n -> Seq Scan on receipt_item_price_106\nrip (cost=0.00..162.48 rows=4216 width=11) (actual time=0.005..0.562\nrows=4218 loops=1)\n -> Materialize (cost=0.00..140.59 rows=2\nwidth=8) (actual time=0.000..0.000 rows=1 loops=4218)\n -> Seq Scan on acc_bill_items_106\nabi (cost=0.00..140.58 rows=2 width=8) (actual time=0.412..0.412 rows=1\nloops=1)\n Filter: (bill_id = 12680)\n -> Materialize (cost=0.00..7024562.68 rows=819222\nwidth=33) (actual time=0.035..153869.575 rows=19010943 loops=1)\n -> Append (cost=0.00..7014065.57 rows=819222\nwidth=33) (actual time=0.034..145403.828 rows=19010943 loops=1)\n -> Seq Scan on stk_source ss\n(cost=0.00..45.10 rows=5 width=36) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (source_detail[1] = 1)\n -> Seq Scan on stk_source ss\n(cost=0.00..22226.32 rows=2596 width=33) (actual time=0.033..118.019\nrows=66356 loops=1)\n Filter: (source_detail[1] = 1)\n -> Seq Scan on stk_source ss\n(cost=0.00..90405.31 rows=10559 width=33) (actual time=0.010..490.712\nrows=288779 loops=1)\n Filter: (source_detail[1] = 1)\n -> Seq Scan on stk_source ss\n(cost=0.00..6901388.84 rows=806062 width=33) (actual\ntime=13.382..142493.302 rows=18655808 loops=1)\n Filter: (source_detail[1] = 1)\n -> Index Scan using sd106_stock_id_idx on stock_details_106\nsd106 (cost=0.00..4.00 rows=1 width=8) (actual time=0.014..0.014 rows=1\nloops=1)\n Index Cond: (sd106.stock_id = ss.stock_id)\n -> Index Scan using master_product_pkey on master_product_106 mp\n(cost=0.00..3.59 rows=1 width=13) (actual time=0.006..0.006 rows=1 loops=1)\n Index Cond: (mp.product_id = sd106.product_id)\n -> Index Scan using std106_stock_id_idx on 
stock_transaction_detail_106\nstd (cost=0.00..9.70 rows=4 width=19) (actual time=0.007..0.009 rows=1\nloops=1)\n Index Cond: (std.stock_id = ss.stock_id)\n Total runtime: 158240.795 ms\n\n\n<http://goog_1591150719>*http://explain.depesz.com/s/Tyc\n\n\n*Similarly i have used the queries on various details pages and views that\ntoo if i go for one month transactions its taking so much times.\n\nI will try to upgrade to latest version and will try to tune more my\nqueries so changing the conf settings wouldn't help for better performance??\n\n\n\nThanks & Regards\nHashim\n\nOn Tue, Nov 1, 2011 at 7:13 PM, Tom Lane <[email protected]> wrote:\n\n> Marcus Engene <[email protected]> writes:\n> > After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of\n> > problems with queries with many joins. Queries that used to take 1ms\n> > suddenly take half a minute for no apparent reason.\n>\n> Could we see a concrete test case, rather than hand waving? If there's\n> really a problem in 9.0, it's impossible to fix it on so little detail.\n>\n> > One workaround I've done is if something looking like this....\n>\n> The only way that should make a difference is if the total number\n> of tables in the query exceeds from_collapse_limit (or maybe\n> join_collapse_limit, depending on exactly how you wrote the query).\n> Perhaps you'd been running with nonstandard values of those settings\n> in 8.x, and forgot to transfer them into the new DB?\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\nDear AllThanks for your suggestions & replies.The below are the sample query which i put for particular one bill_idEXPLAIN ANALYZE SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS rate\n\n\nFROM acc_bill_items_106 abi JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id JOIN stk_source ss ON ss.source_detail[1]=1 and ss.source_detail[2]=abi.item_id JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id\n\n\n JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id JOIN master_product_106 mp ON mp.product_id= sd106.product_id JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id WHERE abi.bill_id=12680;\n QUERY PLAN -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Nested Loop (cost=0.00..7230339.59 rows=54369 width=39) (actual time=158156.895..158157.206 rows=1 loops=1) -> Nested Loop (cost=0.00..7149579.10 rows=8192 width=32) (actual time=158156.863..158157.172 rows=1 loops=1)\n\n\n -> Nested Loop (cost=0.00..7119922.60 rows=8192 width=27) (actual time=158156.855..158157.164 rows=1 loops=1) -> Nested Loop (cost=0.00..7086865.70 rows=8192 width=19) (actual time=158156.835..158157.143 rows=1 loops=1)\n\n\n Join Filter: (abi.item_id = ss.source_detail[2]) -> Nested Loop (cost=0.00..604.54 rows=2 width=23) (actual time=2.782..2.786 rows=1 loops=1) -> Index Scan using acc_bill_details_106_pkey on acc_bill_details_106 abd (cost=0.00..6.29 rows=1 width=12) (actual time=0.010..0.012 rows=1 loops=1)\n\n\n Index Cond: (bill_id = 12680) -> Nested Loop (cost=0.00..598.19 rows=2 width=19) (actual time=2.770..2.772 rows=1 loops=1) Join Filter: (abi.item_id = 
rip.receipt_item_id)\n\n\n -> Seq Scan on receipt_item_price_106 rip (cost=0.00..162.48 rows=4216 width=11) (actual time=0.005..0.562 rows=4218 loops=1) -> Materialize (cost=0.00..140.59 rows=2 width=8) (actual time=0.000..0.000 rows=1 loops=4218)\n\n\n -> Seq Scan on acc_bill_items_106 abi (cost=0.00..140.58 rows=2 width=8) (actual time=0.412..0.412 rows=1 loops=1) Filter: (bill_id = 12680)\n\n\n -> Materialize (cost=0.00..7024562.68 rows=819222 width=33) (actual time=0.035..153869.575 rows=19010943 loops=1) -> Append (cost=0.00..7014065.57 rows=819222 width=33) (actual time=0.034..145403.828 rows=19010943 loops=1)\n\n\n -> Seq Scan on stk_source ss (cost=0.00..45.10 rows=5 width=36) (actual time=0.001..0.001 rows=0 loops=1) Filter: (source_detail[1] = 1)\n\n -> Seq Scan on stk_source ss (cost=0.00..22226.32 rows=2596 width=33) (actual time=0.033..118.019 rows=66356 loops=1)\n Filter: (source_detail[1] = 1) -> Seq Scan on stk_source ss (cost=0.00..90405.31 rows=10559 width=33) (actual time=0.010..490.712 rows=288779 loops=1)\n\n\n Filter: (source_detail[1] = 1) -> Seq Scan on stk_source ss (cost=0.00..6901388.84 rows=806062 width=33) (actual time=13.382..142493.302 rows=18655808 loops=1)\n\n\n Filter: (source_detail[1] = 1) -> Index Scan using sd106_stock_id_idx on stock_details_106 sd106 (cost=0.00..4.00 rows=1 width=8) (actual time=0.014..0.014 rows=1 loops=1)\n\n\n Index Cond: (sd106.stock_id = ss.stock_id) -> Index Scan using master_product_pkey on master_product_106 mp (cost=0.00..3.59 rows=1 width=13) (actual time=0.006..0.006 rows=1 loops=1)\n\n\n Index Cond: (mp.product_id = sd106.product_id) -> Index Scan using std106_stock_id_idx on stock_transaction_detail_106 std (cost=0.00..9.70 rows=4 width=19) (actual time=0.007..0.009 rows=1 loops=1)\n\n\n Index Cond: (std.stock_id = ss.stock_id) Total runtime: 158240.795 mshttp://explain.depesz.com/s/Tyc\nSimilarly i have used the queries on various details pages and views that too if i go for one month transactions its taking so much times.\nI will try to upgrade to latest version and will try to tune more my queries so changing the conf settings wouldn't help for better performance??Thanks & RegardsHashim\n\nOn Tue, Nov 1, 2011 at 7:13 PM, Tom Lane <[email protected]> wrote:\nMarcus Engene <[email protected]> writes:\n> After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of\n> problems with queries with many joins. Queries that used to take 1ms\n> suddenly take half a minute for no apparent reason.\n\nCould we see a concrete test case, rather than hand waving? If there's\nreally a problem in 9.0, it's impossible to fix it on so little detail.\n\n> One workaround I've done is if something looking like this....\n\nThe only way that should make a difference is if the total number\nof tables in the query exceeds from_collapse_limit (or maybe\njoin_collapse_limit, depending on exactly how you wrote the query).\nPerhaps you'd been running with nonstandard values of those settings\nin 8.x, and forgot to transfer them into the new DB?\n\n regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Wed, 2 Nov 2011 12:42:20 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "Mohamed Hashim <[email protected]> writes:\n> The below are the sample query which i put for particular one bill_id\n\n> EXPLAIN ANALYZE SELECT abd.bill_no as\n> bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as\n> product_desc,std.quantity,std.area,rip.price AS rate\n> FROM acc_bill_items_106 abi\n> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id\n> JOIN stk_source ss ON ss.source_detail[1]=1 and\n> ss.source_detail[2]=abi.item_id\n> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id\n> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id\n> JOIN master_product_106 mp ON mp.product_id= sd106.product_id\n> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id\n> WHERE abi.bill_id=12680;\n\nAll the time seems to be going into the seqscan on stk_source and its\nchild tables. It looks like it would help if \"ss.source_detail[1]=1 and\nss.source_detail[2]=abi.item_id\" were indexable (particularly the\nlatter). Which probably means you need to rethink your data\nrepresentation. Putting things that you need to index on into an array\nis not a very good design. I suppose you can do it if you're absolutely\nset on it (functional indexes on (source_detail[1]) and (source_detail[2]))\nbut it appears to suck from a notational point of view too. Six months\nfrom now, when you look at this code, are you going to remember what's\nthe difference between source_detail[1] and source_detail[2]? Not\nwithout consulting your notes, I bet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Nov 2011 10:26:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,\n\tQuadcore Processor Server--Need help!!!!!!!"
},
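As a concrete sketch of the expression indexes Tom describes (the index names are made up, and with inheritance partitioning each child table needs its own copy, since indexes on the parent are not inherited in 9.0):

-- Make "source_detail[1] = 1 AND source_detail[2] = abi.item_id" indexable:
CREATE INDEX stk_source_detail1_idx ON stk_source ((source_detail[1]));
CREATE INDEX stk_source_detail2_idx ON stk_source ((source_detail[2]));
-- Repeat on each child partition; the child table names here are assumed, e.g.:
-- CREATE INDEX stk_source_106_detail2_idx ON stk_source_106 ((source_detail[2]));

With those in place the join condition on source_detail[2] can be resolved through an index scan instead of the sequential scans over roughly 19 million appended rows seen in the plan above.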
{
"msg_contents": "Am 02.11.2011 08:12, schrieb Mohamed Hashim:\n> Dear All\n>\n> Thanks for your suggestions & replies.\n>\n> The below are the sample query which i put for particular one bill_id\n>\n> EXPLAIN ANALYZE SELECT abd.bill_no as \n> bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as \n> product_desc,std.quantity,std.area,rip.price AS rate\n> FROM acc_bill_items_106 abi\n> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id\n> JOIN stk_source ss ON ss.source_detail[1]=1 and \n> ss.source_detail[2]=abi.item_id\n> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id\n> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id\n> JOIN master_product_106 mp ON mp.product_id= sd106.product_id\n> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id\n> WHERE abi.bill_id=12680;\n\nFirst I would try this:\nexplain analyze select * from stk_source where source_detail[1] = 1;\nexplain analyze select * from stk_source where source_detail[2] = 12356;\n\nBoth times you'll get sequential scans, and that's the root of the \nproblem. Oh, you mentioned that you use partitioning, but there seems to \nbe no condition for that.\n\nYou should really rethink your database schema, at least try to pull out \nall indexable fields out of that int[] into columns, and use indices on \nthose fields.\n\nRegards\nMario\n\n\n\n\n\n\n\n\n\n\n\n Am 02.11.2011 08:12, schrieb Mohamed Hashim:\n Dear All\n\n Thanks for your suggestions & replies.\n\n The below are the sample query which i put for particular one\n bill_id\n\n EXPLAIN ANALYZE SELECT abd.bill_no as\n bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS\n date,mp.product_desc as\n product_desc,std.quantity,std.area,rip.price AS rate\n FROM acc_bill_items_106 abi\n JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id\n JOIN stk_source ss ON ss.source_detail[1]=1 and\n ss.source_detail[2]=abi.item_id\n JOIN stock_transaction_detail_106 std ON\n std.stock_id=ss.stock_id\n JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id\n JOIN master_product_106 mp ON mp.product_id=\n sd106.product_id\n JOIN receipt_item_price_106 rip ON\n rip.receipt_item_id=abi.item_id\n WHERE abi.bill_id=12680;\n\n\n First I would try this: \n explain analyze select * from stk_source where source_detail[1] = 1;\n explain analyze select * from stk_source where source_detail[2] =\n 12356;\n\n Both times you'll get sequential scans, and that's the root of the\n problem. Oh, you mentioned that you use partitioning, but there\n seems to be no condition for that.\n\n You should really rethink your database schema, at least try to pull\n out all indexable fields out of that int[] into columns, and use\n indices on those fields.\n\n Regards\n Mario",
"msg_date": "Thu, 03 Nov 2011 16:02:07 +0100",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore\n\tProcessor Server--Need help!!!!!!!"
},
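A sketch of the schema change Mario suggests, with invented column names and without the per-partition details, just to show the shape of it:

-- Replace the positional array lookups with named, indexable columns
-- (column names are illustrative; ADD COLUMN propagates to inheritance children):
ALTER TABLE stk_source ADD COLUMN transaction_type integer;
ALTER TABLE stk_source ADD COLUMN transaction_item_id integer;

-- One-off backfill; on a table this size this is a heavy update:
UPDATE stk_source
   SET transaction_type    = source_detail[1],
       transaction_item_id = source_detail[2];

-- Indexes still have to be created on each child table individually:
CREATE INDEX stk_source_trans_type_idx ON stk_source (transaction_type);
CREATE INDEX stk_source_trans_item_idx ON stk_source (transaction_item_id);

Joins such as ss.source_detail[2] = abi.item_id then become ss.transaction_item_id = abi.item_id, which the planner can satisfy with an index and which is also far easier to read later.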
{
"msg_contents": "On 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n> Am 02.11.2011 08:12, schrieb Mohamed Hashim:\n>> Dear All\n>>\n>> Thanks for your suggestions & replies.\n>>\n>> The below are the sample query which i put for particular one bill_id\n>>\n>> EXPLAIN ANALYZE SELECT abd.bill_no as\n>> bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as\n>> product_desc,std.quantity,std.area,rip.price AS rate\n>> FROM acc_bill_items_106 abi\n>> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id\n>> JOIN stk_source ss ON ss.source_detail[1]=1 and\n>> ss.source_detail[2]=abi.item_id\n>> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id\n>> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id\n>> JOIN master_product_106 mp ON mp.product_id= sd106.product_id\n>> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id\n>> WHERE abi.bill_id=12680;\n>\n> First I would try this:\n> explain analyze select * from stk_source where source_detail[1] = 1;\n> explain analyze select * from stk_source where source_detail[2] = 12356;\n>\n> Both times you'll get sequential scans, and that's the root of the\n> problem. Oh, you mentioned that you use partitioning, but there seems to\n> be no condition for that.\n>\n> You should really rethink your database schema, at least try to pull out\n> all indexable fields out of that int[] into columns, and use indices on\n> those fields.\n\nNo doubt about that, querying tables using conditions on array columns is\nnot the best direction in most cases, especially when those tables are\nhuge.\n\nStill, the interesting part here is that the OP claims this worked just\nfine in the older version and after an upgrade the performance suddenly\ndropped. This could be caused by many things, and we're just guessing\nbecause we don't have any plans from the old version.\n\nTomas\n\n",
"msg_date": "Thu, 3 Nov 2011 17:08:11 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "Am 03.11.2011 17:08, schrieb Tomas Vondra:\n> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n> <snip>\n> No doubt about that, querying tables using conditions on array columns is\n> not the best direction in most cases, especially when those tables are\n> huge.\n>\n> Still, the interesting part here is that the OP claims this worked just\n> fine in the older version and after an upgrade the performance suddenly\n> dropped. This could be caused by many things, and we're just guessing\n> because we don't have any plans from the old version.\n>\n> Tomas\n>\n>\n\nNot really, Mohamed always said he has 9.0.3, Marcus Engene wrote about \nproblems after the migration from 8.x to 9.x. Or did I miss something here?\n\nRegards,\nMario\n\n",
"msg_date": "Thu, 03 Nov 2011 21:07:59 +0100",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore\n\tProcessor Server--Need help!!!!!!!"
},
{
"msg_contents": "Hi all,\n\nThanks for all your responses.\n\nSorry for late response\n\nEarlier we used Postgres8.3.10 with Desktop computer (as server) and\nconfiguration of the system (I2 core with 4GB RAM) and also the application\nwas slow i dint change any postgres config settings.\n\nMay be because of low config We thought the aplication is slow so we opted\nto go for higher configuration server(with RAID 1) which i mentioned\nearlier.\n\nI thought the application will go fast but unfortunately there is no\nimprovement so i tried to change the postgres config settings and trying to\ntune my queries wherever possible but still i was not able\nto..........improve the performance..\n\n\nSo will it helpful if we try GIST or GIN for integer array[] colum\n(source_detail) with enable_seqscan=off and default_statistics_target=1000?\n\nRegards\nHashim\n\n\n\nOn Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <[email protected]> wrote:\n\n> Am 03.11.2011 17:08, schrieb Tomas Vondra:\n>\n>> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n>> <snip>\n>>\n>> No doubt about that, querying tables using conditions on array columns is\n>> not the best direction in most cases, especially when those tables are\n>> huge.\n>>\n>> Still, the interesting part here is that the OP claims this worked just\n>> fine in the older version and after an upgrade the performance suddenly\n>> dropped. This could be caused by many things, and we're just guessing\n>> because we don't have any plans from the old version.\n>>\n>> Tomas\n>>\n>>\n>>\n> Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about\n> problems after the migration from 8.x to 9.x. Or did I miss something here?\n>\n> Regards,\n> Mario\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\nHi all,Thanks for all your responses.Sorry for late responseEarlier we used Postgres8.3.10 with Desktop computer (as server) and configuration of the system (I2 core with 4GB RAM) and also the application was slow i dint change any postgres config settings.\nMay be because of low config We thought the aplication is slow so we opted to go for higher configuration server(with RAID 1) which i mentioned earlier.I thought the application will go fast but unfortunately there is no improvement so i tried to change the postgres config settings and trying to tune my queries wherever possible but still i was not able to..........improve the performance..\nSo will it helpful if we try GIST or GIN for integer array[] colum (source_detail) with enable_seqscan=off and default_statistics_target=1000?RegardsHashim\n\nOn Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <[email protected]> wrote:\n\nAm 03.11.2011 17:08, schrieb Tomas Vondra:\n\nOn 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n<snip>\nNo doubt about that, querying tables using conditions on array columns is\nnot the best direction in most cases, especially when those tables are\nhuge.\n\nStill, the interesting part here is that the OP claims this worked just\nfine in the older version and after an upgrade the performance suddenly\ndropped. 
This could be caused by many things, and we're just guessing\nbecause we don't have any plans from the old version.\n\nTomas\n\n\n\n\nNot really, Mohamed always said he has 9.0.3, Marcus Engene wrote about problems after the migration from 8.x to 9.x. Or did I miss something here?\n\nRegards,\nMario\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Tue, 8 Nov 2011 08:51:35 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
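On the GIN question raised above: a GIN index on the array column only helps if the queries are also rewritten to use array operators, and even then the containment operator matches values anywhere in the array rather than at fixed positions, so it is a looser test than expression indexes on (source_detail[1]) and (source_detail[2]). A hedged sketch, where the literal 12345 stands in for abi.item_id:

-- Built-in GIN support for integer arrays, no extension required:
CREATE INDEX stk_source_detail_gin ON stk_source USING gin (source_detail);

-- A query can then use the containment operator:
SELECT stock_id
  FROM stk_source
 WHERE source_detail @> ARRAY[1, 12345];

Turning enable_seqscan off database-wide is not a substitute for this; once suitable indexes and statistics are in place, the planner should choose the index scan on its own.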
{
"msg_contents": "Sent from my iPhone\n\nOn Nov 7, 2011, at 7:21 PM, Mohamed Hashim <[email protected]> wrote:\n\n> Hi all,\n> \n> Thanks for all your responses.\n> \n> Sorry for late response\n> \n> Earlier we used Postgres8.3.10 with Desktop computer (as server) and configuration of the system (I2 core with 4GB RAM) and also the application was slow i dint change any postgres config settings.\n> \n> May be because of low config We thought the aplication is slow so we opted to go for higher configuration server(with RAID 1) which i mentioned earlier.\n> \n> I thought the application will go fast but unfortunately there is no improvement so i tried to change the postgres config settings and trying to tune my queries wherever possible but still i was not able to..........improve the performance..\n> \n> \n> So will it helpful if we try GIST or GIN for integer array[] colum (source_detail) with enable_seqscan=off and default_statistics_target=1000?\n\nOh dear! Where to even begin? There is no way to suggest possible solutions without knowing a lot more about how things are currently configured and what, exactly, about your application is slow. Just to address your particular suggestions, increasing the default statistics target would only help if an explain analyze for a slow query indicates that the query planner is using inaccurate row count estimates for one or more steps in a query plan. Depending upon the frequency of this problem it may be better to increase statistics target just for individual columns rather than across the entire db cluster. Setting enable_seqscan to off is almost never a good solution to a problem, especially db-wide. If the planner is selecting a sequential scan when an alternative strategy would perform much better, then it is doing so because your configuration is not telling the query planner accurate values for the cost of sequential access vs random access - or else the statistics are inaccurate causing it to select a seq scan because it thinks it will traverse more rows than it actually will.\n\nIn short, you need to read a lot more about performance tuning Postgres rather than taking stab-in-the-dark guesses for solutions. I believe it was pointed out that at least one query that is problematic for you is filtering based on the value of individual indexes of an array column - which means you actually need break those values into separate columns with indexes on them or create an index on column[x] so that the planner can use that. But if the problem is general slowness across your whole app, it is possible that the way your app uses the db access API is inefficient or you may have a misconfiguration that causes all db access to be slow. Depending on your hardware and platform, using the default configuration will result in db performance that is far from optimal. The default config is pretty much a minimal config.\n\nI'd suggest you spend at least a day or two reading up on Postgres performance tuning and investigating your particular problems. You may make quite a bit of improvement without our help and you'll be much more knowledgable about your db installation when you are done. At the very least, please look at the mailing list page on the Postgres website and read the links about how to ask performance questions so that you at least provide the list with enough information about your problems that others can offer useful feedback. 
I'd provide a link, but I'm on a phone.\n\n--sam\n\n\n> \n> Regards\n> Hashim\n> \n> \n> \n> On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <[email protected]> wrote:\n> Am 03.11.2011 17:08, schrieb Tomas Vondra:\n> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n> <snip>\n> \n> No doubt about that, querying tables using conditions on array columns is\n> not the best direction in most cases, especially when those tables are\n> huge.\n> \n> Still, the interesting part here is that the OP claims this worked just\n> fine in the older version and after an upgrade the performance suddenly\n> dropped. This could be caused by many things, and we're just guessing\n> because we don't have any plans from the old version.\n> \n> Tomas\n> \n> \n> \n> Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about problems after the migration from 8.x to 9.x. Or did I miss something here?\n> \n> Regards,\n> Mario\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> Regards\n> Mohamed Hashim.N\n> Mobile:09894587678\n\nSent from my iPhoneOn Nov 7, 2011, at 7:21 PM, Mohamed Hashim <[email protected]> wrote:Hi all,Thanks for all your responses.Sorry for late responseEarlier we used Postgres8.3.10 with Desktop computer (as server) and configuration of the system (I2 core with 4GB RAM) and also the application was slow i dint change any postgres config settings.\nMay be because of low config We thought the aplication is slow so we opted to go for higher configuration server(with RAID 1) which i mentioned earlier.I thought the application will go fast but unfortunately there is no improvement so i tried to change the postgres config settings and trying to tune my queries wherever possible but still i was not able to..........improve the performance..\nSo will it helpful if we try GIST or GIN for integer array[] colum (source_detail) with enable_seqscan=off and default_statistics_target=1000?Oh dear! Where to even begin? There is no way to suggest possible solutions without knowing a lot more about how things are currently configured and what, exactly, about your application is slow. Just to address your particular suggestions, increasing the default statistics target would only help if an explain analyze for a slow query indicates that the query planner is using inaccurate row count estimates for one or more steps in a query plan. Depending upon the frequency of this problem it may be better to increase statistics target just for individual columns rather than across the entire db cluster. Setting enable_seqscan to off is almost never a good solution to a problem, especially db-wide. If the planner is selecting a sequential scan when an alternative strategy would perform much better, then it is doing so because your configuration is not telling the query planner accurate values for the cost of sequential access vs random access - or else the statistics are inaccurate causing it to select a seq scan because it thinks it will traverse more rows than it actually will.In short, you need to read a lot more about performance tuning Postgres rather than taking stab-in-the-dark guesses for solutions. 
I believe it was pointed out that at least one query that is problematic for you is filtering based on the value of individual indexes of an array column - which means you actually need break those values into separate columns with indexes on them or create an index on column[x] so that the planner can use that. But if the problem is general slowness across your whole app, it is possible that the way your app uses the db access API is inefficient or you may have a misconfiguration that causes all db access to be slow. Depending on your hardware and platform, using the default configuration will result in db performance that is far from optimal. The default config is pretty much a minimal config.I'd suggest you spend at least a day or two reading up on Postgres performance tuning and investigating your particular problems. You may make quite a bit of improvement without our help and you'll be much more knowledgable about your db installation when you are done. At the very least, please look at the mailing list page on the Postgres website and read the links about how to ask performance questions so that you at least provide the list with enough information about your problems that others can offer useful feedback. I'd provide a link, but I'm on a phone.--samRegardsHashim\n\nOn Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <[email protected]> wrote:\n\nAm 03.11.2011 17:08, schrieb Tomas Vondra:\n\nOn 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n<snip>\nNo doubt about that, querying tables using conditions on array columns is\nnot the best direction in most cases, especially when those tables are\nhuge.\n\nStill, the interesting part here is that the OP claims this worked just\nfine in the older version and after an upgrade the performance suddenly\ndropped. This could be caused by many things, and we're just guessing\nbecause we don't have any plans from the old version.\n\nTomas\n\n\n\n\nNot really, Mohamed always said he has 9.0.3, Marcus Engene wrote about problems after the migration from 8.x to 9.x. Or did I miss something here?\n\nRegards,\nMario\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Mon, 7 Nov 2011 23:19:08 -0800",
"msg_from": "Sam Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,\n\tQuadcore Processor Server--Need help!!!!!!!"
},
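A minimal sketch of the per-column statistics approach Sam describes, applied to the stk_source.source_detail column mentioned later in the thread (the target of 1000 is only an illustrative value; it helps only if the planner's row estimates for that column are actually off):

    ALTER TABLE stk_source ALTER COLUMN source_detail SET STATISTICS 1000;
    ANALYZE stk_source;

This confines the extra ANALYZE and planning overhead to the one column with poor estimates instead of raising default_statistics_target for the whole cluster.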
{
"msg_contents": "how about your harddisks??\n\nyou could get a little help from a RAID10 SAS 15k disks. if you don't even\nhave RAID, it would help a lot!\n\nLucas.\n\n2011/11/8 Sam Gendler <[email protected]>\n\n>\n>\n> Sent from my iPhone\n>\n> On Nov 7, 2011, at 7:21 PM, Mohamed Hashim <[email protected]> wrote:\n>\n> Hi all,\n>\n> Thanks for all your responses.\n>\n> Sorry for late response\n>\n> Earlier we used Postgres8.3.10 with Desktop computer (as server) and\n> configuration of the system (I2 core with 4GB RAM) and also the application\n> was slow i dint change any postgres config settings.\n>\n> May be because of low config We thought the aplication is slow so we opted\n> to go for higher configuration server(with RAID 1) which i mentioned\n> earlier.\n>\n> I thought the application will go fast but unfortunately there is no\n> improvement so i tried to change the postgres config settings and trying to\n> tune my queries wherever possible but still i was not able\n> to..........improve the performance..\n>\n>\n> So will it helpful if we try GIST or GIN for integer array[] colum\n> (source_detail) with enable_seqscan=off and default_statistics_target=1000?\n>\n>\n> Oh dear! Where to even begin? There is no way to suggest possible\n> solutions without knowing a lot more about how things are currently\n> configured and what, exactly, about your application is slow. Just to\n> address your particular suggestions, increasing the default statistics\n> target would only help if an explain analyze for a slow query indicates\n> that the query planner is using inaccurate row count estimates for one or\n> more steps in a query plan. Depending upon the frequency of this problem it\n> may be better to increase statistics target just for individual columns\n> rather than across the entire db cluster. Setting enable_seqscan to off is\n> almost never a good solution to a problem, especially db-wide. If the\n> planner is selecting a sequential scan when an alternative strategy would\n> perform much better, then it is doing so because your configuration is\n> not telling the query planner accurate values for the cost of sequential\n> access vs random access - or else the statistics are inaccurate causing it\n> to select a seq scan because it thinks it will traverse more rows than it\n> actually will.\n>\n> In short, you need to read a lot more about performance tuning Postgres\n> rather than taking stab-in-the-dark guesses for solutions. I believe it was\n> pointed out that at least one query that is problematic for you is\n> filtering based on the value of individual indexes of an array column -\n> which means you actually need break those values into separate columns\n> with indexes on them or create an index on column[x] so that the planner\n> can use that. But if the problem is general slowness across your whole app,\n> it is possible that the way your app uses the db access API is inefficient\n> or you may have a misconfiguration that causes all db access to be slow.\n> Depending on your hardware and platform, using the default configuration\n> will result in db performance that is far from optimal. The default config\n> is pretty much a minimal config.\n>\n> I'd suggest you spend at least a day or two reading up on Postgres\n> performance tuning and investigating your particular problems. You may make\n> quite a bit of improvement without our help and you'll be much more knowledgable\n> about your db installation when you are done. 
At the very least, please\n> look at the mailing list page on the Postgres website and read the links\n> about how to ask performance questions so that you at least provide the\n> list with enough information about your problems that others can offer\n> useful feedback. I'd provide a link, but I'm on a phone.\n>\n> --sam\n>\n>\n>\n> Regards\n> Hashim\n>\n>\n>\n> On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <[email protected]> wrote:\n>\n>> Am 03.11.2011 17:08, schrieb Tomas Vondra:\n>>\n>>> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n>>> <snip>\n>>>\n>>> No doubt about that, querying tables using conditions on array columns is\n>>> not the best direction in most cases, especially when those tables are\n>>> huge.\n>>>\n>>> Still, the interesting part here is that the OP claims this worked just\n>>> fine in the older version and after an upgrade the performance suddenly\n>>> dropped. This could be caused by many things, and we're just guessing\n>>> because we don't have any plans from the old version.\n>>>\n>>> Tomas\n>>>\n>>>\n>>>\n>> Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about\n>> problems after the migration from 8.x to 9.x. Or did I miss something here?\n>>\n>> Regards,\n>> Mario\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n>> org <[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>>\n>\n>\n>\n> --\n> Regards\n> Mohamed Hashim.N\n> Mobile:09894587678\n>\n>\n\nhow about your harddisks??you could get a little help from a RAID10 SAS 15k disks. if you don't even have RAID, it would help a lot!Lucas.\n\n2011/11/8 Sam Gendler <[email protected]>\nSent from my iPhoneOn Nov 7, 2011, at 7:21 PM, Mohamed Hashim <[email protected]> wrote:\nHi all,Thanks for all your responses.Sorry for late responseEarlier we used Postgres8.3.10 with Desktop computer (as server) and configuration of the system (I2 core with 4GB RAM) and also the application was slow i dint change any postgres config settings.\nMay be because of low config We thought the aplication is slow so we opted to go for higher configuration server(with RAID 1) which i mentioned earlier.I thought the application will go fast but unfortunately there is no improvement so i tried to change the postgres config settings and trying to tune my queries wherever possible but still i was not able to..........improve the performance..\nSo will it helpful if we try GIST or GIN for integer array[] colum (source_detail) with enable_seqscan=off and default_statistics_target=1000?Oh dear! Where to even begin? There is no way to suggest possible solutions without knowing a lot more about how things are currently configured and what, exactly, about your application is slow. Just to address your particular suggestions, increasing the default statistics target would only help if an explain analyze for a slow query indicates that the query planner is using inaccurate row count estimates for one or more steps in a query plan. Depending upon the frequency of this problem it may be better to increase statistics target just for individual columns rather than across the entire db cluster. Setting enable_seqscan to off is almost never a good solution to a problem, especially db-wide. 
If the planner is selecting a sequential scan when an alternative strategy would perform much better, then it is doing so because your configuration is not telling the query planner accurate values for the cost of sequential access vs random access - or else the statistics are inaccurate causing it to select a seq scan because it thinks it will traverse more rows than it actually will.\nIn short, you need to read a lot more about performance tuning Postgres rather than taking stab-in-the-dark guesses for solutions. I believe it was pointed out that at least one query that is problematic for you is filtering based on the value of individual indexes of an array column - which means you actually need break those values into separate columns with indexes on them or create an index on column[x] so that the planner can use that. But if the problem is general slowness across your whole app, it is possible that the way your app uses the db access API is inefficient or you may have a misconfiguration that causes all db access to be slow. Depending on your hardware and platform, using the default configuration will result in db performance that is far from optimal. The default config is pretty much a minimal config.\nI'd suggest you spend at least a day or two reading up on Postgres performance tuning and investigating your particular problems. You may make quite a bit of improvement without our help and you'll be much more knowledgable about your db installation when you are done. At the very least, please look at the mailing list page on the Postgres website and read the links about how to ask performance questions so that you at least provide the list with enough information about your problems that others can offer useful feedback. I'd provide a link, but I'm on a phone.\n--samRegards\n\nHashim\n\nOn Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <[email protected]> wrote:\n\n\n\nAm 03.11.2011 17:08, schrieb Tomas Vondra:\n\nOn 3 Listopad 2011, 16:02, Mario Weilguni wrote:\n<snip>\nNo doubt about that, querying tables using conditions on array columns is\nnot the best direction in most cases, especially when those tables are\nhuge.\n\nStill, the interesting part here is that the OP claims this worked just\nfine in the older version and after an upgrade the performance suddenly\ndropped. This could be caused by many things, and we're just guessing\nbecause we don't have any plans from the old version.\n\nTomas\n\n\n\n\nNot really, Mohamed always said he has 9.0.3, Marcus Engene wrote about problems after the migration from 8.x to 9.x. Or did I miss something here?\n\nRegards,\nMario\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Tue, 8 Nov 2011 08:34:10 -0200",
"msg_from": "Lucas Mocellin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "On 8 Listopad 2011, 4:21, Mohamed Hashim wrote:\n> Hi all,\n>\n> Thanks for all your responses.\n>\n> Sorry for late response\n>\n> Earlier we used Postgres8.3.10 with Desktop computer (as server) and\n> configuration of the system (I2 core with 4GB RAM) and also the\n> application\n> was slow i dint change any postgres config settings.\n>\n> May be because of low config We thought the aplication is slow so we opted\n> to go for higher configuration server(with RAID 1) which i mentioned\n> earlier.\n>\n> I thought the application will go fast but unfortunately there is no\n> improvement so i tried to change the postgres config settings and trying\n> to\n> tune my queries wherever possible but still i was not able\n> to..........improve the performance..\n\nAs Sam Gendler already wrote, we really can't help you until you post all\nthe relevant info. So far we've seen a single EXPLAIN ANALYZE output and\nvery vague description of the hardware.\n\nWe need to know more about the hardware and the basic config options\n(shared buffers, effective cache size, work mem, etc.). We need to know\nhow much memory is actually available to PostgreSQL and page cache (how\nmuch is consumed by the application - as I understand it it runs on the\nsame machine). We need to know what OS it's running on, and we need to see\niostat/vmstat output collected when the app is slow.\n\nPlease read this and perform the basic tuning (and let us know what values\nyou've used):\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nAlso post\n\n> So will it helpful if we try GIST or GIN for integer array[] colum\n> (source_detail) with enable_seqscan=off and\n> default_statistics_target=1000?\n\nThis is severely premature - it might help, but you should do the basic\ntuning first. It might actually cause you more trouble. You've already\ndone this mistake - fixing something withouth veryfying it's actually a\nproblem - by requesting a RAID1 config. Don't do that mistake again.\n\nTomas\n\n",
"msg_date": "Tue, 8 Nov 2011 12:50:08 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
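As a rough sketch of how to collect the settings Tomas asks for in a single query (the parameter list below is just a sample; extend it as needed):

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem',
                   'maintenance_work_mem', 'checkpoint_segments',
                   'random_page_cost');

Posting that output together with the PostgreSQL version, the OS, and iostat/vmstat samples taken while the application is slow is usually enough for others to spot obvious misconfiguration.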
{
"msg_contents": "Hi Sam,Tomas\n\nIn my first post i have mentioned all how much shared (shared buffers,\neffective cache size, work mem, etc.) and my OS and hardware information\nand what are the basic settings i have changed\n\nand regarding Explain analyze i gave one sample query because if i tune\nthat particular table which has records almost 16crore i thought my problem\nwill solve...\n\nRegards\nHashim\n\nOn Tue, Nov 8, 2011 at 5:20 PM, Tomas Vondra <[email protected]> wrote:\n\n> On 8 Listopad 2011, 4:21, Mohamed Hashim wrote:\n> > Hi all,\n> >\n> > Thanks for all your responses.\n> >\n> > Sorry for late response\n> >\n> > Earlier we used Postgres8.3.10 with Desktop computer (as server) and\n> > configuration of the system (I2 core with 4GB RAM) and also the\n> > application\n> > was slow i dint change any postgres config settings.\n> >\n> > May be because of low config We thought the aplication is slow so we\n> opted\n> > to go for higher configuration server(with RAID 1) which i mentioned\n> > earlier.\n> >\n> > I thought the application will go fast but unfortunately there is no\n> > improvement so i tried to change the postgres config settings and trying\n> > to\n> > tune my queries wherever possible but still i was not able\n> > to..........improve the performance..\n>\n> As Sam Gendler already wrote, we really can't help you until you post all\n> the relevant info. So far we've seen a single EXPLAIN ANALYZE output and\n> very vague description of the hardware.\n>\n> We need to know more about the hardware and the basic config options\n> (shared buffers, effective cache size, work mem, etc.). We need to know\n> how much memory is actually available to PostgreSQL and page cache (how\n> much is consumed by the application - as I understand it it runs on the\n> same machine). We need to know what OS it's running on, and we need to see\n> iostat/vmstat output collected when the app is slow.\n>\n> Please read this and perform the basic tuning (and let us know what values\n> you've used):\n>\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n> Also post\n>\n> > So will it helpful if we try GIST or GIN for integer array[] colum\n> > (source_detail) with enable_seqscan=off and\n> > default_statistics_target=1000?\n>\n> This is severely premature - it might help, but you should do the basic\n> tuning first. It might actually cause you more trouble. You've already\n> done this mistake - fixing something withouth veryfying it's actually a\n> problem - by requesting a RAID1 config. Don't do that mistake again.\n>\n> Tomas\n>\n>\n\n\n-- \nRegards\nMohamed Hashim.N\nMobile:09894587678\n\nHi Sam,Tomas In my first post i have mentioned all how much shared (shared buffers, effective cache size, work mem, etc.) 
and my OS and hardware information and what are the basic settings i have changed \nand regarding Explain analyze i gave one sample query because if i tune that particular table which has records almost 16crore i thought my problem will solve...RegardsHashim\n\nOn Tue, Nov 8, 2011 at 5:20 PM, Tomas Vondra <[email protected]> wrote:\nOn 8 Listopad 2011, 4:21, Mohamed Hashim wrote:\n> Hi all,\n>\n> Thanks for all your responses.\n>\n> Sorry for late response\n>\n> Earlier we used Postgres8.3.10 with Desktop computer (as server) and\n> configuration of the system (I2 core with 4GB RAM) and also the\n> application\n> was slow i dint change any postgres config settings.\n>\n> May be because of low config We thought the aplication is slow so we opted\n> to go for higher configuration server(with RAID 1) which i mentioned\n> earlier.\n>\n> I thought the application will go fast but unfortunately there is no\n> improvement so i tried to change the postgres config settings and trying\n> to\n> tune my queries wherever possible but still i was not able\n> to..........improve the performance..\n\nAs Sam Gendler already wrote, we really can't help you until you post all\nthe relevant info. So far we've seen a single EXPLAIN ANALYZE output and\nvery vague description of the hardware.\n\nWe need to know more about the hardware and the basic config options\n(shared buffers, effective cache size, work mem, etc.). We need to know\nhow much memory is actually available to PostgreSQL and page cache (how\nmuch is consumed by the application - as I understand it it runs on the\nsame machine). We need to know what OS it's running on, and we need to see\niostat/vmstat output collected when the app is slow.\n\nPlease read this and perform the basic tuning (and let us know what values\nyou've used):\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nAlso post\n\n> So will it helpful if we try GIST or GIN for integer array[] colum\n> (source_detail) with enable_seqscan=off and\n> default_statistics_target=1000?\n\nThis is severely premature - it might help, but you should do the basic\ntuning first. It might actually cause you more trouble. You've already\ndone this mistake - fixing something withouth veryfying it's actually a\nproblem - by requesting a RAID1 config. Don't do that mistake again.\n\nTomas\n\n-- RegardsMohamed Hashim.N\nMobile:09894587678",
"msg_date": "Tue, 8 Nov 2011 17:45:50 +0530",
"msg_from": "Mohamed Hashim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
{
"msg_contents": "On 8 Listopad 2011, 13:15, Mohamed Hashim wrote:\n> Hi Sam,Tomas\n>\n> In my first post i have mentioned all how much shared (shared buffers,\n> effective cache size, work mem, etc.) and my OS and hardware information\n> and what are the basic settings i have changed\n\nSorry, I've missed that first message - the archive did not list it for\nsome reason.\n\n> and regarding Explain analyze i gave one sample query because if i tune\n> that particular table which has records almost 16crore i thought my\n> problem\n> will solve...\n\nOK, so the problem is either the WHERE conditions referencing the array\ncolumn, or a bug in 9.0.x.\n\nCan't help you with the upgrade issue, but using an array like this is a\nbad design and will cause you all sorts of problems - why are you not\nusing regular columns, anyway?\n\nI see you usually reference source_detail[1] - what is the expected\nselectivity of this condition? What portion of the table matches it? How\nmany possible values are there?\n\nAnd it does not make sense to me to partition by date when you're not\nquerying the data by date.\n\nTry this (one by one):\n\n1) CREATE INDEX src_idx ON stk_source(source_detail[1]) for each partition\n2) add a regular column source_detail_val with the value of\nsource_detail[1] and create an index on it\n3) repartition the table by source_detail[1] instead of date\n\nTomas\n\n",
"msg_date": "Tue, 8 Nov 2011 13:40:25 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB\n\tRAM,Quadcore Processor Server--Need help!!!!!!!"
},
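A sketch of options (1) and (2) from the list above, using the table and column names already mentioned in the thread (the child-partition name stk_source_2011 is hypothetical). Note that indexing an array element requires the extra parentheses of an expression index:

    -- (1) expression index on the first array element, created per partition
    CREATE INDEX stk_source_2011_src1_idx
        ON stk_source_2011 ((source_detail[1]));

    -- (2) materialise the element as a regular column and index that instead
    ALTER TABLE stk_source ADD COLUMN source_detail_val integer;
    UPDATE stk_source SET source_detail_val = source_detail[1];
    CREATE INDEX stk_source_val_idx ON stk_source (source_detail_val);

After either change, run ANALYZE and re-check the EXPLAIN ANALYZE output to see whether the new index is actually being used.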
{
"msg_contents": "Am 08.11.2011 13:15, schrieb Mohamed Hashim:\n> Hi Sam,Tomas\n>\n> In my first post i have mentioned all how much shared (shared buffers, \n> effective cache size, work mem, etc.) and my OS and hardware \n> information and what are the basic settings i have changed\n>\n> and regarding Explain analyze i gave one sample query because if i \n> tune that particular table which has records almost 16crore i thought \n> my problem will solve...\n>\nJust curios, are those array items [1] and [2] just samples and you \nactually use more which are performance-related (used as condition)? If \njust those two are relevant I would change the schema to use real \ncolumns instead. And you seem to use partitioning, but you have no \npartition condition?\n\n\n\n\n\n\n Am 08.11.2011 13:15, schrieb Mohamed Hashim:\n Hi Sam,Tomas \n\n In my first post i have mentioned all how much shared (shared\n buffers, effective cache size, work mem, etc.) and my OS and\n hardware information and what are the basic settings i have\n changed \n\n and regarding Explain analyze i gave one sample query because if i\n tune that particular table which has records almost 16crore i\n thought my problem will solve...\n\n\n Just curios, are those array items [1] and [2] just samples and you\n actually use more which are performance-related (used as condition)?\n If just those two are relevant I would change the schema to use real\n columns instead. And you seem to use partitioning, but you have no\n partition condition?",
"msg_date": "Tue, 08 Nov 2011 13:49:06 +0100",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore\n\tProcessor Server--Need help!!!!!!!"
}
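For what Mario is hinting at with the partition condition, a minimal sketch of the inheritance-style partitioning available in 8.x/9.0, with a CHECK constraint the planner can use for constraint exclusion (the column name entry_date and the date range are purely illustrative):

    CREATE TABLE stk_source_2011_q4 (
        CHECK (entry_date >= DATE '2011-10-01' AND entry_date < DATE '2012-01-01')
    ) INHERITS (stk_source);

    -- with constraint_exclusion = partition (the default since 8.4),
    -- a query filtering on entry_date can skip all other children
    SELECT count(*) FROM stk_source WHERE entry_date >= DATE '2011-11-01';

Without such a condition on a column the queries actually filter on, partitioning mostly adds planning overhead.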
] |
[
{
"msg_contents": "We selected a 30MB bytea with psql connected with\n\"-h localhost\" and found that it makes a huge\ndifference whether we have SSL encryption on or off.\n\nWithout SSL the SELECT finished in about a second,\nwith SSL it took over 23 seconds (measured with\n\\timing in psql).\nDuring that time, the CPU is 100% busy.\nAll data are cached in memory.\n\nIs this difference as expected?\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 28 Oct 2011 13:02:59 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "On 28.10.2011 14:02, Albe Laurenz wrote:\n> We selected a 30MB bytea with psql connected with\n> \"-h localhost\" and found that it makes a huge\n> difference whether we have SSL encryption on or off.\n>\n> Without SSL the SELECT finished in about a second,\n> with SSL it took over 23 seconds (measured with\n> \\timing in psql).\n> During that time, the CPU is 100% busy.\n> All data are cached in memory.\n>\n> Is this difference as expected?\n\nI tried to reproduce that, but only saw about 4x difference in the \ntiming, not 23x.\n\n$ PGSSLMODE=disable ~/pgsql.master/bin/psql -h localhost postgres\npsql (9.2devel)\nType \"help\" for help.\n\npostgres=# \\o foo\npostgres=# \\timing\nTiming is on.\npostgres=# SELECT repeat(xx,65536)::bytea FROM (SELECT \nstring_agg(lpad(to_hex(x),2, '0' ),'') AS xx FROM generate_series(0,255) \nx) AS xx;\nTime: 460,782 ms\n\n$ PGSSLMODE=require ~/pgsql.master/bin/psql -h localhost postgres\npsql (9.2devel)\nSSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)\nType \"help\" for help.\n\npostgres=# \\o foo\npostgres=# \\timing\nTiming is on.\npostgres=# SELECT repeat(xx,65536)::bytea FROM (SELECT \nstring_agg(lpad(to_hex(x),2, '0' ),'') AS xx FROM generate_series(0,255) \nx) AS xx;\nTime: 1874,276 ms\n\n\noprofile suggests that all that overhead is coming from compression. \nApparently SSL does compression automatically. Oprofile report of the \nabove test case with SSL enabled:\n\nsamples % image name symbol name\n28177 74.4753 libz.so.1.2.3.4 /usr/lib/libz.so.1.2.3.4\n1814 4.7946 postgres byteain\n1459 3.8563 libc-2.13.so __memcpy_ssse3_back\n1437 3.7982 libcrypto.so.0.9.8 /usr/lib/libcrypto.so.0.9.8\n896 2.3682 postgres hex_encode\n304 0.8035 vmlinux-3.0.0-1-amd64 clear_page_c\n271 0.7163 libc-2.13.so __strlen_sse42\n222 0.5868 libssl.so.0.9.8 /usr/lib/libssl.so.0.9.8\n\nAnd without:\n\nsamples % image name symbol name\n1601 27.4144 postgres byteain\n865 14.8116 postgres hex_encode\n835 14.2979 libc-2.13.so __memcpy_ssse3_back\n290 4.9658 vmlinux-3.0.0-1-amd64 clear_page_c\n280 4.7945 libc-2.13.so __strlen_sse42\n184 3.1507 vmlinux-3.0.0-1-amd64 page_fault\n174 2.9795 vmlinux-3.0.0-1-amd64 put_mems_allowed\n\n\nMaybe your data is very expensive to compress for some reason?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sat, 29 Oct 2011 11:51:09 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "Heikki Linnakangas wrote:\n>> We selected a 30MB bytea with psql connected with\n>> \"-h localhost\" and found that it makes a huge\n>> difference whether we have SSL encryption on or off.\n>>\n>> Without SSL the SELECT finished in about a second,\n>> with SSL it took over 23 seconds (measured with\n>> \\timing in psql).\n>> During that time, the CPU is 100% busy.\n>> All data are cached in memory.\n>>\n>> Is this difference as expected?\n\nThanks for looking at that.\n\n> I tried to reproduce that, but only saw about 4x difference in the\n> timing, not 23x.\n\nI tried more tests on an idle server, and the factor I observe here is\n3 or 4 as you say. The original measurements were taken on a server\nunder load.\n\n> oprofile suggests that all that overhead is coming from compression.\n> Apparently SSL does compression automatically. Oprofile report of the\n> above test case with SSL enabled:\n> \n> samples % image name symbol name\n> 28177 74.4753 libz.so.1.2.3.4 /usr/lib/libz.so.1.2.3.4\n> 1814 4.7946 postgres byteain\n> 1459 3.8563 libc-2.13.so __memcpy_ssse3_back\n> 1437 3.7982 libcrypto.so.0.9.8 /usr/lib/libcrypto.so.0.9.8\n> 896 2.3682 postgres hex_encode\n> 304 0.8035 vmlinux-3.0.0-1-amd64 clear_page_c\n> 271 0.7163 libc-2.13.so __strlen_sse42\n> 222 0.5868 libssl.so.0.9.8 /usr/lib/libssl.so.0.9.8\n> \n> And without:\n> \n> samples % image name symbol name\n> 1601 27.4144 postgres byteain\n> 865 14.8116 postgres hex_encode\n> 835 14.2979 libc-2.13.so __memcpy_ssse3_back\n> 290 4.9658 vmlinux-3.0.0-1-amd64 clear_page_c\n> 280 4.7945 libc-2.13.so __strlen_sse42\n> 184 3.1507 vmlinux-3.0.0-1-amd64 page_fault\n> 174 2.9795 vmlinux-3.0.0-1-amd64 put_mems_allowed\n> \n> \n> Maybe your data is very expensive to compress for some reason?\n\nFunny, I cannot see any calls to libz. On my system (RHEL 3, PostgreSQL\n8.4.8,\nopenssl 0.9.7a) the oprofile reports of the server process look like\nthis:\n\nWith SSL:\n\nsamples % symbol name image name\n5326 77.6611 (no symbol) /lib/libcrypto.so.0.9.7a\n755 11.009 byteaout\n/magwien/postgres-8.4.8/bin/postgres\n378 5.51181 __GI_memcpy /lib/tls/libc-2.3.2.so\n220 3.20793 printtup\n/magwien/postgres-8.4.8/bin/postgres\n\nWithout SSL:\n\nsamples % symbol name image name\n765 55.8394 byteaout\n/magwien/postgres-8.4.8/bin/postgres\n293 21.3869 __GI_memcpy /lib/tls/libc-2.3.2.so\n220 16.0584 printtup\n/magwien/postgres-8.4.8/bin/postgres\n\n\nCould that still be compression?\n\nThe test I am running is:\n\n$ psql \"host=localhost sslmode=... dbname=test\"\ntest=> \\o /dev/null\ntest=> select val from images where id=2;\ntest=> \\q\n\nYours,\nLaurenz Albe\n",
"msg_date": "Mon, 31 Oct 2011 16:34:55 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "On Mon, Oct 31, 2011 at 10:34 AM, Albe Laurenz <[email protected]> wrote:\n> Heikki Linnakangas wrote:\n>>> We selected a 30MB bytea with psql connected with\n>>> \"-h localhost\" and found that it makes a huge\n>>> difference whether we have SSL encryption on or off.\n>>>\n>>> Without SSL the SELECT finished in about a second,\n>>> with SSL it took over 23 seconds (measured with\n>>> \\timing in psql).\n>>> During that time, the CPU is 100% busy.\n>>> All data are cached in memory.\n>>>\n>>> Is this difference as expected?\n>\n> Thanks for looking at that.\n>\n>> I tried to reproduce that, but only saw about 4x difference in the\n>> timing, not 23x.\n>\n> I tried more tests on an idle server, and the factor I observe here is\n> 3 or 4 as you say. The original measurements were taken on a server\n> under load.\n>\n>> oprofile suggests that all that overhead is coming from compression.\n>> Apparently SSL does compression automatically. Oprofile report of the\n>> above test case with SSL enabled:\n>>\n>> samples % image name symbol name\n>> 28177 74.4753 libz.so.1.2.3.4 /usr/lib/libz.so.1.2.3.4\n>> 1814 4.7946 postgres byteain\n>> 1459 3.8563 libc-2.13.so __memcpy_ssse3_back\n>> 1437 3.7982 libcrypto.so.0.9.8 /usr/lib/libcrypto.so.0.9.8\n>> 896 2.3682 postgres hex_encode\n>> 304 0.8035 vmlinux-3.0.0-1-amd64 clear_page_c\n>> 271 0.7163 libc-2.13.so __strlen_sse42\n>> 222 0.5868 libssl.so.0.9.8 /usr/lib/libssl.so.0.9.8\n>>\n>> And without:\n>>\n>> samples % image name symbol name\n>> 1601 27.4144 postgres byteain\n>> 865 14.8116 postgres hex_encode\n>> 835 14.2979 libc-2.13.so __memcpy_ssse3_back\n>> 290 4.9658 vmlinux-3.0.0-1-amd64 clear_page_c\n>> 280 4.7945 libc-2.13.so __strlen_sse42\n>> 184 3.1507 vmlinux-3.0.0-1-amd64 page_fault\n>> 174 2.9795 vmlinux-3.0.0-1-amd64 put_mems_allowed\n>>\n>>\n>> Maybe your data is very expensive to compress for some reason?\n>\n> Funny, I cannot see any calls to libz. On my system (RHEL 3, PostgreSQL\n> 8.4.8,\n> openssl 0.9.7a) the oprofile reports of the server process look like\n> this:\n>\n> With SSL:\n>\n> samples % symbol name image name\n> 5326 77.6611 (no symbol) /lib/libcrypto.so.0.9.7a\n\nthat's a pretty ancient crypto you got there...it may not compress by\ndefault. Heikki's test data will compress super well which would\ntotally skew performance testing to libz since the amount of data\nactually encrypted will be fairly tiny. real world high entropy cases\noften show crypto as the worse offender in my experience.\n\nmerlin\n",
"msg_date": "Mon, 31 Oct 2011 13:50:06 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "Merlin Moncure wrote:\n>>>> We selected a 30MB bytea with psql connected with\n>>>> \"-h localhost\" and found that it makes a huge\n>>>> difference whether we have SSL encryption on or off.\n>>>>\n>>>> Without SSL the SELECT finished in about a second,\n>>>> with SSL it took over 23 seconds (measured with\n>>>> \\timing in psql).\n>>>> During that time, the CPU is 100% busy.\n>>>> All data are cached in memory.\n>>>>\n>>>> Is this difference as expected?\n\n>>> I tried to reproduce that, but only saw about 4x difference in the\n>>> timing, not 23x.\n\n>>> oprofile suggests that all that overhead is coming from compression.\n>>> Apparently SSL does compression automatically. Oprofile report of the\n>>> above test case with SSL enabled:\n[...]\n\n>> Funny, I cannot see any calls to libz. On my system (RHEL 3, PostgreSQL\n>> 8.4.8, openssl 0.9.7a) the oprofile reports of the server process look\n>> like this:\n\n>> samples % symbol name image name\n>> 5326 77.6611 (no symbol) /lib/libcrypto.so.0.9.7a\n\n> that's a pretty ancient crypto you got there...it may not compress by\n> default. Heikki's test data will compress super well which would\n> totally skew performance testing to libz since the amount of data\n> actually encrypted will be fairly tiny. real world high entropy cases\n> often show crypto as the worse offender in my experience.\n\nI experimented some more on a recent system (RHEL6, OpenSSL 1.0.0-fips),\nand it is as you say. Disabling OpenSSL compression in the source (which\nis possible since OpenSSL 1.0.0) does not give me any performance\nimprovement.\n\nSeems you pretty much have to live with at most 1/4 of the performance\nif you want to SELECT large images using SSL.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Thu, 3 Nov 2011 15:48:11 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "On Thu, Nov 03, 2011 at 03:48:11PM +0100, Albe Laurenz wrote:\n> \n> I experimented some more on a recent system (RHEL6, OpenSSL 1.0.0-fips),\n> and it is as you say. Disabling OpenSSL compression in the source (which\n> is possible since OpenSSL 1.0.0) does not give me any performance\n> improvement.\n> \n> Seems you pretty much have to live with at most 1/4 of the performance\n> if you want to SELECT large images using SSL.\n> \n> Yours,\n> Laurenz Albe\n> \n\nHave you tried different ciphers? RC4 is much lighter weight CPU-wise\nthen the typically negotiated cipher. AES128 is also not bad if you\nhave the newer Intel chips with the hardware encryption support. Just\nanother thing to check.\n\nRegards,\nKen\n",
"msg_date": "Thu, 3 Nov 2011 10:01:53 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
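A hedged sketch of how Ken's suggestion could be tried out; ssl_ciphers takes an OpenSSL cipher list string, the exact cipher names depend on the OpenSSL build, and changing it requires a server restart on these releases:

    # postgresql.conf -- illustrative values only
    ssl = on
    ssl_ciphers = 'RC4-SHA:AES128-SHA'

psql prints the negotiated cipher in its banner on connect ('SSL connection (cipher: ..., bits: ...)'), which makes it easy to confirm which cipher is actually being used for the test.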
{
"msg_contents": "On Fri, Oct 28, 2011 at 14:02, Albe Laurenz <[email protected]> wrote:\n> Without SSL the SELECT finished in about a second,\n> with SSL it took over 23 seconds (measured with\n> \\timing in psql).\n\nWhen you query with psql, it requests columns in text format. Since\nbytea hex-encodes its value if output is text, this means it's\ntransmitting 60 MB for a 30 MB bytea value.\n\nIf you could make sure that your app is requesting binary output, then\nyou could cut 50% off this time. As others mentioned, most of the\noverhead is in SSL compression (not encryption), which can be\ndisabled, but is not very easy to do.\n\nBut 23 seconds for 60 MB is still *very* slow, so something else could\nbe going wrong. What kind of CPU is this?\n\nOn Thu, Nov 3, 2011 at 16:48, Albe Laurenz <[email protected]> wrote:\n> Disabling OpenSSL compression in the source (which\n> is possible since OpenSSL 1.0.0) does not give me any performance\n> improvement.\n\nIf it doesn't give you any performance improvement then you haven't\ndisabled compression. Modern CPUs can easily saturate 1 GbitE with\nAES256-encrypted connections. Compression is usually the bottleneck,\nat 20-30 MB/s.\n\nRegards,\nMarti\n",
"msg_date": "Thu, 3 Nov 2011 18:30:00 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
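The 2x expansion Marti describes is easy to confirm from psql on a server where bytea_output is 'hex' (the default from 9.0 on); using the images table from Laurenz's test, the text rendering is two hex characters per byte plus the two-character \x prefix:

    SELECT octet_length(val) AS raw_bytes,
           length(val::text) AS text_chars
    FROM images
    WHERE id = 2;

Requesting the column in binary format instead (e.g. via PQexecParams with resultFormat = 1 in libpq) avoids that encoding step entirely, which is the roughly 50% saving mentioned above.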
{
"msg_contents": "Marti Raudsepp wrote:\r\n>> Disabling OpenSSL compression in the source (which\r\n>> is possible since OpenSSL 1.0.0) does not give me any performance\r\n>> improvement.\r\n> \r\n> If it doesn't give you any performance improvement then you haven't\r\n> disabled compression. Modern CPUs can easily saturate 1 GbitE with\r\n> AES256-encrypted connections. Compression is usually the bottleneck,\r\n> at 20-30 MB/s.\r\n\r\nHmm, my knowledge of OpenSSL is so little that it is well possible that\r\nI did it wrong. I have attached the small patch I used; can you see\r\nwhere I went wrong?\r\n\r\nYours,\r\nLaurenz Albe",
"msg_date": "Fri, 4 Nov 2011 09:43:47 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "On 04.11.2011 10:43, Albe Laurenz wrote:\n> Marti Raudsepp wrote:\n>>> Disabling OpenSSL compression in the source (which\n>>> is possible since OpenSSL 1.0.0) does not give me any performance\n>>> improvement.\n>>\n>> If it doesn't give you any performance improvement then you haven't\n>> disabled compression. Modern CPUs can easily saturate 1 GbitE with\n>> AES256-encrypted connections. Compression is usually the bottleneck,\n>> at 20-30 MB/s.\n>\n> Hmm, my knowledge of OpenSSL is so little that it is well possible that\n> I did it wrong. I have attached the small patch I used; can you see\n> where I went wrong?\n\nThat only works with OpenSSL 1.0.0 - did you upgrade? I thought you were \nusing 0.9.7a earlier.\n\nFWIW, it would be better to test \"#ifdef SSL_OP_NO_COMPRESSION\" \ndirectly, rather than the version number.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 04 Nov 2011 23:32:47 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "Heikki Linnakangas wrote:\r\n>>>> Disabling OpenSSL compression in the source (which\r\n>>>> is possible since OpenSSL 1.0.0) does not give me any performance\r\n>>>> improvement.\r\n\r\n>>> If it doesn't give you any performance improvement then you haven't\r\n>>> disabled compression. Modern CPUs can easily saturate 1 GbitE with\r\n>>> AES256-encrypted connections. Compression is usually the bottleneck,\r\n>>> at 20-30 MB/s.\r\n\r\n>> Hmm, my knowledge of OpenSSL is so little that it is well possible that\r\n>> I did it wrong. I have attached the small patch I used; can you see\r\n>> where I went wrong?\r\n\r\n> That only works with OpenSSL 1.0.0 - did you upgrade? I thought you were\r\n> using 0.9.7a earlier.\r\n> \r\n> FWIW, it would be better to test \"#ifdef SSL_OP_NO_COMPRESSION\"\r\n> directly, rather than the version number.\r\n\r\nYes, I ran these tests with RHEL6 and OpenSSL 1.0.0.\r\n\r\nI guess I have hit the wall here.\r\n\r\nI can't get oprofile to run on this RHEL6 box, it doesn't record\r\nanything, so all I can test is total query duration.\r\n\r\nI tried to disable compression as above, but cannot verify that\r\nI was successful.\r\n\r\nI also tried different ciphers, but no matter what I did, the\r\nduration on the server stayed pretty much the same, 4 to 5 times\r\nmore than without SSL.\r\n\r\nThanks everybody for the help.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Tue, 8 Nov 2011 11:25:29 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "On Tue, Nov 8, 2011 at 12:25, Albe Laurenz <[email protected]> wrote:\n> I can't get oprofile to run on this RHEL6 box, it doesn't record\n> anything, so all I can test is total query duration.\n\nMaybe this helps you with OProfile?\n\nhttp://people.planetpostgresql.org/andrew/index.php?/archives/224-The-joy-of-Vx.html\n\nRegards,\nMarti\n",
"msg_date": "Tue, 8 Nov 2011 12:51:45 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "Marti Raudsepp wrote:\r\n>> I can't get oprofile to run on this RHEL6 box, it doesn't record\r\n>> anything, so all I can test is total query duration.\r\n\r\n> Maybe this helps you with OProfile?\r\n> \r\n> http://people.planetpostgresql.org/andrew/index.php?/archives/224-The-joy-of-Vx.html\r\n\r\nDang, you're right, I wasn't aware that I was on a virtual machine....\r\n\r\nNow it seems that you were right before as well, and I failed to\r\ndisable SSL compression. At any rate this is what oprofile gives me:\r\n\r\nsamples % image name symbol name\r\n6754 83.7861 libz.so.1.2.3 /lib64/libz.so.1.2.3\r\n618 7.6665 libcrypto.so.1.0.0 /usr/lib64/libcrypto.so.1.0.0\r\n534 6.6245 postgres hex_encode\r\n95 1.1785 libc-2.12.so memcpy\r\n\r\nUnfortunately there is hardly any documentation for OpenSSL, but I'll try to\r\nfigure out what I did wrong.\r\n\r\nIf I managed to disable compression, I think that would be good for pretty\r\nmuch everybody who uses SSL with PostgreSQL.\r\n\r\nYours,\r\nLaurez Albe\r\n",
"msg_date": "Tue, 8 Nov 2011 13:35:29 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
},
{
"msg_contents": "Marti Raudsepp wrote:\r\n>> Disabling OpenSSL compression in the source (which\r\n>> is possible since OpenSSL 1.0.0) does not give me any performance\r\n>> improvement.\r\n\r\n> If it doesn't give you any performance improvement then you haven't\r\n> disabled compression. Modern CPUs can easily saturate 1 GbitE with\r\n> AES256-encrypted connections. Compression is usually the bottleneck,\r\n> at 20-30 MB/s.\r\n\r\nI finally managed to disable compression, and the performance\r\nimprovement is dramatic.\r\n\r\nNow I have \"only\" 100% penalty for using SSL, as seen in this\r\noprofile report:\r\n\r\nsamples % image name symbol name\r\n751 50.1670 libcrypto.so.1.0.0 /usr/lib64/libcrypto.so.1.0.0\r\n594 39.6794 postgres hex_encode\r\n83 5.5444 libc-2.12.so memcpy\r\n\r\nI'll post to hackers and see if I can get this into core.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Tue, 8 Nov 2011 14:18:18 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL encryption makes bytea transfer slow"
}
] |
[
{
"msg_contents": "Hi list,\n\nEvery now and then I have write peaks which causes annoying delay on my \nwebsite. No particular reason it seems, just that laws of probability \ndictates that there will be peaks every now and then.\n\nAnyway, thinking of ways to make the peaks more bareable, I saw the new \n9.1 feature to bypass WAL. Problems is mainly that some statistics \ntables (\"x users clicked this link this month\") clog the write cache, \nnot more important writes. I could live with restoring a nightly dump of \nthese tables and loose a days worth of logs.\n\nThough not keen on jumping over to early major versions an old idea of \nputting WAL in RAM came back. Not RAM in main memory but some thingie \npretending to be a drive with proper battery backup.\n\na) It seems to exist odd hardware with RAM modules and if lucky also battery\nb) Some drive manufactureres have done hybird ram-spindle drives \n(compare with possibly more common ssd-spindle hybrides).\n\nb) sounds slightly more appealing since it basically means I put \neverything on those drives and it magically is faster. The a) \nalternatives also seemed to be non ECC which is a no-no and disturbing.\n\nDoes anyone here have any recommendations here?\n\nPricing is not very important but reliability is.\n\nThanks,\nMarcus\n",
"msg_date": "Fri, 28 Oct 2011 17:28:23 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL in RAM"
},
{
"msg_contents": "Marcus Engene <[email protected]> wrote:\n \n> Every now and then I have write peaks which causes annoying delay\n> on my website\n \n> Does anyone here have any recommendations here?\n \nFor our largest machines we put WAL on a RAID1 drive pair dedicated\nto that task, on its own controller with battery-backed cache\nconfigured for write-back. It does make a big difference, because\nwhen a DBA accidentally got this wrong once, we saw the problem you\ndescribe, and moving WAL to the dedicated drives/controller caused\nthe problem to go away.\n \nIf problems remain, look for posts by Greg Smith on how to tune\nthis. You may want to extend your checkpoint completion target,\nmake the background writer more aggressive, reduce shared buffers,\nor tune the OS. But if you can afford to put WAL on a dedicated\nfile system something like the above, that would be a better place\nto start, IMO.\n \n-Kevin\n",
"msg_date": "Fri, 28 Oct 2011 10:45:44 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
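A rough sketch of the software-side knobs Kevin mentions, for reference; the values are illustrative starting points only, not recommendations for any particular box, and each needs testing under the real write load:

    # postgresql.conf -- illustrative only
    checkpoint_completion_target = 0.9   # spread checkpoint writes over more time
    bgwriter_delay = 100ms               # wake the background writer more often
    bgwriter_lru_maxpages = 500          # let it write more dirty buffers per round
    shared_buffers = 2GB                 # consider lowering if checkpoint spikes persist

The hardware-side change (WAL on its own RAID1 pair behind a battery-backed write-back cache) is independent of these settings and, as described above, tends to have the bigger effect.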
{
"msg_contents": "On Fri, Oct 28, 2011 at 10:28 AM, Marcus Engene <[email protected]> wrote:\n> Hi list,\n>\n> Every now and then I have write peaks which causes annoying delay on my\n> website. No particular reason it seems, just that laws of probability\n> dictates that there will be peaks every now and then.\n>\n> Anyway, thinking of ways to make the peaks more bareable, I saw the new 9.1\n> feature to bypass WAL. Problems is mainly that some statistics tables (\"x\n> users clicked this link this month\") clog the write cache, not more\n> important writes. I could live with restoring a nightly dump of these tables\n> and loose a days worth of logs.\n>\n> Though not keen on jumping over to early major versions an old idea of\n> putting WAL in RAM came back. Not RAM in main memory but some thingie\n> pretending to be a drive with proper battery backup.\n>\n> a) It seems to exist odd hardware with RAM modules and if lucky also battery\n> b) Some drive manufactureres have done hybird ram-spindle drives (compare\n> with possibly more common ssd-spindle hybrides).\n>\n> b) sounds slightly more appealing since it basically means I put everything\n> on those drives and it magically is faster. The a) alternatives also seemed\n> to be non ECC which is a no-no and disturbing.\n>\n> Does anyone here have any recommendations here?\n>\n> Pricing is not very important but reliability is.\n\nHave you ruled out SSD? They are a little new, but I'd be looking at\nthe Intel 710. In every case I've seen SSD permanently ends I/O\nissues. DRAM storage solutions I find to be pricey and complicated\nwhen there are so many workable flash options out now.\n\nmerlin\n",
"msg_date": "Fri, 28 Oct 2011 11:11:41 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On 28 Říjen 2011, 18:11, Merlin Moncure wrote:\n> On Fri, Oct 28, 2011 at 10:28 AM, Marcus Engene <[email protected]> wrote:\n>> Hi list,\n>>\n>> Every now and then I have write peaks which causes annoying delay on my\n>> website. No particular reason it seems, just that laws of probability\n>> dictates that there will be peaks every now and then.\n>>\n>> Anyway, thinking of ways to make the peaks more bareable, I saw the new\n>> 9.1\n>> feature to bypass WAL. Problems is mainly that some statistics tables\n>> (\"x\n>> users clicked this link this month\") clog the write cache, not more\n>> important writes. I could live with restoring a nightly dump of these\n>> tables\n>> and loose a days worth of logs.\n>>\n>> Though not keen on jumping over to early major versions an old idea of\n>> putting WAL in RAM came back. Not RAM in main memory but some thingie\n>> pretending to be a drive with proper battery backup.\n>>\n>> a) It seems to exist odd hardware with RAM modules and if lucky also\n>> battery\n>> b) Some drive manufactureres have done hybird ram-spindle drives\n>> (compare\n>> with possibly more common ssd-spindle hybrides).\n>>\n>> b) sounds slightly more appealing since it basically means I put\n>> everything\n>> on those drives and it magically is faster. The a) alternatives also\n>> seemed\n>> to be non ECC which is a no-no and disturbing.\n>>\n>> Does anyone here have any recommendations here?\n>>\n>> Pricing is not very important but reliability is.\n>\n> Have you ruled out SSD? They are a little new, but I'd be looking at\n> the Intel 710. In every case I've seen SSD permanently ends I/O\n> issues. DRAM storage solutions I find to be pricey and complicated\n> when there are so many workable flash options out now.\n\nAre you sure SSDs are a reasonable option for WAL? I personally don't\nthink it's a good option, because WAL is written in a sequential manner,\nand that's not an area where SSDs beat spinners really badly.\n\nFor example the Intel 710 SSD has a sequential write speed of 210MB/s,\nwhile a simple SATA 7.2k drive can write about 50-100 MB/s for less than\n1/10 of the 710 price.\n\nI'm not saying SSDs are a bad thing, but I think it's a waste of money to\nuse them for WAL.\n\nTomas\n\n",
"msg_date": "Fri, 28 Oct 2011 20:26:19 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On Fri, Oct 28, 2011 at 1:26 PM, Tomas Vondra <[email protected]> wrote:\n> On 28 Říjen 2011, 18:11, Merlin Moncure wrote:\n>> On Fri, Oct 28, 2011 at 10:28 AM, Marcus Engene <[email protected]> wrote:\n>>> Hi list,\n>>>\n>>> Every now and then I have write peaks which causes annoying delay on my\n>>> website. No particular reason it seems, just that laws of probability\n>>> dictates that there will be peaks every now and then.\n>>>\n>>> Anyway, thinking of ways to make the peaks more bareable, I saw the new\n>>> 9.1\n>>> feature to bypass WAL. Problems is mainly that some statistics tables\n>>> (\"x\n>>> users clicked this link this month\") clog the write cache, not more\n>>> important writes. I could live with restoring a nightly dump of these\n>>> tables\n>>> and loose a days worth of logs.\n>>>\n>>> Though not keen on jumping over to early major versions an old idea of\n>>> putting WAL in RAM came back. Not RAM in main memory but some thingie\n>>> pretending to be a drive with proper battery backup.\n>>>\n>>> a) It seems to exist odd hardware with RAM modules and if lucky also\n>>> battery\n>>> b) Some drive manufactureres have done hybird ram-spindle drives\n>>> (compare\n>>> with possibly more common ssd-spindle hybrides).\n>>>\n>>> b) sounds slightly more appealing since it basically means I put\n>>> everything\n>>> on those drives and it magically is faster. The a) alternatives also\n>>> seemed\n>>> to be non ECC which is a no-no and disturbing.\n>>>\n>>> Does anyone here have any recommendations here?\n>>>\n>>> Pricing is not very important but reliability is.\n>>\n>> Have you ruled out SSD? They are a little new, but I'd be looking at\n>> the Intel 710. In every case I've seen SSD permanently ends I/O\n>> issues. DRAM storage solutions I find to be pricey and complicated\n>> when there are so many workable flash options out now.\n>\n> Are you sure SSDs are a reasonable option for WAL? I personally don't\n> think it's a good option, because WAL is written in a sequential manner,\n> and that's not an area where SSDs beat spinners really badly.\n>\n> For example the Intel 710 SSD has a sequential write speed of 210MB/s,\n> while a simple SATA 7.2k drive can write about 50-100 MB/s for less than\n> 1/10 of the 710 price.\n>\n> I'm not saying SSDs are a bad thing, but I think it's a waste of money to\n> use them for WAL.\n\nsure, but then you have to have a more complicated setup with a\ndrive(s) designated for WAL, another for storage, etc. Also, your\nargument falls away if the WAL is shared with another drive. The era\nof the SSD is here. All new systems I plan will have SSD storage\nunless cost pressures are extreme -- often with a single drive unless\nyou need the extra storage. If I need availability, instead of RAID,\nI'll just build hot standby in.\n\nmerlin\n",
"msg_date": "Fri, 28 Oct 2011 13:40:17 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "Hi,\n\nOn 28 Říjen 2011, 17:28, Marcus Engene wrote:\n> Hi list,\n>\n> Every now and then I have write peaks which causes annoying delay on my\n> website. No particular reason it seems, just that laws of probability\n> dictates that there will be peaks every now and then.\n>\n> Anyway, thinking of ways to make the peaks more bareable, I saw the new\n> 9.1 feature to bypass WAL. Problems is mainly that some statistics\n> tables (\"x users clicked this link this month\") clog the write cache,\n> not more important writes. I could live with restoring a nightly dump of\n> these tables and loose a days worth of logs.\n\nWhy do you think the write activity is related to WAL? Does that mean bulk\nloading of data by users or what? Have you measured how many WAL segments\nthat creates? What triggers that write activity?\n\nWrite peaks usually mean a checkpoint is in progress, and that has nothing\nto do with WAL. More precisely - it does not write data to WAL but to data\nfiles, so moving WAL to a separate device won't help at all.\n\nThe common scenario is about this:\n\n(1) The users do some changes (INSERT/UPDATE/DELETE) to the database, it's\nwritten to the WAL (fsynced to the device). This is not a lot of work, as\nwriting to WAL is a sequential access and the actual modifications are\nstored in the shared buffers (not forced to the disk yet).\n\n(2) A checkpoint is triggered, i.e. either a checkpoint_timeout expires or\nall available WAL segments are filled - this means all the dirty buffers\nhas to be actually written from shared buffers to the datafiles. This is a\nPITA, as it's a rather random access.\n\nAnyway there are options to tune the write performance - most notably\ncheckpoint_segments, checkpoint_completion_target, checkpoint_timeout.\n\n> Though not keen on jumping over to early major versions an old idea of\n> putting WAL in RAM came back. Not RAM in main memory but some thingie\n> pretending to be a drive with proper battery backup.\n>\n> a) It seems to exist odd hardware with RAM modules and if lucky also\n> battery\n> b) Some drive manufactureres have done hybird ram-spindle drives\n> (compare with possibly more common ssd-spindle hybrides).\n>\n> b) sounds slightly more appealing since it basically means I put\n> everything on those drives and it magically is faster. The a)\n> alternatives also seemed to be non ECC which is a no-no and disturbing.\n>\n> Does anyone here have any recommendations here?\n\nThe thing to look for when talking about WAL is a sequential write speed.\nThe solutions you've mentioned above are great for random access, but when\nyou need a sequential speed it's a waste of money (IMHO).\n\n> Pricing is not very important but reliability is.\n\nMy recommendation is to find out what's wrong before buying anything. My\nimpression is that you don't know the actual cause, so you really don't\nknow what features should the device have.\n\nIf you're willing to spend money without proper analysis, send the money\nto me - the result will be about the same. You'll spend money without\nactually solving the issue, plus it will make me a bit happier.\n\nAnyway my recommendation would be these:\n\n(1) Find out what actually happens, i.e. check if it's a checkpoint issue,\nor what is going on. 
Enable log_checkpoints etc.\n\n(2) Try to tune the db a bit - not sure what version are you using or what\nare the important values, but generally this is a good starting point for\nwrite-heavy databases\n\ncheckpoint_segments = 64\ncheckpoint_completion_target = 0.9\ncheckpoint_timeout = 30 minutes\n\n(3) Provide more details - Pg version, important config values, hardware\ninfo etc.\n\nOnly if I knew what's wrong and if the above things did not help, I'd\nconsider buying a new hw. I'd probably go with one of those options:\n\n(a) Move the WAL to a separate disk (not SSD), or maybe a RAID1/RAID10 of\nsuch drives. Start with one, use more if needed and a controller with a\nBBWC.\n\n(b) Move the database to a SSD drive, leave the WAL on the original\nlocation (not SSD). This might be signoficantly more expensive, especially\nif you want to make it reliable (building RAID1 of SSDs or something like\nthat).\n\nTomas\n\n",
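A sketch of what step (1) can look like in practice; log_checkpoints goes into postgresql.conf, and the query below uses the standard statistics view:

# postgresql.conf: make every checkpoint show up in the server log
log_checkpoints = on

-- then see whether the write peaks line up with checkpoint activity:
SELECT checkpoints_timed,    -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,      -- checkpoints forced by running out of WAL segments
       buffers_checkpoint,   -- buffers written by the checkpointer
       buffers_backend       -- buffers ordinary backends had to write themselves
FROM pg_stat_bgwriter;

A high checkpoints_req count relative to checkpoints_timed usually means checkpoint_segments is too small for the write load.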
"msg_date": "Fri, 28 Oct 2011 21:03:32 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On 28 Říjen 2011, 20:40, Merlin Moncure wrote:\n> sure, but then you have to have a more complicated setup with a\n> drive(s) designated for WAL, another for storage, etc. Also, your\n> argument falls away if the WAL is shared with another drive. The era\n> of the SSD is here. All new systems I plan will have SSD storage\n> unless cost pressures are extreme -- often with a single drive unless\n> you need the extra storage. If I need availability, instead of RAID,\n> I'll just build hot standby in.\n\nWell, sure - I'm actually a fan of SSDs. Using an SSDs for the datafiles,\nor using an SSD for the whole database (including WAL) makes sense, but my\nimpression was that the OP wants to buy a new drive and use it for WAL\nonly and that's not really cost effective I guess.\n\nTomas\n\n",
"msg_date": "Fri, 28 Oct 2011 21:07:44 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On Fri, Oct 28, 2011 at 12:28 PM, Marcus Engene <[email protected]> wrote:\n> Hi list,\n>\n> Every now and then I have write peaks which causes annoying delay on my\n> website. No particular reason it seems, just that laws of probability\n> dictates that there will be peaks every now and then.\n>\n> Anyway, thinking of ways to make the peaks more bareable, I saw the new 9.1\n> feature to bypass WAL. Problems is mainly that some statistics tables (\"x\n> users clicked this link this month\") clog the write cache, not more\n> important writes. I could live with restoring a nightly dump of these tables\n> and loose a days worth of logs.\n> ...\n> Does anyone here have any recommendations here?\n\nYou didn't post configuration details.\n\nJust OTOMH, I'd say you have a low shared_buffers setting and that\nincreasing it could help.\n\nThat's assuming the updates you mention on statistic tables update\nheavily the same rows over and over, case in which shared buffers\nwould tremendously help.\n",
"msg_date": "Fri, 28 Oct 2011 16:09:04 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On 10/28/2011 12:26 PM, Tomas Vondra wrote:\n> For example the Intel 710 SSD has a sequential write speed of 210MB/s,\n> while a simple SATA 7.2k drive can write about 50-100 MB/s for less than\n> 1/10 of the 710 price.\nBulk data transfer rates mean almost nothing in the context of a database\n(unless you're for example backing it up by copying the files to another \nmachine...)\nThe key factor typically is small block writes/s (for WAL) and random \nsmall block\nreads/s (for data). 710 or similar performance SSDs will deliver on the \norder\nof 20-50x the performance of a traditional hard drive in these areas.\n\n\n",
"msg_date": "Fri, 28 Oct 2011 13:55:49 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On 10/28/11 5:45 , Kevin Grittner wrote:\n> Marcus Engene<[email protected]> wrote:\n>\n> \n>> Every now and then I have write peaks which causes annoying delay\n>> on my website\n>> \n>\n> \n>> Does anyone here have any recommendations here?\n>> \n>\n> For our largest machines we put WAL on a RAID1 drive pair dedicated\n> to that task, on its own controller with battery-backed cache\n> configured for write-back. It does make a big difference, because\n> when a DBA accidentally got this wrong once, we saw the problem you\n> describe, and moving WAL to the dedicated drives/controller caused\n> the problem to go away.\n>\n> If problems remain, look for posts by Greg Smith on how to tune\n> this. You may want to extend your checkpoint completion target,\n> make the background writer more aggressive, reduce shared buffers,\n> or tune the OS. But if you can afford to put WAL on a dedicated\n> file system something like the above, that would be a better place\n> to start, IMO.\n>\n> -Kevin\n>\n> \n\nThe problem I have with battery backed raid controllers is the battery \npart. They're simply not reliable and requires testing etc which I as a \nrather insignificant customer at a generic datacenter cannot have done \nproperly. I have however found this thing which I find primising:\nhttp://news.cnet.com/8301-21546_3-10273658-10253464.html\nAn Adaptec 5z-controller which has a supercap and flushes to a SSD drive \non mishap. Perhaps that's the answer to everything?\n\nAs per others suggestions I don't feel encouraged to put WAL on SSD from \nfinding several texts by Greg Smith and others warning about this. I do \nhave 2x OCI Sandforce 1500 drives (with supercap) for some burst load \ntables.\n\nThe reason I started to think about putting WAL on a RAM drive to begin \nwith was that performance figures for unlogged tables looked very \npromising indeed. And the test were of the sort that's occupying my \nbandwidth; accumulating statistical writes.\n\nThe present pg9 computer is a Pg 9.0.4, Debian Squeeze, 2xXeon, 72GB, \nsoftware 4xRAID6(sorry) + 2xSSD. It's OLTP website with 10M products and \nSOLR for FTS. During peak it's using ~3-4% CPU, and it's 99.9% reads or \nthereabouts. It's the peaks we want to take down. RAID6 or not, with a \nspindle as bottleneck there is just a certain max# of writes/s.\n\nThanks for your answers so far!\n\nBest regards,\nMarcus\n\n",
"msg_date": "Sat, 29 Oct 2011 19:54:21 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On Sat, Oct 29, 2011 at 11:54 AM, Marcus Engene <[email protected]> wrote:\n\n> The problem I have with battery backed raid controllers is the battery part.\n> They're simply not reliable and requires testing etc which I as a rather\n> insignificant customer at a generic datacenter cannot have done properly. I\n> have however found this thing which I find primising:\n> http://news.cnet.com/8301-21546_3-10273658-10253464.html\n> An Adaptec 5z-controller which has a supercap and flushes to a SSD drive on\n> mishap. Perhaps that's the answer to everything?\n\nIn over 10 years of using hardware RAID controllers with battery\nbackup on many many machines, I have had exactly zero data loss due to\na failed battery backup. Of course proper monitoring is important, to\nmake sure the batteries aren't old and dead, but every single BBU RAID\ncontroller I have used automatically switched from write back to write\nthrough when they detected a bad battery pack.\n\nProper testing is essential whether it's BBU Caching or using an SSD,\nand failure to do so is inconceivable if your data is at all\nimportant. Given the current high failure rate of SSDs due to\nfirmware issues (and it's not just the intel drives experiencing such\nfailures) I'm much more confident in Areca, 3Ware, and LSI BBU RAID\ncontrollers right now than I am in SSDs.\n\n> As per others suggestions I don't feel encouraged to put WAL on SSD from\n> finding several texts by Greg Smith and others warning about this. I do have\n> 2x OCI Sandforce 1500 drives (with supercap) for some burst load tables.\n>\n> The reason I started to think about putting WAL on a RAM drive to begin with\n> was that performance figures for unlogged tables looked very promising\n> indeed. And the test were of the sort that's occupying my bandwidth;\n> accumulating statistical writes.\n>\n> The present pg9 computer is a Pg 9.0.4, Debian Squeeze, 2xXeon, 72GB,\n> software 4xRAID6(sorry) + 2xSSD. It's OLTP website with 10M products and\n> SOLR for FTS. During peak it's using ~3-4% CPU, and it's 99.9% reads or\n> thereabouts. It's the peaks we want to take down. RAID6 or not, with a\n> spindle as bottleneck there is just a certain max# of writes/s.\n\nFirst things first, get off RAID-6. A 4 drive RAID-6 gives no more\nstorage than a 4 drive RAID-10, and is painfully slow by comparison.\nLooking at SSDs for WAL is putting the cart about 1,000 miles ahead of\nthe horse at this point. You'd be much better off migrating to a\nsingle SSD for everything than running on a 4 disk RAID-6.\n",
"msg_date": "Sat, 29 Oct 2011 14:11:42 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL in RAM"
},
{
"msg_contents": "On 10/29/11 10:11 , Scott Marlowe wrote:\n> In over 10 years of using hardware RAID controllers with battery\n> backup on many many machines, I have had exactly zero data loss due to\n> a failed battery backup. Of course proper monitoring is important, to\n> make sure the batteries aren't old and dead, but every single BBU RAID\n> controller I have used automatically switched from write back to write\n> through when they detected a bad battery pack.\n>\n> Proper testing is essential whether it's BBU Caching or using an SSD,\n> and failure to do so is inconceivable if your data is at all\n> important. Given the current high failure rate of SSDs due to\n> firmware issues (and it's not just the intel drives experiencing such\n> failures) I'm much more confident in Areca, 3Ware, and LSI BBU RAID\n> controllers right now than I am in SSDs.\n> \n\nRimu got me a setup with 2x5805 BBU configured as two RAID10 with\nSAS 15k rpm drives and on top of that 2x Xeon E5645 (the hex core).\nSince I heard warnings that with non software raids, the machine could be\nunresponsive during boot when doing a rebuild, I took small 300G drives.\nNot that SAS 15k come in that much bigger sizes, but still.\n\nI chickened out from pg 9.1 due to the low minor number.\n\nI also set...\nwal_buffers = 16MB\n...which used to be default 64kB which possibly could explain some of\nthe choke problems at write bursts.\n> \n>> As per others suggestions I don't feel encouraged to put WAL on SSD from\n>> finding several texts by Greg Smith and others warning about this. I do have\n>> 2x OCI Sandforce 1500 drives (with supercap) for some burst load tables.\n>>\n>> The reason I started to think about putting WAL on a RAM drive to begin with\n>> was that performance figures for unlogged tables looked very promising\n>> indeed. And the test were of the sort that's occupying my bandwidth;\n>> accumulating statistical writes.\n>>\n>> The present pg9 computer is a Pg 9.0.4, Debian Squeeze, 2xXeon, 72GB,\n>> software 4xRAID6(sorry) + 2xSSD. It's OLTP website with 10M products and\n>> SOLR for FTS. During peak it's using ~3-4% CPU, and it's 99.9% reads or\n>> thereabouts. It's the peaks we want to take down. RAID6 or not, with a\n>> spindle as bottleneck there is just a certain max# of writes/s.\n>> \n> First things first, get off RAID-6. A 4 drive RAID-6 gives no more\n> storage than a 4 drive RAID-10, and is painfully slow by comparison.\n> Looking at SSDs for WAL is putting the cart about 1,000 miles ahead of\n> the horse at this point. You'd be much better off migrating to a\n> single SSD for everything than running on a 4 disk RAID-6.\n>\n> \n\nMessage received and understood :)\n\nHaving read up too much on drive reliability paranoia in combination\nwith going from 7k2 -> 15k I feel a bit uneasy, but this mama is fast.\nI suppose a little bit could be credited the newly restored dump instead\nof the little over a year entropy in the other machine. But I also did some\nupdate/write torture and it was hard to provoke any io wait.\n\nI put OS & WAL on one array and the general data files on the other.\nThe data directory that used to be on the SSD drive was also put on the\nWAL raid.\n\nThanks for your advices!\nMarcus\n",
"msg_date": "Tue, 29 Nov 2011 17:58:56 +0100",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL in RAM"
}
] |
[
{
"msg_contents": "Dear all,\n\nI am new to PG but I have a solid background on tuning in Oracle and MSSQL.\nI have a query coming out from a piece of software from our SW-Stack (I can't change it) and of course it takes a large amount of time.\n\nThe table I am query are inherited (partitioned) and the query is the following (names are changed for policy):\n\n[SELECT]LOG: duration: 427514.480 ms execute <unnamed>: select\n \"f_table\".\"fk_column\" as \"c0\",\n \"f_table\".\"fk_column\" as \"c1\"\n from\n \"s_schema\".\"f_table\" as \"f_table\"\n where\n \"f_table\".\"fk_column\" = 'somevalue'\n group by\n \"f_table\".\"fk_column\"\n order by\n \"f_table\".\"fk_column\" ASC\n\nthe fk_column/somevalue is the \"partition key\" and the planner correctly purge the inherited table accordingly.\nRecords in partitions vary from a min of 30M to max of 160M rows.\n\n'Group (cost=0.00..4674965.80 rows=200 width=17)'\n' -> Append (cost=0.00..4360975.94 rows=125595945 width=17)'\n' -> Index Scan using f_table_pkey on f_table (cost=0.00..5.64 rows=1 width=58)'\n' Index Cond: ((fk_column)::text = 'somevalue'::text)'\n' -> Seq Scan on f_table _scxc f_table (cost=0.00..4360970.30 rows=125595944 width=17)'\n' Filter: ((fk_column)::text = 'somevalue'::text)'\n\ndisabling the seq_scan do not help it forces the index but it takes ages.\n\nIn each partition the value of fk_column is just one (being the partition key) and I am expecting that this is checked on the constraint by the planner.\nFurthermore I have put an index on fk_column (tried both btree and hash) however the plan is always a seq_scan on the partition, even if the index has only one value?\n\nRegardless the constraint (which I think it should be taken into consideration here) I am expecting that through Index Scan would easily figure out that the value.\nIn theory there should be no need to access the table here but perform everything on the index object (and of course in the \"father\" table).\nFurthemore I don't understand why on the main table is using an index scan (on 0 rows).\nYes: Analyzed.\n\nI fear I am missing something on Index usage in Postgres.\n\ncheers,\ng\n\n\n4 CPU (on VMWare) + 8G of RAM\n\nseq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 2.5 # same scale as above\ncpu_tuple_cost = 0.01 # same scale as above\ncpu_index_tuple_cost = 0.005 # same scale as above\ncpu_operator_cost = 0.0025 # same scale as above\ndefault_statistics_target = 100 # range 1-10000\nconstraint_exclusion = partition # on, off, or partition\nshared_buffers = 960MB # min 128kB\ntemp_buffers = 256MB # min 800kB\nmax_prepared_transactions = 100 # zero disables the feature\nwork_mem = 192MB # min 64kB\nmaintenance_work_mem = 480MB # min 1MB\n\n\n\n\n\n\nDear all, I am new to PG but I have a solid background on tuning in Oracle and MSSQL.I have a query coming out from a piece of software from our SW-Stack (I can’t change it) and of course it takes a large amount of time. 
The table I am query are inherited (partitioned) and the query is the following (names are changed for policy): [SELECT]LOG: duration: 427514.480 ms execute <unnamed>: select \"f_table\".\"fk_column\" as \"c0\", \"f_table\".\"fk_column\" as \"c1\" from \"s_schema\".\"f_table\" as \"f_table\" where \"f_table\".\"fk_column\" = 'somevalue' group by \"f_table\".\"fk_column\" order by \"f_table\".\"fk_column\" ASC the fk_column/somevalue is the “partition key” and the planner correctly purge the inherited table accordingly.Records in partitions vary from a min of 30M to max of 160M rows. 'Group (cost=0.00..4674965.80 rows=200 width=17)'' -> Append (cost=0.00..4360975.94 rows=125595945 width=17)'' -> Index Scan using f_table_pkey on f_table (cost=0.00..5.64 rows=1 width=58)'' Index Cond: ((fk_column)::text = 'somevalue'::text)'' -> Seq Scan on f_table _scxc f_table (cost=0.00..4360970.30 rows=125595944 width=17)'' Filter: ((fk_column)::text = 'somevalue'::text)' disabling the seq_scan do not help it forces the index but it takes ages. In each partition the value of fk_column is just one (being the partition key) and I am expecting that this is checked on the constraint by the planner.Furthermore I have put an index on fk_column (tried both btree and hash) however the plan is always a seq_scan on the partition, even if the index has only one value? Regardless the constraint (which I think it should be taken into consideration here) I am expecting that through Index Scan would easily figure out that the value.In theory there should be no need to access the table here but perform everything on the index object (and of course in the “father” table).Furthemore I don’t understand why on the main table is using an index scan (on 0 rows).Yes: Analyzed. I fear I am missing something on Index usage in Postgres. cheers,g 4 CPU (on VMWare) + 8G of RAM seq_page_cost = 1.0 # measured on an arbitrary scalerandom_page_cost = 2.5 # same scale as abovecpu_tuple_cost = 0.01 # same scale as abovecpu_index_tuple_cost = 0.005 # same scale as abovecpu_operator_cost = 0.0025 # same scale as abovedefault_statistics_target = 100 # range 1-10000constraint_exclusion = partition # on, off, or partitionshared_buffers = 960MB # min 128kBtemp_buffers = 256MB # min 800kBmax_prepared_transactions = 100 # zero disables the featurework_mem = 192MB # min 64kBmaintenance_work_mem = 480MB # min 1MB",
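For readers unfamiliar with this partitioning style, a minimal sketch of the layout being described, reusing the anonymised names from the post rather than the poster's actual DDL:

CREATE TABLE s_schema.f_table_somevalue (
    CHECK (fk_column = 'somevalue')   -- each child holds exactly one partition key value
) INHERITS (s_schema.f_table);

-- with constraint_exclusion = partition, only the matching child is scanned:
EXPLAIN ANALYZE
SELECT fk_column
FROM s_schema.f_table
WHERE fk_column = 'somevalue'
GROUP BY fk_column;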
"msg_date": "Fri, 28 Oct 2011 19:27:31 +0200",
"msg_from": "\"Sorbara, Giorgio (CIOK)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange query plan"
},
{
"msg_contents": "Hi,\n\nOn 28 Říjen 2011, 19:27, Sorbara, Giorgio (CIOK) wrote:\n> Dear all,\n>\n> I am new to PG but I have a solid background on tuning in Oracle and\n> MSSQL.\n> I have a query coming out from a piece of software from our SW-Stack (I\n> can't change it) and of course it takes a large amount of time.\n>\n> The table I am query are inherited (partitioned) and the query is the\n> following (names are changed for policy):\n\nThat's a bit ridiculous policy, especially as you've used the same fake\ncolumn name (fk_column) for all columns. Does that mean you're reading\njust one column, or that there are actually more columns? I'd guess the\nfirst option, as fk_column is referenced twice in the select list ...\n\n> [SELECT]LOG: duration: 427514.480 ms execute <unnamed>: select\n> \"f_table\".\"fk_column\" as \"c0\",\n> \"f_table\".\"fk_column\" as \"c1\"\n> from\n> \"s_schema\".\"f_table\" as \"f_table\"\n> where\n> \"f_table\".\"fk_column\" = 'somevalue'\n> group by\n> \"f_table\".\"fk_column\"\n> order by\n> \"f_table\".\"fk_column\" ASC\n>\n> the fk_column/somevalue is the \"partition key\" and the planner correctly\n> purge the inherited table accordingly.\n> Records in partitions vary from a min of 30M to max of 160M rows.\n>\n> 'Group (cost=0.00..4674965.80 rows=200 width=17)'\n> ' -> Append (cost=0.00..4360975.94 rows=125595945 width=17)'\n> ' -> Index Scan using f_table_pkey on f_table (cost=0.00..5.64\n> rows=1 width=58)'\n> ' Index Cond: ((fk_column)::text = 'somevalue'::text)'\n> ' -> Seq Scan on f_table _scxc f_table (cost=0.00..4360970.30\n> rows=125595944 width=17)'\n> ' Filter: ((fk_column)::text = 'somevalue'::text)'\n>\n> disabling the seq_scan do not help it forces the index but it takes ages.\n>\n> In each partition the value of fk_column is just one (being the partition\n> key) and I am expecting that this is checked on the constraint by the\n> planner.\n> Furthermore I have put an index on fk_column (tried both btree and hash)\n> however the plan is always a seq_scan on the partition, even if the index\n> has only one value?\n\nI'm a bit confused right now. The fk_column is used for partitioning, so\n\"fk_column = somevalue\" actually means \"give me all data from exactly one\npartition, right?\n\nIn that case the above behaviour is expected, because index scan would\nmean a lot of random I/O. MVCC in PostgreSQL works very differently,\ncompared to Oracle for example - the indexes do not contain necessary\nvisibility info (which transactions can see those records), so whenever\nyou read a tuple from index, you have to check the data in the actual\ntable.\n\nSo an index scan of the whole table means \"read the whole index and the\nwhole table\" and the table is accessed randomly (which kinda defeats the\ndb cache). So the sequential scan is the expected and perfectly sane.\n\nBTW this should change in 9.2, as there is an index-only scan implementation.\n\n> Regardless the constraint (which I think it should be taken into\n> consideration here) I am expecting that through Index Scan would easily\n> figure out that the value.\n> In theory there should be no need to access the table here but perform\n> everything on the index object (and of course in the \"father\" table).\n> Furthemore I don't understand why on the main table is using an index scan\n> (on 0 rows).\n\nNot true. 
PostgreSQL MVCC does not work that - see explanation above.\n\n> I fear I am missing something on Index usage in Postgres.\n\nYup, seems like that.\n\nAnyway, a few recommendations / questions:\n\n1) Don't post EXPLAIN output, post EXPLAIN ANALYZE if possible.\n\n2) How many live tuples are there? Are the really 125.595.945 live rows,\nor is the table bloated? When you do a \"select count(*)\" from the table,\nwhat number you get?\n\n3) What amount of data are we talking here? 125 million rows could be 10MB\nor 10GB, hard to guess.\n\nTomas\n\n",
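A sketch of how questions (2) and (3) can be answered; the relation names below are placeholders in the same anonymised style as the rest of the thread:

-- (2) live vs. dead tuples, to spot bloat:
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname LIKE 'f_table%';

-- (3) on-disk size of the partition, including indexes and TOAST:
SELECT pg_size_pretty(pg_total_relation_size('s_schema.f_table_somevalue'));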
"msg_date": "Fri, 28 Oct 2011 20:10:03 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan"
},
{
"msg_contents": "\r\nHi Tomas, \r\n\r\nand thank you for your reply.\r\nInline my comments\r\n\r\n> -----Original Message-----\r\n> From: Tomas Vondra [mailto:[email protected]]\r\n> Sent: 28 October 2011 8:10 PM\r\n> To: Sorbara, Giorgio (CIOK)\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] Strange query plan\r\n> \r\n> Hi,\r\n> \r\n> On 28 Říjen 2011, 19:27, Sorbara, Giorgio (CIOK) wrote:\r\n> > Dear all,\r\n> >\r\n> > I am new to PG but I have a solid background on tuning in Oracle and\r\n> > MSSQL.\r\n> > I have a query coming out from a piece of software from our SW-Stack\r\n> (I\r\n> > can't change it) and of course it takes a large amount of time.\r\n> >\r\n> > The table I am query are inherited (partitioned) and the query is the\r\n> > following (names are changed for policy):\r\n> \r\n> That's a bit ridiculous policy, especially as you've used the same fake\r\n> column name (fk_column) for all columns. Does that mean you're reading\r\n> just one column, or that there are actually more columns? I'd guess the\r\n> first option, as fk_column is referenced twice in the select list ...\r\n\r\nSorry but that is the exact query (I'll ignore the policy and post the exact columns from now on).\r\nJust to be clear is a query generated by Mondrian (ROLAP engine) for a degenerated dimension and it looks like this:\r\n\r\nselect\r\n\"f_suipy\".\"fk_theme\" as \"c0\",\r\n\"f_suipy\".\"fk_theme\" as \"c1\"\r\nfrom\r\n\"gaez\".\"f_suipy\" as \"f_suipy\"\r\nwhere\r\n\"f_suipy\".\"fk_theme\" = 'main_py_six_scxc'\r\ngroup by\r\n\"f_suipy\".\"fk_theme\"\r\norder by\r\n\"f_suipy\".\"fk_theme\" ASC;\r\n\r\nwe have a total of 18 partitions.\r\n\r\n> >\r\n> > the fk_column/somevalue is the \"partition key\" and the planner\r\n> correctly\r\n> > purge the inherited table accordingly.\r\n> > Records in partitions vary from a min of 30M to max of 160M rows.\r\n> >\r\n> > 'Group (cost=0.00..4674965.80 rows=200 width=17)'\r\n> > ' -> Append (cost=0.00..4360975.94 rows=125595945 width=17)'\r\n> > ' -> Index Scan using f_table_pkey on f_table\r\n> (cost=0.00..5.64\r\n> > rows=1 width=58)'\r\n> > ' Index Cond: ((fk_column)::text = 'somevalue'::text)'\r\n> > ' -> Seq Scan on f_table _scxc f_table\r\n> (cost=0.00..4360970.30\r\n> > rows=125595944 width=17)'\r\n> > ' Filter: ((fk_column)::text = 'somevalue'::text)'\r\n> >\r\n> > disabling the seq_scan do not help it forces the index but it takes\r\n> ages.\r\n> >\r\n> > In each partition the value of fk_column is just one (being the\r\n> partition\r\n> > key) and I am expecting that this is checked on the constraint by the\r\n> > planner.\r\n> > Furthermore I have put an index on fk_column (tried both btree and\r\n> hash)\r\n> > however the plan is always a seq_scan on the partition, even if the\r\n> index\r\n> > has only one value?\r\n> \r\n> I'm a bit confused right now. The fk_column is used for partitioning,\r\n> so\r\n> \"fk_column = somevalue\" actually means \"give me all data from exactly\r\n> one\r\n> partition, right?\r\n\r\nYes, but there is an enforced constraint telling me that column can host only one value.\r\n\r\n> \r\n> In that case the above behaviour is expected, because index scan would\r\n> mean a lot of random I/O. 
MVCC in PostgreSQL works very differently,\r\n> compared to Oracle for example - the indexes do not contain necessary\r\n> visibility info (which transactions can see those records), so whenever\r\n> you read a tuple from index, you have to check the data in the actual\r\n> table.\r\n> \r\n> So an index scan of the whole table means \"read the whole index and the\r\n> whole table\" and the table is accessed randomly (which kinda defeats\r\n> the\r\n> db cache). So the sequential scan is the expected and perfectly sane.\r\n> \r\n> BTW this should change in 9.2, as there is an index-only scan\r\n> implementation.\r\n> \r\n> > Regardless the constraint (which I think it should be taken into\r\n> > consideration here) I am expecting that through Index Scan would\r\n> easily\r\n> > figure out that the value.\r\n> > In theory there should be no need to access the table here but\r\n> perform\r\n> > everything on the index object (and of course in the \"father\" table).\r\n> > Furthemore I don't understand why on the main table is using an index\r\n> scan\r\n> > (on 0 rows).\r\n> \r\n> Not true. PostgreSQL MVCC does not work that - see explanation above.\r\n> \r\n> > I fear I am missing something on Index usage in Postgres.\r\n> \r\n> Yup, seems like that.\r\n\r\nOk... so since the index is not version aware I have to check the version in the \"data\" segment to be sure I am pointing at the right value.\r\nI can see now there is no point at using this partitioning scheme... it was sort of perfect to me as I could drive the partition easily with a degenerated dimension. Except for this small issue (waiting more than 10 min is not an option).\r\n\r\nFurthermore I am afraid that even partial indexes won't work.\r\n\r\n> \r\n> Anyway, a few recommendations / questions:\r\n> \r\n> 1) Don't post EXPLAIN output, post EXPLAIN ANALYZE if possible.\r\n\r\nGroup (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)\r\n -> Append (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)\r\n -> Index Scan using f_suipy_pkey on f_suipy (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)\r\n Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)\r\n -> Seq Scan on f_suipy_main_py_six_scxc f_suipy (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)\r\n Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)\r\n Total runtime: 550943.699 ms\r\n> \r\n> 2) How many live tuples are there? Are the really 125.595.945 live\r\n> rows,\r\n> or is the table bloated? When you do a \"select count(*)\" from the\r\n> table,\r\n> what number you get?\r\n\r\nSELECT count(0)\r\n FROM gaez.f_suipy_main_py_six_scxc;\r\n\r\n125595932\r\n\r\n> \r\n> 3) What amount of data are we talking here? 125 million rows could be\r\n> 10MB\r\n> or 10GB, hard to guess.\r\n\r\npg_size_pretty tells me: 'f_suipy_main_py_six_scxc';'21 GB'\r\n\r\n> \r\n> Tomas\r\n\r\ng\r\n\r\nps: excellent benchmarking work on your site :)\r\n",
"msg_date": "Mon, 31 Oct 2011 14:52:21 +0100",
"msg_from": "\"Sorbara, Giorgio (CIOK)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange query plan"
},
{
"msg_contents": "On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)\n<[email protected]> wrote:\n> Group (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)\n> -> Append (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)\n> -> Index Scan using f_suipy_pkey on f_suipy (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)\n> Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)\n> -> Seq Scan on f_suipy_main_py_six_scxc f_suipy (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)\n> Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)\n> Total runtime: 550943.699 ms\n\nHow fast do you expect this to run? It's aggregating 125 million\nrows, so that's going to take some time no matter how you slice it.\nUnless I'm misreading this, it's actually taking only about 4\nmicroseconds per row, which does not obviously suck.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 4 Nov 2011 12:06:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: Robert Haas [mailto:[email protected]]\n> Sent: 04 November 2011 5:07 PM\n> To: Sorbara, Giorgio (CIOK)\n> Cc: Tomas Vondra; [email protected]\n> Subject: Re: [PERFORM] Strange query plan\n> \n> On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)\n> <[email protected]> wrote:\n> > Group (cost=0.00..4674965.80 rows=200 width=17) (actual\n> time=13.375..550943.592 rows=1 loops=1)\n> > -> Append (cost=0.00..4360975.94 rows=125595945 width=17) (actual\n> time=13.373..524324.817 rows=125595932 loops=1)\n> > -> Index Scan using f_suipy_pkey on f_suipy\n> (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0\n> loops=1)\n> > Index Cond: ((fk_theme)::text =\n> 'main_py_six_scxc'::text)\n> > -> Seq Scan on f_suipy_main_py_six_scxc f_suipy\n> (cost=0.00..4360970.30 rows=125595944 width=17) (actual\n> time=13.352..495259.117 rows=125595932 loops=1)\n> > Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)\n> > Total runtime: 550943.699 ms\n> \n> How fast do you expect this to run? It's aggregating 125 million\n> rows, so that's going to take some time no matter how you slice it.\n> Unless I'm misreading this, it's actually taking only about 4\n> microseconds per row, which does not obviously suck.\n\nWell, the problem is not how fast it takes to process one row rather the best query plan I am supposed to get. I don't mean the planer is wrong but I was expecting a feature is not there (yet).\nWe don't have pure index scan. Fair enough. so I have approached the problem in a different way: getting rid of the degenerated dimensions and exploiting \"useless\" dimension table.\nIt's a workaround but it actually seems to work :) now I have a ~350 millions fact table and no partition but I am happy to get the data I want in 1 sec or less.\n\n\n> \n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 4 Nov 2011 17:14:48 +0100",
"msg_from": "\"Sorbara, Giorgio (CIOK)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange query plan"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 12:14 PM, Sorbara, Giorgio (CIOK)\n<[email protected]> wrote:\n>> How fast do you expect this to run? It's aggregating 125 million\n>> rows, so that's going to take some time no matter how you slice it.\n>> Unless I'm misreading this, it's actually taking only about 4\n>> microseconds per row, which does not obviously suck.\n>\n> Well, the problem is not how fast it takes to process one row rather the best query plan I am supposed to get. I don't mean the planer is wrong but I was expecting a feature is not there (yet).\n> We don't have pure index scan. Fair enough. so I have approached the problem in a different way: getting rid of the degenerated dimensions and exploiting \"useless\" dimension table.\n> It's a workaround but it actually seems to work :) now I have a ~350 millions fact table and no partition but I am happy to get the data I want in 1 sec or less.\n\nAm I misreading the EXPLAIN ANALYZE output? I'm reading that to say\nthat there were 125 million rows in the table that matched your filter\ncondition. If that's correct, I don't think index-only scans (which\nwill be in 9.2) are going to help you much - it might be faster, but\nit's definitely not going to be anything like instantaneous.\n\nOn the flip side, if I *am* misreading the output and the number of\nrows needed to compute the aggregate is actually some very small\nnumber, then you ought to be getting an index scan even in older\nversions.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 4 Nov 2011 14:49:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan"
}
] |
[
{
"msg_contents": "Hi there,\n\nI have the function:\nCREATE OR REPLACE FUNCTION \"Test\"( ... )\nRETURNS SETOF record AS\n$BODY$\nBEGIN\n RETURN QUERY\n SELECT ...;\nEND;\n$BODY$\nLANGUAGE 'plpgsql' STABLE\n\nThe function call takes about 5 minute to proceed, but using directly its\nquery statement, after replacing the arguments with the same values, it\ntakes just 5 seconds !\n\nI repeat the test several times and the duration is the same.\n\nWhat is wrong ?\n\nPlease note Postgresql version is \"PostgreSQL 8.3.5, compiled by Visual C++\nbuild 1400\". I used ANALYZE, and my query / function returns about 150 rows.\nI made the tests in pgAdmin query windows.\n\nTIA,\nSabin\n\n\n",
"msg_date": "Tue, 1 Nov 2011 16:01:03 +0200",
"msg_from": "Sabin Coanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "procedure takes much more time than its query statement"
},
{
"msg_contents": "The most common reason for this (not specific to PG) is that the function\nis getting compiled without the substituted constants, and the query plan\nis generic, whereas with specific values it is able to use column\nstatistics to pick a more efficient one.\nOn Nov 1, 2011 8:16 PM, \"Sabin Coanda\" <[email protected]> wrote:\n\n> Hi there,\n>\n> I have the function:\n> CREATE OR REPLACE FUNCTION \"Test\"( ... )\n> RETURNS SETOF record AS\n> $BODY$\n> BEGIN\n> RETURN QUERY\n> SELECT ...;\n> END;\n> $BODY$\n> LANGUAGE 'plpgsql' STABLE\n>\n> The function call takes about 5 minute to proceed, but using directly its\n> query statement, after replacing the arguments with the same values, it\n> takes just 5 seconds !\n>\n> I repeat the test several times and the duration is the same.\n>\n> What is wrong ?\n>\n> Please note Postgresql version is \"PostgreSQL 8.3.5, compiled by Visual C++\n> build 1400\". I used ANALYZE, and my query / function returns about 150\n> rows.\n> I made the tests in pgAdmin query windows.\n>\n> TIA,\n> Sabin\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe most common reason for this (not specific to PG) is that the function is getting compiled without the substituted constants, and the query plan is generic, whereas with specific values it is able to use column statistics to pick a more efficient one.\nOn Nov 1, 2011 8:16 PM, \"Sabin Coanda\" <[email protected]> wrote:\nHi there,\n\nI have the function:\nCREATE OR REPLACE FUNCTION \"Test\"( ... )\nRETURNS SETOF record AS\n$BODY$\nBEGIN\n RETURN QUERY\n SELECT ...;\nEND;\n$BODY$\nLANGUAGE 'plpgsql' STABLE\n\nThe function call takes about 5 minute to proceed, but using directly its\nquery statement, after replacing the arguments with the same values, it\ntakes just 5 seconds !\n\nI repeat the test several times and the duration is the same.\n\nWhat is wrong ?\n\nPlease note Postgresql version is \"PostgreSQL 8.3.5, compiled by Visual C++\nbuild 1400\". I used ANALYZE, and my query / function returns about 150 rows.\nI made the tests in pgAdmin query windows.\n\nTIA,\nSabin\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 1 Nov 2011 20:21:38 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: procedure takes much more time than its query statement"
},
{
"msg_contents": "On 11/01/2011 10:01 PM, Sabin Coanda wrote:\n> Hi there,\n>\n> I have the function:\n> CREATE OR REPLACE FUNCTION \"Test\"( ... )\n> RETURNS SETOF record AS\n> $BODY$\n> BEGIN\n> RETURN QUERY\n> SELECT ...;\n> END;\n> $BODY$\n> LANGUAGE 'plpgsql' STABLE\n>\n> The function call takes about 5 minute to proceed, but using directly its\n> query statement, after replacing the arguments with the same values, it\n> takes just 5 seconds !\n>\n> I repeat the test several times and the duration is the same.\n>\n> What is wrong ?\n>\nIs it also slow if, outside PL/PgSQL in a regular psql session, you \nPREPARE the query, then EXECUTE it?\n\nIf so, you're being bitten by a generic query plan. The server does a \nbetter job when it knows what parameter is used when it's planning the \nstatement. To work around it, you can use the PL/PgSQL 'EXECUTE ... \nUSING ...' statement to force a re-plan of the statement for every time \nit's run.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 02 Nov 2011 10:21:20 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: procedure takes much more time than its query statement"
},
{
"msg_contents": "Excelent !\nYou are right\nThanks a lot\nSabin\n\n\"Craig Ringer\" <[email protected]> wrote in message \nnews:[email protected]...\n> On 11/01/2011 10:01 PM, Sabin Coanda wrote:\n>> Hi there,\n>>\n>> I have the function:\n>> CREATE OR REPLACE FUNCTION \"Test\"( ... )\n>> RETURNS SETOF record AS\n>> $BODY$\n>> BEGIN\n>> RETURN QUERY\n>> SELECT ...;\n>> END;\n>> $BODY$\n>> LANGUAGE 'plpgsql' STABLE\n>>\n>> The function call takes about 5 minute to proceed, but using directly its\n>> query statement, after replacing the arguments with the same values, it\n>> takes just 5 seconds !\n>>\n>> I repeat the test several times and the duration is the same.\n>>\n>> What is wrong ?\n>>\n> Is it also slow if, outside PL/PgSQL in a regular psql session, you \n> PREPARE the query, then EXECUTE it?\n>\n> If so, you're being bitten by a generic query plan. The server does a \n> better job when it knows what parameter is used when it's planning the \n> statement. To work around it, you can use the PL/PgSQL 'EXECUTE ... USING \n> ...' statement to force a re-plan of the statement for every time it's \n> run.\n>\n> --\n> Craig Ringer\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n",
"msg_date": "Wed, 2 Nov 2011 10:15:45 +0200",
"msg_from": "\"Sabin Coanda\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: procedure takes much more time than its query statement"
}
] |
[
{
"msg_contents": "Hello list,\n\nA OCZ Vertex 2 PRO and Intel 710 SSD, both 100GB, in a software raid 1 \nsetup. I was pretty convinced this was the perfect solution to run \nPostgreSQL on SSDs without a IO controller with BBU. No worries for \nstrange firmware bugs because of two different drives, good write \nendurance of the 710. Access to the smart attributes. Complete control \nover the disks: nothing hidden by a hardware raid IO layer.\n\nThen I did a pgbench test:\n- bigger than RAM test (~30GB database with 24GB ram)\n- and during that test I removed the Intel 710.\n- during the test I removed the 710 and 10 minutes later inserted it \nagain and added it to the array.\n\nThe pgbench transaction latency graph is here: http://imgur.com/JSdQd\n\nWith only the OCZ, latencies are acceptable but with two drives, there \nare latencies up to 3 seconds! (and 11 seconds at disk remove time) Is \nthis due to software raid, or is it the Intel 710? To figure that out I \nrepeated the test, but now removing the OCZ, latency graph at: \nhttp://imgur.com/DQa59 (The 12 seconds maximum was at disk remove time.)\n\nSo the Intel 710 kind of sucks latency wise. Is it because it is also \nheavily reading, and maybe WAL should not be put on it?\n\nI did another test, same as before but\n* with 5GB database completely fitting in RAM (24GB)\n* put WAL on a ramdisk\n* started on the mirror\n* during the test mdadm --fail on the Intel SSD\n\nLatency graph is at: http://imgur.com/dY0Rk\n\nSo still: with Intel 710 participating in writes (beginning of graph), \nsome latencies are over 2 seconds, with only the OCZ, max write \nlatencies are near 300ms.\n\nI'm now contemplating not using the 710 at all. Why should I not buy two \n6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) \nwith a IO controller+BBU?\n\nBenefits: should be faster for all kinds of reads and writes.\nConcerns: TRIM becomes impossible (which was already impossible with md \nraid1, lvm / dm based mirroring could work) but is TRIM important for a \nPostgreSQL io load, without e.g. routine TRUNCATES? Also the write \nendurance of these drives is probably a lot less than previous setup.\n\nThoughts, ideas are highly appreciated!\n-- Yeb\n\nPS:\nI checked for proper alignment of partitions as well as md's data \noffsett, all was well.\nExt4 filesystem mounted with barrier=0\n/proc/sys/vm/dirty_background_bytes set to 178500000\n\n\n",
"msg_date": "Wed, 02 Nov 2011 14:05:06 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intel 710 pgbench write latencies"
},
{
"msg_contents": "Yeb Havinga <[email protected]> wrote:\n \n> I'm now contemplating not using the 710 at all. Why should I not\n> buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex\n> 3 Max IOPS) with a IO controller+BBU?\n \nWouldn't the data be subject to loss between the time the IO\ncontroller writes to the SSD and the time it makes it from buffers\nto flash RAM?\n \n-Kevin\n",
"msg_date": "Wed, 02 Nov 2011 09:06:34 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga <[email protected]> wrote:\n> Hello list,\n>\n> A OCZ Vertex 2 PRO and Intel 710 SSD, both 100GB, in a software raid 1\n> setup. I was pretty convinced this was the perfect solution to run\n> PostgreSQL on SSDs without a IO controller with BBU. No worries for strange\n> firmware bugs because of two different drives, good write endurance of the\n> 710. Access to the smart attributes. Complete control over the disks:\n> nothing hidden by a hardware raid IO layer.\n>\n> Then I did a pgbench test:\n> - bigger than RAM test (~30GB database with 24GB ram)\n> - and during that test I removed the Intel 710.\n> - during the test I removed the 710 and 10 minutes later inserted it again\n> and added it to the array.\n>\n> The pgbench transaction latency graph is here: http://imgur.com/JSdQd\n>\n> With only the OCZ, latencies are acceptable but with two drives, there are\n> latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this due\n> to software raid, or is it the Intel 710? To figure that out I repeated the\n> test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59\n> (The 12 seconds maximum was at disk remove time.)\n>\n> So the Intel 710 kind of sucks latency wise. Is it because it is also\n> heavily reading, and maybe WAL should not be put on it?\n>\n> I did another test, same as before but\n> * with 5GB database completely fitting in RAM (24GB)\n> * put WAL on a ramdisk\n> * started on the mirror\n> * during the test mdadm --fail on the Intel SSD\n>\n> Latency graph is at: http://imgur.com/dY0Rk\n>\n> So still: with Intel 710 participating in writes (beginning of graph), some\n> latencies are over 2 seconds, with only the OCZ, max write latencies are\n> near 300ms.\n>\n> I'm now contemplating not using the 710 at all. Why should I not buy two\n> 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with\n> a IO controller+BBU?\n>\n> Benefits: should be faster for all kinds of reads and writes.\n> Concerns: TRIM becomes impossible (which was already impossible with md\n> raid1, lvm / dm based mirroring could work) but is TRIM important for a\n> PostgreSQL io load, without e.g. routine TRUNCATES? Also the write endurance\n> of these drives is probably a lot less than previous setup.\n\nsoftware RAID (mdadm) is currently blocking TRIM. the only way to to\nget TRIM in a raid-ish environment is through LVM mirroring/striping\nor w/brtfs raid (which is not production ready afaik).\n\nGiven that, if you do use software raid, it's not a good idea to\npartition the entire drive because the very first thing the raid\ndriver does is write to the entire device.\n\nI would keep at least 20-30% of both drives unpartitioned to leave the\ncontroller room to wear level and as well as other stuff. I'd try\nwiping the drives, reparititoing, and repeating your test. I would\nalso compare times through mdadm and directly to the device.\n\nmerlin\n",
"msg_date": "Wed, 2 Nov 2011 09:26:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 2011-11-02 15:06, Kevin Grittner wrote:\n> Yeb Havinga<[email protected]> wrote:\n>\n>> I'm now contemplating not using the 710 at all. Why should I not\n>> buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex\n>> 3 Max IOPS) with a IO controller+BBU?\n>\n> Wouldn't the data be subject to loss between the time the IO\n> controller writes to the SSD and the time it makes it from buffers\n> to flash RAM?\n\nGood question. My guess would be no, if the raid controller does \n'write-throughs' on the attached disks, and the SSD's don't lie about \nwhen they've written to RAM.\n\nI'll put this on my to test list for the new setup.\n\n-- Yeb\n\n",
"msg_date": "Wed, 02 Nov 2011 16:04:30 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On Wed, Nov 2, 2011 at 16:04, Yeb Havinga <[email protected]> wrote:\n> On 2011-11-02 15:06, Kevin Grittner wrote:\n>>\n>> Yeb Havinga<[email protected]> wrote:\n>>\n>>> I'm now contemplating not using the 710 at all. Why should I not\n>>> buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex\n>>> 3 Max IOPS) with a IO controller+BBU?\n>>\n>> Wouldn't the data be subject to loss between the time the IO\n>> controller writes to the SSD and the time it makes it from buffers\n>> to flash RAM?\n>\n> Good question. My guess would be no, if the raid controller does\n> 'write-throughs' on the attached disks, and the SSD's don't lie about when\n> they've written to RAM.\n\nDoesn't most SSDs without supercaps lie about the writes, though?\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Wed, 2 Nov 2011 16:06:44 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 2011-11-02 15:26, Merlin Moncure wrote:\n> On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga<[email protected]> wrote:\n>> Hello list,\n>>\n>> A OCZ Vertex 2 PRO and Intel 710 SSD, both 100GB, in a software raid 1\n>> setup. I was pretty convinced this was the perfect solution to run\n>> PostgreSQL on SSDs without a IO controller with BBU. No worries for strange\n>> firmware bugs because of two different drives, good write endurance of the\n>> 710. Access to the smart attributes. Complete control over the disks:\n>> nothing hidden by a hardware raid IO layer.\n>>\n>> Then I did a pgbench test:\n>> - bigger than RAM test (~30GB database with 24GB ram)\n>> - and during that test I removed the Intel 710.\n>> - during the test I removed the 710 and 10 minutes later inserted it again\n>> and added it to the array.\n>>\n>> The pgbench transaction latency graph is here: http://imgur.com/JSdQd\n>>\n>> With only the OCZ, latencies are acceptable but with two drives, there are\n>> latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this due\n>> to software raid, or is it the Intel 710? To figure that out I repeated the\n>> test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59\n>> (The 12 seconds maximum was at disk remove time.)\n>>\n>> So the Intel 710 kind of sucks latency wise. Is it because it is also\n>> heavily reading, and maybe WAL should not be put on it?\n>>\n>> I did another test, same as before but\n>> * with 5GB database completely fitting in RAM (24GB)\n>> * put WAL on a ramdisk\n>> * started on the mirror\n>> * during the test mdadm --fail on the Intel SSD\n>>\n>> Latency graph is at: http://imgur.com/dY0Rk\n>>\n>> So still: with Intel 710 participating in writes (beginning of graph), some\n>> latencies are over 2 seconds, with only the OCZ, max write latencies are\n>> near 300ms.\n>>\n>> I'm now contemplating not using the 710 at all. Why should I not buy two\n>> 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with\n>> a IO controller+BBU?\n>>\n>> Benefits: should be faster for all kinds of reads and writes.\n>> Concerns: TRIM becomes impossible (which was already impossible with md\n>> raid1, lvm / dm based mirroring could work) but is TRIM important for a\n>> PostgreSQL io load, without e.g. routine TRUNCATES? Also the write endurance\n>> of these drives is probably a lot less than previous setup.\n> software RAID (mdadm) is currently blocking TRIM. the only way to to\n> get TRIM in a raid-ish environment is through LVM mirroring/striping\n> or w/brtfs raid (which is not production ready afaik).\n>\n> Given that, if you do use software raid, it's not a good idea to\n> partition the entire drive because the very first thing the raid\n> driver does is write to the entire device.\n\nIf that is bad because of a decreased lifetime, I don't think these \nnumber of writes are significant - in a few hours of pgbenching I the \nGBs written are more than 10 times the GB sizes of the drives. Or do you \nsuggest this because then the disk firmware can operate assuming a \nsmaller idema capacity, thereby proloning the drive life? (i.e. the \nIntel 710 200GB has 200GB idema capacity but 320GB raw flash).\n\n> I would keep at least 20-30% of both drives unpartitioned to leave the\n> controller room to wear level and as well as other stuff. I'd try\n> wiping the drives, reparititoing, and repeating your test. I would\n> also compare times through mdadm and directly to the device.\n\nGood idea.\n\n-- Yeb\n\n",
"msg_date": "Wed, 02 Nov 2011 16:16:07 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "\n>\n> So the Intel 710 kind of sucks latency wise. Is it because it is also \n> heavily reading, and maybe WAL should not be put on it?\n\nA couple quick thoughts:\n\n1. There are a lot of moving parts in the system besides the SSDs.\nIt will take some detailed analysis to determine the cause for the\noutlying high latency transactions. The cause may not be as simple\nas one SSD processes I/O operations less quickly than another.\nFor example the system may be subject to some sort of\nstarvation issue in PG or the OS that is affected by quite\nsmall differences in underlying storage performance.\n\n2. What are your expectations for maximum transaction latency ?\nIn my experience it is not possible to guarantee sub-second\n(or even sub-multi-second) latencies overall in a system\nbuilt with general purpose OS and database software.\n(put another way : a few outlying 1 second and even\nseveral-second transactions would be pretty much what\nI'd expect to see on a database under sustained saturation\nload as experienced under a pgbench test).\n\n\n\n",
"msg_date": "Wed, 02 Nov 2011 09:20:45 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On Wed, Nov 2, 2011 at 10:16 AM, Yeb Havinga <[email protected]> wrote:\n> On 2011-11-02 15:26, Merlin Moncure wrote:\n>>\n>> On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga<[email protected]> wrote:\n>>>\n>>> Hello list,\n>>>\n>>> A OCZ Vertex 2 PRO and Intel 710 SSD, both 100GB, in a software raid 1\n>>> setup. I was pretty convinced this was the perfect solution to run\n>>> PostgreSQL on SSDs without a IO controller with BBU. No worries for\n>>> strange\n>>> firmware bugs because of two different drives, good write endurance of\n>>> the\n>>> 710. Access to the smart attributes. Complete control over the disks:\n>>> nothing hidden by a hardware raid IO layer.\n>>>\n>>> Then I did a pgbench test:\n>>> - bigger than RAM test (~30GB database with 24GB ram)\n>>> - and during that test I removed the Intel 710.\n>>> - during the test I removed the 710 and 10 minutes later inserted it\n>>> again\n>>> and added it to the array.\n>>>\n>>> The pgbench transaction latency graph is here: http://imgur.com/JSdQd\n>>>\n>>> With only the OCZ, latencies are acceptable but with two drives, there\n>>> are\n>>> latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this\n>>> due\n>>> to software raid, or is it the Intel 710? To figure that out I repeated\n>>> the\n>>> test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59\n>>> (The 12 seconds maximum was at disk remove time.)\n>>>\n>>> So the Intel 710 kind of sucks latency wise. Is it because it is also\n>>> heavily reading, and maybe WAL should not be put on it?\n>>>\n>>> I did another test, same as before but\n>>> * with 5GB database completely fitting in RAM (24GB)\n>>> * put WAL on a ramdisk\n>>> * started on the mirror\n>>> * during the test mdadm --fail on the Intel SSD\n>>>\n>>> Latency graph is at: http://imgur.com/dY0Rk\n>>>\n>>> So still: with Intel 710 participating in writes (beginning of graph),\n>>> some\n>>> latencies are over 2 seconds, with only the OCZ, max write latencies are\n>>> near 300ms.\n>>>\n>>> I'm now contemplating not using the 710 at all. Why should I not buy two\n>>> 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS)\n>>> with\n>>> a IO controller+BBU?\n>>>\n>>> Benefits: should be faster for all kinds of reads and writes.\n>>> Concerns: TRIM becomes impossible (which was already impossible with md\n>>> raid1, lvm / dm based mirroring could work) but is TRIM important for a\n>>> PostgreSQL io load, without e.g. routine TRUNCATES? Also the write\n>>> endurance\n>>> of these drives is probably a lot less than previous setup.\n>>\n>> software RAID (mdadm) is currently blocking TRIM. the only way to to\n>> get TRIM in a raid-ish environment is through LVM mirroring/striping\n>> or w/brtfs raid (which is not production ready afaik).\n>>\n>> Given that, if you do use software raid, it's not a good idea to\n>> partition the entire drive because the very first thing the raid\n>> driver does is write to the entire device.\n>\n> If that is bad because of a decreased lifetime, I don't think these number\n> of writes are significant - in a few hours of pgbenching I the GBs written\n> are more than 10 times the GB sizes of the drives. Or do you suggest this\n> because then the disk firmware can operate assuming a smaller idema\n> capacity, thereby proloning the drive life? (i.e. the Intel 710 200GB has\n> 200GB idema capacity but 320GB raw flash).\n\nIt's bad because the controller thinks all the data is 'live' -- that\nis, important. 
When all the data on the drive is live, the fancy\ntricks the controller pulls to do intelligent wear leveling and to get\nfast write times become much more difficult, which in turn leads to\nmore write amplification and early burnout. Supposedly, the 710 has\nextra space anyway, which is probably there specifically to ameliorate\nthe raid issue as well as extend lifespan, but I'm still curious how\nthis works out.\n\nmerlin\n",
"msg_date": "Wed, 2 Nov 2011 15:01:27 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 2011-11-02 16:16, Yeb Havinga wrote:\n> On 2011-11-02 15:26, Merlin Moncure wrote:\n>\n>> I would keep at least 20-30% of both drives unpartitioned to leave the\n>> controller room to wear level and as well as other stuff. I'd try\n>> wiping the drives, reparititoing, and repeating your test. I would\n>> also compare times through mdadm and directly to the device.\n>\n> Good idea.\n\nReinstalled system - > 50% drives unpartitioned.\n/dev/sdb3 19G 5.0G 13G 29% /ocz\n/dev/sda3 19G 4.8G 13G 28% /intel\n/dev/sdb3 on /ocz type ext4 (rw,noatime,nobarrier,discard)\n/dev/sda3 on /intel type ext4 (rw,noatime,nobarrier,discard)\n\nAgain WAL was put in a ramdisk.\n\npgbench -i -s 300 t # fits in ram\npgbench -c 20 -M prepared -T 300 -l t\n\nIntel latency graph at http://imgur.com/Hh3xI\nOcz latency graph at http://imgur.com/T09LG\n\n\n",
"msg_date": "Wed, 02 Nov 2011 21:45:05 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
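The latency graphs in this thread come from pgbench's per-transaction log (the -l switch in the commands above). A minimal sketch of aggregating that log in SQL rather than with a plotting script; the six-column layout assumed here (client id, transaction counter, latency in microseconds, log file number, epoch seconds, epoch microseconds) is the per-transaction format pgbench used in this era, and the file name is hypothetical, so adjust both if your build's log differs:

CREATE TABLE pgbench_log (
    client     int,
    tx_no      bigint,
    latency_us bigint,   -- per-transaction latency in microseconds
    file_no    int,
    epoch_sec  bigint,
    epoch_usec int
);

-- server-side COPY shown for brevity; psql's \copy works without superuser
COPY pgbench_log FROM '/tmp/pgbench_log.12345' WITH (FORMAT text, DELIMITER ' ');

SELECT epoch_sec,
       max(latency_us) / 1000.0           AS max_ms,
       round(avg(latency_us) / 1000.0, 2) AS avg_ms,
       count(*)                           AS tps
FROM pgbench_log
GROUP BY epoch_sec
ORDER BY epoch_sec;

This yields the worst-case and average latency per second, the same shape as the graphs linked above.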
{
"msg_contents": "On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga <[email protected]> wrote:\n> On 2011-11-02 16:16, Yeb Havinga wrote:\n>>\n>> On 2011-11-02 15:26, Merlin Moncure wrote:\n>>\n>>> I would keep at least 20-30% of both drives unpartitioned to leave the\n>>> controller room to wear level and as well as other stuff. I'd try\n>>> wiping the drives, reparititoing, and repeating your test. I would\n>>> also compare times through mdadm and directly to the device.\n>>\n>> Good idea.\n>\n> Reinstalled system - > 50% drives unpartitioned.\n> /dev/sdb3 19G 5.0G 13G 29% /ocz\n> /dev/sda3 19G 4.8G 13G 28% /intel\n> /dev/sdb3 on /ocz type ext4 (rw,noatime,nobarrier,discard)\n> /dev/sda3 on /intel type ext4 (rw,noatime,nobarrier,discard)\n>\n> Again WAL was put in a ramdisk.\n>\n> pgbench -i -s 300 t # fits in ram\n> pgbench -c 20 -M prepared -T 300 -l t\n>\n> Intel latency graph at http://imgur.com/Hh3xI\n> Ocz latency graph at http://imgur.com/T09LG\n\ncurious: what were the pgbench results in terms of tps?\n\nmerlin\n",
"msg_date": "Wed, 2 Nov 2011 16:08:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "Your results are consistent with the benchmarks I've seen. Intel SSD have much worse write performance compared to SSD that uses Sandforce controllers, which Vertex 2 Pro does.\n\nAccording to this benchmark, at high queue depth the random write performance of Sandforce is more than 5 times that of Intel 710:\nhttp://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4\n\n\nWhy don't you just use two Vertex 2 Pro in sw RAID1? It should give you good write performance.\n\n>Why should I not buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with a IO controller+BBU?\nBecause it that case you'll lose data whenever you have a power loss. Without capacitors data written to the SSD is not durable.\n\n\n________________________________\nFrom: Yeb Havinga <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSent: Wednesday, November 2, 2011 9:05 AM\nSubject: [PERFORM] Intel 710 pgbench write latencies\n\nHello list,\n\nA OCZ Vertex 2 PRO and Intel 710 SSD, both 100GB, in a software raid 1 setup. I was pretty convinced this was the perfect solution to run PostgreSQL on SSDs without a IO controller with BBU. No worries for strange firmware bugs because of two different drives, good write endurance of the 710. Access to the smart attributes. Complete control over the disks: nothing hidden by a hardware raid IO layer.\n\nThen I did a pgbench test:\n- bigger than RAM test (~30GB database with 24GB ram)\n- and during that test I removed the Intel 710.\n- during the test I removed the 710 and 10 minutes later inserted it again and added it to the array.\n\nThe pgbench transaction latency graph is here: http://imgur.com/JSdQd\n\nWith only the OCZ, latencies are acceptable but with two drives, there are latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this due to software raid, or is it the Intel 710? To figure that out I repeated the test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59 (The 12 seconds maximum was at disk remove time.)\n\nSo the Intel 710 kind of sucks latency wise. Is it because it is also heavily reading, and maybe WAL should not be put on it?\n\nI did another test, same as before but\n* with 5GB database completely fitting in RAM (24GB)\n* put WAL on a ramdisk\n* started on the mirror\n* during the test mdadm --fail on the Intel SSD\n\nLatency graph is at: http://imgur.com/dY0Rk\n\nSo still: with Intel 710 participating in writes (beginning of graph), some latencies are over 2 seconds, with only the OCZ, max write latencies are near 300ms.\n\nI'm now contemplating not using the 710 at all. Why should I not buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with a IO controller+BBU?\n\nBenefits: should be faster for all kinds of reads and writes.\nConcerns: TRIM becomes impossible (which was already impossible with md raid1, lvm / dm based mirroring could work) but is TRIM important for a PostgreSQL io load, without e.g. routine TRUNCATES? 
Also the write endurance of these drives is probably a lot less than previous setup.\n\nThoughts, ideas are highly appreciated!\n-- Yeb\n\nPS:\nI checked for proper alignment of partitions as well as md's data offsett, all was well.\nExt4 filesystem mounted with barrier=0\n/proc/sys/vm/dirty_background_bytes set to 178500000\n\n\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 2 Nov 2011 22:56:00 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 2011-11-02 22:08, Merlin Moncure wrote:\n> On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga<[email protected]> wrote:\n>> Intel latency graph at http://imgur.com/Hh3xI\n>> Ocz latency graph at http://imgur.com/T09LG\n> curious: what were the pgbench results in terms of tps?\n>\n> merlin\n\nBoth comparable near 10K tps.\n\n-- Yeb\n\n",
"msg_date": "Thu, 03 Nov 2011 10:38:49 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On Thu, Nov 3, 2011 at 4:38 AM, Yeb Havinga <[email protected]> wrote:\n> On 2011-11-02 22:08, Merlin Moncure wrote:\n>>\n>> On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga<[email protected]> wrote:\n>>>\n>>> Intel latency graph at http://imgur.com/Hh3xI\n>>> Ocz latency graph at http://imgur.com/T09LG\n>>\n>> curious: what were the pgbench results in terms of tps?\n>>\n>> merlin\n>\n> Both comparable near 10K tps.\n\nWell, and this is just me, I'd probably stick with the 710, but that's\nbased on my understanding of things on paper, not real world\nexperience with that drive. The vertex 2 is definitely a more\nreliable performer, but it looks like the results in your graph are\nmostly skewed by a few outlying data points. If the 710 can has the\nwrite durability that intel is advertising, then ISTM that is one less\nthing to think about. My one experience with the vertex 2 pro was\nthat it was certainly fast but burned out just shy of the 10k write\ncycle point after all the numbers were crunched. This is just too\nclose for comfort on databases that are doing a lot of writing.\n\nNote that either drive is giving you the performance of somewhere\nbetween a 40 and 60 drive tray of 15k drives configured in a raid 10\n(once you overflow the write cache on the raid controller(s)). It\nwould take a pretty impressive workload indeed to become i/o bound\nwith either one of these drives...high scale pgbench is fairly\npathological.\n\nmerlin\n",
"msg_date": "Thu, 3 Nov 2011 08:59:05 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 11/03/2011 04:38 AM, Yeb Havinga wrote:\n\n> Both comparable near 10K tps.\n\nThat's another thing I was wondering about. Why are we talking about \nVertex 2 Pro's, anyway? The Vertex 3 Pros post much better results and \nare still capacitor-backed.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 3 Nov 2011 09:31:58 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 2011-11-03 15:31, Shaun Thomas wrote:\n> On 11/03/2011 04:38 AM, Yeb Havinga wrote:\n>\n>> Both comparable near 10K tps.\n>\n> That's another thing I was wondering about. Why are we talking about \n> Vertex 2 Pro's, anyway? The Vertex 3 Pros post much better results and \n> are still capacitor-backed.\n>\n\nNot for sale yet..\n\n-- Yeb\n\n",
"msg_date": "Thu, 03 Nov 2011 15:40:56 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 710 pgbench write latencies"
},
{
"msg_contents": "On 2011-11-02 16:06, Magnus Hagander wrote:\n> On Wed, Nov 2, 2011 at 16:04, Yeb Havinga<[email protected]> wrote:\n>> On 2011-11-02 15:06, Kevin Grittner wrote:\n>>> Yeb Havinga<[email protected]> wrote:\n>>>\n>>>> I'm now contemplating not using the 710 at all. Why should I not\n>>>> buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex\n>>>> 3 Max IOPS) with a IO controller+BBU?\n>>> Wouldn't the data be subject to loss between the time the IO\n>>> controller writes to the SSD and the time it makes it from buffers\n>>> to flash RAM?\n>> Good question. My guess would be no, if the raid controller does\n>> 'write-throughs' on the attached disks, and the SSD's don't lie about when\n>> they've written to RAM.\n> Doesn't most SSDs without supercaps lie about the writes, though?\n>\n\nI happened to have a Vertex 3, no supercap, available to test this with \ndiskchecker. On a ext4 filesystem (just mounted with noatime, not \nbarriers=off), this happenend:\n\n# /root/diskchecker.pl -s 192.168.73.1 verify testfile\n verifying: 0.00%\n verifying: 30.67%\n verifying: 78.97%\n verifying: 100.00%\nTotal errors: 0\n\nSo I guess that's about as much as I can test without actually hooking \nit behind a hardware controller and test that. I will soon test the \n3ware 9750 with Vertex 3 and Intel 510 - both in the 3ware's ssd \ncompatibility list.\n\nMore info from testing software raid 1:\n- with lvm mirroring, discards / trim go through to the disks. This is \nwhere the Intel is fast enough, but the vertex 2 pro is busy for ~ 10 \nseconds.\n\n-- Yeb\n\n",
"msg_date": "Thu, 03 Nov 2011 16:52:48 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel 710 pgbench write latencies"
}
] |
[
{
"msg_contents": "As I've come up to speed on SQL and PostgreSQL with some\nmedium-complexity queries, I've asked a few questions about what the\noptimizer will do in various situations. I'm not talking about the\nseq-scan-vs-index type of optimizing; I mean \"transforming within the\nrelational calculus (algebra?) to an equivalent but more performant\nquery\". The same topics come up:\n\n- Flattening. I think that means \"Merge the intent of the subquery\ninto the various clauses of the parent query\".\n\n- Inlining. That's \"Don't run this function/subquery/view as an atomic\nunit; instead, push it up into the parent query so the optimizer can\nsee it all at once.\" Maybe that's the same as flattening.\n\n- Predicate pushdown. That's \"This subquery produces a lot of rows,\nbut the parent query has a WHERE clause that will eliminate half of\nthem, so don't produce the unnecessary rows.\"\n\nAm I right so far? Now, the big question, which I haven't seen\ndocumented anywhere: Under what circumstances can the optimizer do\neach of these things?\n\nFor instance, I have a complex query that calculates the similarity of\none user to every other user. The output is two columns, one row per\nuser:\n\n select * from similarity(my_user_id);\n\n other_user | similarity%\n -----------|-------------\n 123 | 99\n\nBeing a novice at SQL, I first wrote it in PL/pgSQL, so I could stay\nin my imperative, iterative head. The query performed decently well\nwhen scanning the whole table, but when I only wanted to compare\nmyself to a single user, I said:\n\n select * from similarity(my_user_id) as s where s.other_user = 321;\n\nAnd, of course, similarity() produced the whole table anyway, because\npredicates don't get pushed down into PL/pgSQL functions.\n\nSo I went and rewrote similarity as a SQL function, but I still didn't\nwant one big hairy SQL query. Ah ha! CTEs let you write modular\nsubqueries, and you also avoid problems with lack of LATERAL. I'll use\nthose.\n\n.. But of course predicates don't get pushed into CTEs, either. (Or\nmaybe it was that they would, but only if they were inline with the\npredicate.. I forget now.)\n\nSo you can see where I'm going. I know if I break everything into\nelegant, composable functions, it'll continue to perform poorly. If I\nwrite one big hairy, it'll perform great but it will be difficult to\nmaintain, and it will be inelegant and a kitten will die. My tools\nare CTEs, subqueries, aliases, SQL functions, PL/pgSQL functions, and\nviews (and other tools?) What optimizations do each of those prevent?\n\nWe're on 9.0 now but will happily upgrade to 9.1 if that matters.\n\nJay Levitt\n",
"msg_date": "Wed, 2 Nov 2011 10:22:21 -0400",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Guide to PG's capabilities for inlining, predicate hoisting,\n\tflattening, etc?"
},
{
"msg_contents": "Jay Levitt <[email protected]> writes:\n> So you can see where I'm going. I know if I break everything into\n> elegant, composable functions, it'll continue to perform poorly. If I\n> write one big hairy, it'll perform great but it will be difficult to\n> maintain, and it will be inelegant and a kitten will die. My tools\n> are CTEs, subqueries, aliases, SQL functions, PL/pgSQL functions, and\n> views (and other tools?) What optimizations do each of those prevent?\n\nplpgsql functions are black boxes to the optimizer. If you can express\nyour functions as single SQL commands, using SQL-language functions is\nusually a better bet than plpgsql.\n\nCTEs are also treated as optimization fences; this is not so much an\noptimizer limitation as to keep the semantics sane when the CTE contains\na writable query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Nov 2011 10:38:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate hoisting,\n\tflattening, etc?"
},
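A minimal sketch of the difference Tom describes, with hypothetical table and function names (the body uses $1 because SQL-language functions on 9.0/9.1 only accept positional parameter references):

-- Single-statement SQL function: the planner can inline the body into the
-- calling query, so predicates written at the call site still reach the table.
CREATE FUNCTION similar_users(int)
RETURNS TABLE (other_user int, similarity numeric) AS $$
    SELECT s.other_user, s.similarity
    FROM user_similarity s
    WHERE s.user_id = $1
$$ LANGUAGE sql STABLE;

SELECT * FROM similar_users(42) WHERE other_user = 321;
-- after inlining this can use an index on (user_id, other_user)

The same body wrapped in plpgsql with RETURN QUERY is planned as an opaque unit: the full result set is materialized first and the outer other_user = 321 filter is applied afterwards.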
{
"msg_contents": "On Wed, Nov 2, 2011 at 10:38 AM, Tom Lane <[email protected]> wrote:\n> Jay Levitt <[email protected]> writes:\n>> So you can see where I'm going. I know if I break everything into\n>> elegant, composable functions, it'll continue to perform poorly. If I\n>> write one big hairy, it'll perform great but it will be difficult to\n>> maintain, and it will be inelegant and a kitten will die. My tools\n>> are CTEs, subqueries, aliases, SQL functions, PL/pgSQL functions, and\n>> views (and other tools?) What optimizations do each of those prevent?\n>\n> plpgsql functions are black boxes to the optimizer. If you can express\n> your functions as single SQL commands, using SQL-language functions is\n> usually a better bet than plpgsql.\n>\n> CTEs are also treated as optimization fences; this is not so much an\n> optimizer limitation as to keep the semantics sane when the CTE contains\n> a writable query.\n\nI wonder if we need to rethink, though. We've gotten a number of\nreports of problems that were caused by single-use CTEs not being\nequivalent - in terms of performance - to a non-CTE formulation of the\nsame idea. It seems necessary for CTEs to behave this way when the\nsubquery modifies data, and there are certainly situations where it\ncould be desirable otherwise, but I'm starting to think that we\nshouldn't do it that way by default. Perhaps we could let people say\nsomething like WITH x AS FENCE (...) when they want the fencing\nbehavior, and otherwise assume they don't (but give it to them anyway\nif there's a data-modifying operation in there).\n\nWhenever I give a talk on the query optimizer, I'm constantly telling\npeople to take logic out of functions and inline it, avoid CTEs, and\ngenerally merge everything into one big query. But as the OP says,\nthat is decidedly less than ideal from a code-beauty-and-maintenance\npoint of view: people WANT to be able to use syntactic sugar and still\nget good performance. Allowing for the insertion of optimization\nfences is good and important but it needs to be user-controllable\nbehavior.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 2 Nov 2011 11:13:09 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate\n\thoisting, flattening, etc?"
},
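For the single-use-CTE case under discussion, the manual rewrite usually looks like this (table and column names are hypothetical):

-- Fenced: the CTE aggregates every customer before the outer filter runs.
WITH order_totals AS (
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT * FROM order_totals WHERE customer_id = 42;

-- Written as a plain subquery, the customer_id = 42 qual can be pushed
-- down below the aggregate, so only one customer's rows are read.
SELECT *
FROM (SELECT customer_id, sum(amount) AS total
      FROM orders
      GROUP BY customer_id) AS order_totals
WHERE customer_id = 42;

(PostgreSQL 12 eventually added WITH ... AS [NOT] MATERIALIZED to control this per CTE; on the releases current in this thread the rewrite has to be done by hand.)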
{
"msg_contents": "On Wed, Nov 2, 2011 at 11:13 AM, Robert Haas <[email protected]> wrote:\n\n> […] Perhaps we could let people say\n> something like WITH x AS FENCE (...) when they want the fencing\n> behavior, and otherwise assume they don't (but give it to them anyway\n> if there's a data-modifying operation in there).\n>\n\nI would love to be able to test some of our CTE queries in such a scenario.\n\nNone of them do data modification. How hard would it be to patch my own\nbuild to disable the fence unilaterally for testing purposes?\n\nOn Wed, Nov 2, 2011 at 11:13 AM, Robert Haas <[email protected]> wrote:\n[…] Perhaps we could let people say\nsomething like WITH x AS FENCE (...) when they want the fencing\nbehavior, and otherwise assume they don't (but give it to them anyway\nif there's a data-modifying operation in there).I would love to be able to test some of our CTE queries in such a scenario.None of them do data modification. How hard would it be to patch my own build to disable the fence unilaterally for testing purposes?",
"msg_date": "Wed, 2 Nov 2011 12:41:56 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate\n\thoisting, flattening, etc?"
},
{
"msg_contents": "On Wednesday 02 Nov 2011 16:13:09 Robert Haas wrote:\n> On Wed, Nov 2, 2011 at 10:38 AM, Tom Lane <[email protected]> wrote:\n> > Jay Levitt <[email protected]> writes:\n> >> So you can see where I'm going. I know if I break everything into\n> >> elegant, composable functions, it'll continue to perform poorly. If I\n> >> write one big hairy, it'll perform great but it will be difficult to\n> >> maintain, and it will be inelegant and a kitten will die. My tools\n> >> are CTEs, subqueries, aliases, SQL functions, PL/pgSQL functions, and\n> >> views (and other tools?) What optimizations do each of those prevent?\n> > \n> > plpgsql functions are black boxes to the optimizer. If you can express\n> > your functions as single SQL commands, using SQL-language functions is\n> > usually a better bet than plpgsql.\n> > \n> > CTEs are also treated as optimization fences; this is not so much an\n> > optimizer limitation as to keep the semantics sane when the CTE contains\n> > a writable query.\n> \n> I wonder if we need to rethink, though. We've gotten a number of\n> reports of problems that were caused by single-use CTEs not being\n> equivalent - in terms of performance - to a non-CTE formulation of the\n> same idea. It seems necessary for CTEs to behave this way when the\n> subquery modifies data, and there are certainly situations where it\n> could be desirable otherwise, but I'm starting to think that we\n> shouldn't do it that way by default. Perhaps we could let people say\n> something like WITH x AS FENCE (...) when they want the fencing\n> behavior, and otherwise assume they don't (but give it to them anyway\n> if there's a data-modifying operation in there).\n+1. I avoid writing CTEs in many cases where they would be very useful just \nfor that reasons.\nI don't even think some future inlining necessarily has to be restricted to \none-use cases only...\n\n+1 for making fencing behaviour as well. Currently there is no real explicit \nmethod to specify this which is necessarily future proof (WITH, OFFSET 0)...\n\n\nAndres\n",
"msg_date": "Wed, 2 Nov 2011 18:13:06 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate hoisting,\n\tflattening, etc?"
},
{
"msg_contents": "On Wed, Nov 2, 2011 at 12:13 PM, Robert Haas <[email protected]> wrote:\n> I wonder if we need to rethink, though. We've gotten a number of\n> reports of problems that were caused by single-use CTEs not being\n> equivalent - in terms of performance - to a non-CTE formulation of the\n> same idea. It seems necessary for CTEs to behave this way when the\n> subquery modifies data, and there are certainly situations where it\n> could be desirable otherwise, but I'm starting to think that we\n> shouldn't do it that way by default. Perhaps we could let people say\n> something like WITH x AS FENCE (...) when they want the fencing\n> behavior, and otherwise assume they don't (but give it to them anyway\n> if there's a data-modifying operation in there).\n\nWell, in my case, I got performance thanks to CTEs *being*\noptimization fences, letting me fiddle with query execution.\n\nAnd I mean, going from half-hour queries to 1-minute queries.\n\nIt is certainly desirable to maintain the possibility to use fences when needed.\n",
"msg_date": "Wed, 2 Nov 2011 14:22:05 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate\n\thoisting, flattening, etc?"
},
{
"msg_contents": "On 11/2/11 10:22 AM, Claudio Freire wrote:\n> On Wed, Nov 2, 2011 at 12:13 PM, Robert Haas<[email protected]> wrote:\n>> I wonder if we need to rethink, though. We've gotten a number of\n>> reports of problems that were caused by single-use CTEs not being\n>> equivalent - in terms of performance - to a non-CTE formulation of the\n>> same idea. It seems necessary for CTEs to behave this way when the\n>> subquery modifies data, and there are certainly situations where it\n>> could be desirable otherwise, but I'm starting to think that we\n>> shouldn't do it that way by default. Perhaps we could let people say\n>> something like WITH x AS FENCE (...) when they want the fencing\n>> behavior, and otherwise assume they don't (but give it to them anyway\n>> if there's a data-modifying operation in there).\n> Well, in my case, I got performance thanks to CTEs *being*\n> optimization fences, letting me fiddle with query execution.\n>\n> And I mean, going from half-hour queries to 1-minute queries.\nSame here. It was a case where I asked this group and was told that putting an \"offset 0\" fence in was probably the only way to solve it (once again reminding us that Postgres actually does have hints ... they're just called other things).\n> It is certainly desirable to maintain the possibility to use fences when needed.\nIndeed. Optimizer problems are usually fixed in due course, but these \"fences\" are invaluable when you have a dead web site that has to be fixed right now.\n\nCraig\n\n",
"msg_date": "Wed, 02 Nov 2011 12:39:36 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate\n\thoisting, flattening, etc?"
},
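The "offset 0" trick Craig mentions, sketched with hypothetical names; a subquery carrying OFFSET 0 is not flattened into the outer query, so the planner cannot move work across that boundary:

SELECT *
FROM (SELECT *
      FROM events
      WHERE event_date > now() - interval '1 day'
      OFFSET 0) AS recent
WHERE expensive_check(recent.payload);

Without the OFFSET 0 the subquery is pulled up and the ordering of the two conditions is left to the planner's cost estimates; with it, the date filter is guaranteed to run before expensive_check() ever sees a row.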
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Robert Haas [mailto:[email protected]]\n> Sent: Wednesday, November 02, 2011 11:13 AM\n> To: Tom Lane\n> Cc: Jay Levitt; [email protected]\n> Subject: Re: Guide to PG's capabilities for inlining, predicate\n> hoisting, flattening, etc?\n> .......\n> .......\n> Perhaps we could let people say\n> something like WITH x AS FENCE (...) when they want the fencing\n> behavior, and otherwise assume they don't (but give it to them anyway\n> if there's a data-modifying operation in there).\n> \n> ....\n> .... \n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\nHints.... here we come :)\n",
"msg_date": "Wed, 2 Nov 2011 16:22:36 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate hoisting,\n\tflattening, etc?"
},
{
"msg_contents": "On 03/11/11 09:22, Igor Neyman wrote:\n>\n>> -----Original Message-----\n>> From: Robert Haas [mailto:[email protected]]\n>> Sent: Wednesday, November 02, 2011 11:13 AM\n>> To: Tom Lane\n>> Cc: Jay Levitt; [email protected]\n>> Subject: Re: Guide to PG's capabilities for inlining, predicate\n>> hoisting, flattening, etc?\n>> .......\n>> .......\n>> Perhaps we could let people say\n>> something like WITH x AS FENCE (...) when they want the fencing\n>> behavior, and otherwise assume they don't (but give it to them anyway\n>> if there's a data-modifying operation in there).\n>>\n>> ....\n>> ....\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>\n> Hints.... here we come :)\n>\nIs that a hint???\n\n[Sorry, my perverse sense of humour kicked in]\n\nI too would like CTE's to take part in optimisation - as I don't like \nthe mass slaughter of kittens, but I still want to pander to my speed \naddiction.\n\nSo I think that having some sort of fence mechanism would be good.\n\n\nCheers,\nGavin\n\n\n",
"msg_date": "Thu, 03 Nov 2011 13:17:49 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate\n\thoisting, flattening, etc?"
},
{
"msg_contents": "On 11/03/2011 04:22 AM, Igor Neyman wrote:\n\n> Hints.... here we come :)\n\nPfft! No more than `VOLATILE' vs `STABLE' vs `IMMUTABLE'. It's a \nsemantic difference, not just a performance hint.\n\nThat said, I'm not actually against performance hints if done sensibly.\n\n--\nCraig Ringer\n\n",
"msg_date": "Thu, 03 Nov 2011 17:07:21 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate\n\thoisting, flattening, etc?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Thursday, November 03, 2011 5:07 AM\n> To: Igor Neyman\n> Cc: Robert Haas; Tom Lane; Jay Levitt;\[email protected]\n> Subject: Re: [PERFORM] Guide to PG's capabilities for inlining,\n> predicate hoisting, flattening, etc?\n> \n> On 11/03/2011 04:22 AM, Igor Neyman wrote:\n> \n> That said, I'm not actually against performance hints if done\nsensibly.\n> \n> --\n> Craig Ringer\n> \n\n\n> ...sensibly\nAs it is with any other feature...\n\nIgor Neyman\n",
"msg_date": "Thu, 3 Nov 2011 09:16:10 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PG's capabilities for inlining, predicate hoisting,\n\tflattening, etc?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've got a query that I need to squeeze as much speed out of as I can.\n\nWhen I execute this query, the average time it takes is about 190 ms. I \nincreased my work_mem from 1 MB to 50MB and it decreased the timing down \nto an average of 170 ms, but that's still not fast enough. This query is \nexecuted extremely frequently, so much of it should be easily cached.\n\nSome settings\nwork_mem = 50MB\nshared_buffers = 5GB\n\nI've made sure that the two tables are recently analyzed (with \ndefault_stats to 100, 400, and 1500 even), and table bloat is low (150 \nmeg table has 7 megs bloat).\n\nHere's the query:\nSELECT yankee.my_id\nFROM yankee\nINNER JOIN hotel_zulu\n ON hotel_zulu.my_id = yankee.zulu_id\n AND hotel_zulu.type IN ('string1', 'string2', 'string3', 'string4')\nWHERE yankee.your_id=402513;\n\nAnd here is a query plan.\n\nHash Join (cost=17516.470..26386.660 rows=27624 width=4) (actual time=309.194..395.135 rows=12384 loops=1)\n Hash Cond: (yankee.alpha = hotel_zulu.quebec)\n -> Bitmap Heap Scan on yankee (cost=1066.470..8605.770 rows=27624 width=20) (actual time=5.178..34.693 rows=26963 loops=1)\n Recheck Cond: (mike = 402513)\n -> Bitmap Index Scan on hotel_alpha (cost=0.000..1059.570 rows=27624 width=0) (actual time=4.770..4.770 rows=26967 loops=1)\n Index Cond: (mike = 402513)\n -> Hash (cost=14465.080..14465.080 rows=114154 width=16) (actual time=303.717..303.717 rows=129908 loops=1)\n Buckets: 4096 Batches: 8 Memory Usage: 784kB\n -> Bitmap Heap Scan on hotel_zulu (cost=2461.770..14465.080 rows=114154 width=16) (actual time=25.642..185.253 rows=129908 loops=1)\n Recheck Cond: ((two)::golf = ANY ('xray'::golf[]))\n -> Bitmap Index Scan on kilo (cost=0.000..2433.230 rows=114154 width=0) (actual time=23.887..23.887 rows=130292 loops=1)\n Index Cond: ((two)::golf = ANY ('xray'::golf[]))\n\n\n\nOne thing I notice is the rows estimated is 27624 and the actual rows \nreturned is 12384. Quite a bit different.\n\nTable 2 (known here as hotel_zulu) is being joined on zulu_id to the \nfirst table, and then a where clause on the column 'type'. There are \nsingle column indexes on each of these columns, and any multi column \nindex I put on these are just ignored by the planner.\n\nAny thoughts on ways to tweak this?\n\n- Brian F\n",
"msg_date": "Wed, 02 Nov 2011 14:12:09 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "two table join just not fast enough."
},
{
"msg_contents": "Brian Fehrle <[email protected]> writes:\n> I've got a query that I need to squeeze as much speed out of as I can.\n\nHmm ... are you really sure this is being run with work_mem = 50MB?\nThe hash join is getting \"batched\", which means the executor thinks it's\nworking under a memory constraint significantly less than the size of\nthe filtered inner relation, which should be no more than a couple\nmegabytes according to this.\n\nI'm not sure how much that will save, since the hashjoin seems to be\nreasonably speedy anyway, but there's not much other fat to trim here.\n\nOne minor suggestion is to think whether you really need string\ncomparisons here or could convert that to use of an enum type.\nString compares ain't cheap, especially not in non-C locales.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Nov 2011 19:53:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two table join just not fast enough. "
},
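A sketch of the enum conversion Tom suggests, reusing the anonymized names from the posted plan; the label list is illustrative and the ALTER assumes every existing value of hotel_zulu.type matches one of the labels:

CREATE TYPE hotel_zulu_kind AS ENUM ('string1', 'string2', 'string3', 'string4');

ALTER TABLE hotel_zulu
    ALTER COLUMN type TYPE hotel_zulu_kind
    USING type::text::hotel_zulu_kind;

-- "type IN ('string1', 'string2', ...)" now compares compact enum values
-- instead of doing locale-aware text comparisons, and the index on the
-- column is rebuilt over the enum type as part of the ALTER.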
{
"msg_contents": "Thanks Tom,\nAnd looks like I pasted an older explain plan, which is almost exactly \nthe same as the one with 50MB work_mem, except for the hash join \n'buckets' part which used more memory and only one 'bucket' so to speak.\n\nWhen running with the 50MB work_mem over 1MB work_mem, the query went \nfrom an average of 190 ms to 169 ms, so it did help some but it wasn't a \ngame changer (I even found for this specific query, 6MB of work_mem was \nthe most that would actually help me).\n\nI have other plans to try to get this thing running faster, I'll be \nexploring them tomorrow, as well as looking at using an enum type.\n\n- Brian F\n\nOn 11/02/2011 05:53 PM, Tom Lane wrote:\n> Brian Fehrle<[email protected]> writes:\n>> I've got a query that I need to squeeze as much speed out of as I can.\n> Hmm ... are you really sure this is being run with work_mem = 50MB?\n> The hash join is getting \"batched\", which means the executor thinks it's\n> working under a memory constraint significantly less than the size of\n> the filtered inner relation, which should be no more than a couple\n> megabytes according to this.\n>\n> I'm not sure how much that will save, since the hashjoin seems to be\n> reasonably speedy anyway, but there's not much other fat to trim here.\n>\n> One minor suggestion is to think whether you really need string\n> comparisons here or could convert that to use of an enum type.\n> String compares ain't cheap, especially not in non-C locales.\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 02 Nov 2011 18:33:16 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two table join just not fast enough."
},
{
"msg_contents": "On 03/11/11 09:12, Brian Fehrle wrote:\n>\n>\n> And here is a query plan.\n>\n> Hash Join (cost=17516.470..26386.660 rows=27624 width=4) (actual \n> time=309.194..395.135 rows=12384 loops=1)\n> Hash Cond: (yankee.alpha = hotel_zulu.quebec)\n> -> Bitmap Heap Scan on yankee (cost=1066.470..8605.770 rows=27624 \n> width=20) (actual time=5.178..34.693 rows=26963 loops=1)\n> Recheck Cond: (mike = 402513)\n> -> Bitmap Index Scan on hotel_alpha (cost=0.000..1059.570 \n> rows=27624 width=0) (actual time=4.770..4.770 rows=26967 loops=1)\n> Index Cond: (mike = 402513)\n> -> Hash (cost=14465.080..14465.080 rows=114154 width=16) (actual \n> time=303.717..303.717 rows=129908 loops=1)\n> Buckets: 4096 Batches: 8 Memory Usage: 784kB\n> -> Bitmap Heap Scan on hotel_zulu (cost=2461.770..14465.080 \n> rows=114154 width=16) (actual time=25.642..185.253 rows=129908 loops=1)\n> Recheck Cond: ((two)::golf = ANY ('xray'::golf[]))\n> -> Bitmap Index Scan on kilo (cost=0.000..2433.230 \n> rows=114154 width=0) (actual time=23.887..23.887 rows=130292 loops=1)\n> Index Cond: ((two)::golf = ANY ('xray'::golf[]))\n>\n\nMight be worth posting table definitions, as this plan does not \nimmediately look like it came from the query you posted. Also unless I \nam misreading the output looks like you have some custom datatypes (e.g \n'golf'), so more info there could be useful too.\n\nWhen we have that, there may be something to be learned from examining \nthe pg_stats data for the join and predicate columns used in these queries.\n\nregards\n\nMark\n",
"msg_date": "Thu, 03 Nov 2011 14:47:59 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two table join just not fast enough."
}
] |
[
{
"msg_contents": "Hi All;\n\nThe below contab2 table conmtains ~400,000 rows. This query should not \ntake this long. We've tweaked work_mem up to 50MB, ensured that the \nappropriate indexes are in place, etc...\n\nThoughts?\n\nThanks in advance\n\n\nExplain analyze:\nSELECT contab2.contacts_tab\nFROM contab2\nINNER JOIN sctab\n ON sctab.id = contab2.to_service_id\n AND sctab.type IN ('FService', 'FqService', 'LService', \n'TService')\nWHERE contab2.from_contact_id=402513;\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------- \n\n Hash Join (cost=16904.28..25004.54 rows=26852 width=4) (actual \ntime=302.621..371.599 rows=12384 loops=1)\n Hash Cond: (contab2.to_service_id = sctab.id)\n -> Bitmap Heap Scan on contab2 (cost=1036.49..8566.14 rows=26852 \nwidth=20) (actual time=5.191..32.701 rows=26963 loops=1)\n Recheck Cond: (from_contact_id = 402513)\n -> Bitmap Index Scan on index_contab2_on_from_user_id \n(cost=0.00..1029.78 rows=26852 width=0) (actual time=4.779..4.779 \nrows=26963 loops=1)\n Index Cond: (from_contact_id = 402513)\n -> Hash (cost=14445.19..14445.19 rows=113808 width=16) (actual \ntime=297.332..297.332 rows=129945 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 6092kB\n -> Bitmap Heap Scan on sctab (cost=2447.07..14445.19 \nrows=113808 width=16) (actual time=29.480..187.166 rows=129945 loops=1)\n Recheck Cond: ((type)::text = ANY \n('{FService,FqService,LService,TService}'::text[]))\n -> Bitmap Index Scan on index_sctab_on_type \n(cost=0.00..2418.62 rows=113808 width=0) (actual time=27.713..27.713 \nrows=130376 loops=1)\n Index Cond: ((type)::text = ANY \n('{FService,FqService,LService,TService}'::text[]))\n Total runtime: 382.514 ms\n(13 rows)\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Wed, 02 Nov 2011 14:21:18 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance on a simple join"
},
{
"msg_contents": "On Wed, Nov 2, 2011 at 2:21 PM, CS DBA <[email protected]> wrote:\n> Hi All;\n>\n> The below contab2 table conmtains ~400,000 rows. This query should not take\n> this long. We've tweaked work_mem up to 50MB, ensured that the appropriate\n> indexes are in place, etc...\n>\n> Thoughts?\n>\n> Thanks in advance\n\nHow long should it take? 300 milliseconds is fairly fast for mushing\n129k rows up against 26k rows and getting 12k rows back. That's 40\nrows / millisecond, which isn't too bad really.\n\n\nWhat pg version are you running? What evidence do you have that this\nis slow? i.e. other machines you've run it on where it's faster? What\nhardware (CPU, RAM, IO subsystem, OS) Are you running on?\n\n>\n>\n> Explain analyze:\n> SELECT contab2.contacts_tab\n> FROM contab2\n> INNER JOIN sctab\n> ON sctab.id = contab2.to_service_id\n> AND sctab.type IN ('FService', 'FqService', 'LService', 'TService')\n> WHERE contab2.from_contact_id=402513;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=16904.28..25004.54 rows=26852 width=4) (actual\n> time=302.621..371.599 rows=12384 loops=1)\n> Hash Cond: (contab2.to_service_id = sctab.id)\n> -> Bitmap Heap Scan on contab2 (cost=1036.49..8566.14 rows=26852\n> width=20) (actual time=5.191..32.701 rows=26963 loops=1)\n> Recheck Cond: (from_contact_id = 402513)\n> -> Bitmap Index Scan on index_contab2_on_from_user_id\n> (cost=0.00..1029.78 rows=26852 width=0) (actual time=4.779..4.779\n> rows=26963 loops=1)\n> Index Cond: (from_contact_id = 402513)\n> -> Hash (cost=14445.19..14445.19 rows=113808 width=16) (actual\n> time=297.332..297.332 rows=129945 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 6092kB\n> -> Bitmap Heap Scan on sctab (cost=2447.07..14445.19 rows=113808\n> width=16) (actual time=29.480..187.166 rows=129945 loops=1)\n> Recheck Cond: ((type)::text = ANY\n> ('{FService,FqService,LService,TService}'::text[]))\n> -> Bitmap Index Scan on index_sctab_on_type\n> (cost=0.00..2418.62 rows=113808 width=0) (actual time=27.713..27.713\n> rows=130376 loops=1)\n> Index Cond: ((type)::text = ANY\n> ('{FService,FqService,LService,TService}'::text[]))\n> Total runtime: 382.514 ms\n> (13 rows)\n>\n> --\n> ---------------------------------------------\n> Kevin Kempter - Constent State\n> A PostgreSQL Professional Services Company\n> www.consistentstate.com\n> ---------------------------------------------\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Wed, 2 Nov 2011 14:45:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on a simple join"
},
{
"msg_contents": "On 11/02/2011 02:45 PM, Scott Marlowe wrote:\n> On Wed, Nov 2, 2011 at 2:21 PM, CS DBA<[email protected]> wrote:\n>> Hi All;\n>>\n>> The below contab2 table conmtains ~400,000 rows. This query should not take\n>> this long. We've tweaked work_mem up to 50MB, ensured that the appropriate\n>> indexes are in place, etc...\n>>\n>> Thoughts?\n>>\n>> Thanks in advance\n> How long should it take? 300 milliseconds is fairly fast for mushing\n> 129k rows up against 26k rows and getting 12k rows back. That's 40\n> rows / millisecond, which isn't too bad really.\n>\n>\n> What pg version are you running? What evidence do you have that this\n> is slow? i.e. other machines you've run it on where it's faster? What\n> hardware (CPU, RAM, IO subsystem, OS) Are you running on?\n>\n>>\n>> Explain analyze:\n>> SELECT contab2.contacts_tab\n>> FROM contab2\n>> INNER JOIN sctab\n>> ON sctab.id = contab2.to_service_id\n>> AND sctab.type IN ('FService', 'FqService', 'LService', 'TService')\n>> WHERE contab2.from_contact_id=402513;\n>> QUERY\n>> PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=16904.28..25004.54 rows=26852 width=4) (actual\n>> time=302.621..371.599 rows=12384 loops=1)\n>> Hash Cond: (contab2.to_service_id = sctab.id)\n>> -> Bitmap Heap Scan on contab2 (cost=1036.49..8566.14 rows=26852\n>> width=20) (actual time=5.191..32.701 rows=26963 loops=1)\n>> Recheck Cond: (from_contact_id = 402513)\n>> -> Bitmap Index Scan on index_contab2_on_from_user_id\n>> (cost=0.00..1029.78 rows=26852 width=0) (actual time=4.779..4.779\n>> rows=26963 loops=1)\n>> Index Cond: (from_contact_id = 402513)\n>> -> Hash (cost=14445.19..14445.19 rows=113808 width=16) (actual\n>> time=297.332..297.332 rows=129945 loops=1)\n>> Buckets: 16384 Batches: 1 Memory Usage: 6092kB\n>> -> Bitmap Heap Scan on sctab (cost=2447.07..14445.19 rows=113808\n>> width=16) (actual time=29.480..187.166 rows=129945 loops=1)\n>> Recheck Cond: ((type)::text = ANY\n>> ('{FService,FqService,LService,TService}'::text[]))\n>> -> Bitmap Index Scan on index_sctab_on_type\n>> (cost=0.00..2418.62 rows=113808 width=0) (actual time=27.713..27.713\n>> rows=130376 loops=1)\n>> Index Cond: ((type)::text = ANY\n>> ('{FService,FqService,LService,TService}'::text[]))\n>> Total runtime: 382.514 ms\n>> (13 rows)\n>>\n>> --\n>> ---------------------------------------------\n>> Kevin Kempter - Constent State\n>> A PostgreSQL Professional Services Company\n>> www.consistentstate.com\n>> ---------------------------------------------\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\nAgreed. but it's not fast enough for the client. I think we're going to \nlook at creating an aggregate table or maybe partitioning\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Wed, 02 Nov 2011 15:53:28 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance on a simple join"
},
{
"msg_contents": "On Wed, Nov 2, 2011 at 3:53 PM, CS DBA <[email protected]> wrote:\n> On 11/02/2011 02:45 PM, Scott Marlowe wrote:\n>>\n>> On Wed, Nov 2, 2011 at 2:21 PM, CS DBA<[email protected]> wrote:\n>>>\n>>> Hi All;\n>>>\n>>> The below contab2 table conmtains ~400,000 rows. This query should not\n>>> take\n>>> this long. We've tweaked work_mem up to 50MB, ensured that the\n>>> appropriate\n>>> indexes are in place, etc...\n>>>\n>>> Thoughts?\n>>>\n>>> Thanks in advance\n>>\n>> How long should it take? 300 milliseconds is fairly fast for mushing\n>> 129k rows up against 26k rows and getting 12k rows back. That's 40\n>> rows / millisecond, which isn't too bad really.\n>>\n>>\n>> What pg version are you running? What evidence do you have that this\n>> is slow? i.e. other machines you've run it on where it's faster? What\n>> hardware (CPU, RAM, IO subsystem, OS) Are you running on?\n>>\n>>>\n>>> Explain analyze:\n>>> SELECT contab2.contacts_tab\n>>> FROM contab2\n>>> INNER JOIN sctab\n>>> ON sctab.id = contab2.to_service_id\n>>> AND sctab.type IN ('FService', 'FqService', 'LService',\n>>> 'TService')\n>>> WHERE contab2.from_contact_id=402513;\n>>>\n>>> QUERY\n>>> PLAN\n>>>\n>>> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Hash Join (cost=16904.28..25004.54 rows=26852 width=4) (actual\n>>> time=302.621..371.599 rows=12384 loops=1)\n>>> Hash Cond: (contab2.to_service_id = sctab.id)\n>>> -> Bitmap Heap Scan on contab2 (cost=1036.49..8566.14 rows=26852\n>>> width=20) (actual time=5.191..32.701 rows=26963 loops=1)\n>>> Recheck Cond: (from_contact_id = 402513)\n>>> -> Bitmap Index Scan on index_contab2_on_from_user_id\n>>> (cost=0.00..1029.78 rows=26852 width=0) (actual time=4.779..4.779\n>>> rows=26963 loops=1)\n>>> Index Cond: (from_contact_id = 402513)\n>>> -> Hash (cost=14445.19..14445.19 rows=113808 width=16) (actual\n>>> time=297.332..297.332 rows=129945 loops=1)\n>>> Buckets: 16384 Batches: 1 Memory Usage: 6092kB\n>>> -> Bitmap Heap Scan on sctab (cost=2447.07..14445.19\n>>> rows=113808\n>>> width=16) (actual time=29.480..187.166 rows=129945 loops=1)\n>>> Recheck Cond: ((type)::text = ANY\n>>> ('{FService,FqService,LService,TService}'::text[]))\n>>> -> Bitmap Index Scan on index_sctab_on_type\n>>> (cost=0.00..2418.62 rows=113808 width=0) (actual time=27.713..27.713\n>>> rows=130376 loops=1)\n>>> Index Cond: ((type)::text = ANY\n>>> ('{FService,FqService,LService,TService}'::text[]))\n>>> Total runtime: 382.514 ms\n>>> (13 rows)\n>>>\n>>> --\n>>> ---------------------------------------------\n>>> Kevin Kempter - Constent State\n>>> A PostgreSQL Professional Services Company\n>>> www.consistentstate.com\n>>> ---------------------------------------------\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n> Agreed. but it's not fast enough for the client. I think we're going to\n> look at creating an aggregate table or maybe partitioning\n\nTake a look here:\nhttp://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views\n",
"msg_date": "Wed, 2 Nov 2011 20:04:40 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on a simple join"
},
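On the releases current in this thread the linked technique means maintaining the summary table by hand; a minimal sketch using the table and column names from the query above (index name and refresh strategy are illustrative):

CREATE TABLE contab2_service_summary AS
SELECT c.from_contact_id, c.contacts_tab
FROM contab2 c
JOIN sctab s ON s.id = c.to_service_id
WHERE s.type IN ('FService', 'FqService', 'LService', 'TService');

CREATE INDEX contab2_service_summary_idx
    ON contab2_service_summary (from_contact_id);

-- the hot query becomes a single index scan:
SELECT contacts_tab
FROM contab2_service_summary
WHERE from_contact_id = 402513;

The summary is then kept current with triggers on contab2/sctab or by periodically truncating and re-filling it; from 9.3 onward CREATE MATERIALIZED VIEW / REFRESH MATERIALIZED VIEW packages the same pattern.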
{
"msg_contents": "On 11/02/2011 09:04 PM, Scott Marlowe wrote:\n\n> Take a look here:\n> http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views\n\nNot sure materialized views are the approach I would take here. We \nactually see a lot of these kinds of queries with giant result sets, \nhere. If they actually need all 12k rows for every execution (not \nlikely, but possible) and 300ms is just too darn slow for that, there's \nalways client-side caching.\n\nWe have a couple queries that we need to keep cached at all times. Stock \nquotes and positions, for example, have to be available in sub-ms time \nthanks to the level of parallelism involved. One query in particular \neffectively grabs the entire set of current positions and every \noptimization in the book brings its execution time down to about two \nseconds. We can't have thousands of clients executing that all the time, \nso it gets shoved into a local memcached on each webserver.\n\nBut if he's getting back 12k rows even *after* specifying a contact ID, \na materialized view is still going to return 12k rows, and still has to \nperform at least an index scan unless he creates an MV for each contact \nID (eww). This doesn't really look like fact-table territory either.\n\nI think the real question is: Why isn't 300ms fast enough? Is it because \nthe client executes this repeatedly? If so, what changes often enough it \nmust fetch all 12k rows every single time? Would implementing a \ntimestamp and only grabbing newer rows work better? Is it because of \nseveral connections each running it in parallel? Why not cache a local \ncopy and refresh periodically? Do they actually need all 12k rows every \ntime? maybe some limit and offset clauses are in order.\n\nThere's very little a human can do with 12k results. An automated tool \nshouldn't be grabbing them either, unless they're actually changing with \nevery execution. If they're not, the tool really wants items since the \nlast change, or it's doing too much work. If it were a report, 300ms is \nnothing compared to most reporting queries which can run for several \nminutes.\n\nI think we're missing a lot of details here.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 3 Nov 2011 09:28:31 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on a simple join"
}
] |
[
{
"msg_contents": "I basically have 3 tables. One being the core table and the other 2 depend\non the 1st. I have the requirement to add upto 70000 records in the tables.\nI do have constraints (primary & foreign keys, index, unique etc) set for\nthe tables. I can't go for bulk import (using COPY command) as there is no\nstandard .csv file in requirement, and the mapping is explicitly required\nplus few validations are externally applied in a C based programming file.\nEach record details (upto 70000) will be passed from .pgc (an ECPG based C\nProgramming file) to postgresql file. It takes less time for the 1st few\nrecords and the performance is turning bad to the latter records! The\nresult is very sad that it takes days to cover upto 20000! What are the\nperformance measures could I step in into this? Please guide me\n\nI basically have 3 tables. One being the core table and the \nother 2 depend on the 1st. I have the requirement to add upto 70000 \nrecords in the tables. I do have constraints (primary & foreign \nkeys, index, unique etc) set for the tables. I can't go for bulk import \n(using COPY command) as there is no standard .csv file in requirement, \nand the mapping is explicitly required plus few validations are \nexternally applied in a C based programming file. Each record details \n(upto 70000) will be passed from .pgc (an ECPG based C Programming file)\n to postgresql file. It takes less time for the 1st few records and the \nperformance is turning bad to the latter records! The result is very sad\n that it takes days to cover upto 20000! What are the performance \nmeasures could I step in into this? Please guide me",
"msg_date": "Thu, 3 Nov 2011 21:22:42 +0530",
"msg_from": "siva palanisamy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization required for multiple insertions in PostgreSQL"
},
{
"msg_contents": "siva palanisamy <[email protected]> wrote:\n \n> I basically have 3 tables. One being the core table and the other\n> 2 depend on the 1st. I have the requirement to add upto 70000\n> records in the tables. I do have constraints (primary & foreign\n> keys, index, unique etc) set for the tables. I can't go for bulk\n> import (using COPY command) as there is no standard .csv file in\n> requirement, and the mapping is explicitly required plus few\n> validations are externally applied in a C based programming file.\n> Each record details (upto 70000) will be passed from .pgc (an ECPG\n> based C Programming file) to postgresql file. It takes less time\n> for the 1st few records and the performance is turning bad to the\n> latter records! The result is very sad that it takes days to cover\n> upto 20000! What are the performance measures could I step in into\n> this? Please guide me\n \nThere's an awful lot you're not telling us, like what version of\nPostgreSQL you're using, what your hardware looks like, how many\nrows you're trying to insert per database transaction, what resource\nlooks like on the machine when it's running slow, what the specific\nslow queries are and what their execution plans look like, etc. I\ncould make a lot of guesses and take a shot in the dark with some\ngeneric advice, but you would be better served by the more specific\nadvice you will get if you provide more detail.\n \nPlease review this page (and its links) and post again:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Thu, 03 Nov 2011 11:05:58 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization required for multiple insertions in\n\t PostgreSQL"
},
{
"msg_contents": "On 3 Listopad 2011, 16:52, siva palanisamy wrote:\n> I basically have 3 tables. One being the core table and the other 2 depend\n> on the 1st. I have the requirement to add upto 70000 records in the\n> tables.\n> I do have constraints (primary & foreign keys, index, unique etc) set for\n> the tables. I can't go for bulk import (using COPY command) as there is no\n> standard .csv file in requirement, and the mapping is explicitly required\n> plus few validations are externally applied in a C based programming file.\n> Each record details (upto 70000) will be passed from .pgc (an ECPG based C\n> Programming file) to postgresql file. It takes less time for the 1st few\n> records and the performance is turning bad to the latter records! The\n> result is very sad that it takes days to cover upto 20000! What are the\n> performance measures could I step in into this? Please guide me\n\nAs Kevin already pointed out, this overall and very vague description is\nnot sufficient. We need to know at least this for starters\n\n- version of PostgreSQL\n- environment (what OS, what hardware - CPU, RAM, drives)\n- basic PostgreSQL config values (shared buffers, checkpoint segments)\n- structure of the tables, indexes etc.\n- output of vmstat/iostat collected when the inserts are slow\n\nAnd BTW the fact that you're not using a standard .csv file does not mean\nyou can't use COPY. You can either transform the file to CSV or create it\non the fly.\n\nTomas\n\n",
"msg_date": "Thu, 3 Nov 2011 17:18:00 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization required for multiple insertions in\n PostgreSQL"
},
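A minimal sketch of the COPY route mentioned above, assuming a hypothetical staging table and a CSV file generated on the fly by the C program (the table, column and file names below are placeholders, not from the original thread):

    CREATE TABLE staging_contacts (
        display_name  text,
        first_name    text,
        last_name     text,
        company_name  text
    );

    -- Server-side COPY needs superuser and a path readable by the server process;
    -- from a client, psql's \copy or a COPY ... FROM STDIN over libpq does the same job.
    COPY staging_contacts (display_name, first_name, last_name, company_name)
    FROM '/tmp/generated_contacts.csv' WITH CSV;

    -- Validated rows can then be moved into the real tables with a single
    -- INSERT ... SELECT instead of one INSERT per record.
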
{
"msg_contents": "[Please keep the list copied.]\n\nsiva palanisamy <[email protected]> wrote:\n \n> Could you pls guide me on how to minimize time consumption? I've\n> postgresql 8.1.4; Linux OS.\n \nWell, the first thing to do is to use a supported version of\nPostgreSQL. More recent releases perform better, for starters.\n \nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n \nWhichever major release you use, you should be up-to-date on bug\nfixes, some of which are fixes for bugs which cause performance\nproblems:\n \nhttp://www.postgresql.org/support/versioning\n \n> I'm yet to check its RAM and other memory capacity but I\n> guess it would've the necessary stuffs.\n \nKnowing what hardware you have, and what your current PostgreSQL\nconfiguration setting are, would allow us to suggest what you might\nreconfigure to tune your database.\n \n> My master table's schema is\n> \n> CREATE TABLE contacts ( contact_id SERIAL PRIMARY KEY,\n> contact_type INTEGER DEFAULT 0, display_name TEXT NOT NULL DEFAULT\n> '', first_name TEXT DEFAULT '', last_name TEXT DEFAULT '',\n> company_name TEXT DEFAULT '', last_updated TIMESTAMP NOT NULL\n> DEFAULT current_timestamp, UNIQUE(display_name) ) WITHOUT OIDS;\n \nNot that this is a performance issue, but you almost certainly will\nexpect the semantics provided by TIMESTAMP WITH TIME ZONE for your\nlast_updated column. Just specifying TIMESTAMP is probably going to\ngive you an unpleasant surprise somewhere down the road.\n \n> I've a sql function that is called from a C program where\n> parameters are being passed. It is replicated for the other 2\n> tables as well. Totally, I've 3 tables.\n \nWhich table is the source of your slowness, and how do you know\nthat?\n \n> FYI, database connection is opened for the 1st and closed\n> only after the last record is attempted. Do you think these\n> constraints take a lot of time?\n \nThe only constraints you've shown are PRIMARY KEY and UNIQUE. It is\nsomewhat slower to add rows with those constraints in place than to\nblast in data without the constraints and then add the constraints;\nbut I understand that if the data is not known to be clean and free\nof duplicates, that's not possible. That certainly doesn't account\nfor the timings you describe.\n \n> taking days to complete 20000 odd records are not encouraging!\n \nI've seen PostgreSQL insert more rows than that per second, so it's\nnot like it is some inherent slowness of PostgreSQL. There is\nsomething you're doing with it that is that slow. Getting onto a\nmodern version of PostgreSQL may help a lot, but most likely there's\nsomething you're not telling us yet that is the thing that really\nneeds to change.\n \nJust as one off-the-wall example of what *can* happen -- if someone\ndisabled autovacuum and had a function which did an update to all\nrows in a table each time the function was called, they would see\nperformance like you describe. How do I know, from what you've told\nme, that you're *not* doing that? Or one of a hundred other things\nI could postulate? (Hint, if you showed us your current PostgreSQL\nsettings I could probably have ruled this out.)\n \n-Kevin\n\n",
"msg_date": "Thu, 03 Nov 2011 13:32:48 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization required for multiple insertions in\n\t PostgreSQL"
}
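Two concrete follow-ups to the points above, as a rough, untested sketch based on the contacts schema quoted in this thread (the sample values are made up): switching last_updated to TIMESTAMP WITH TIME ZONE, and grouping many inserts into one transaction instead of committing row by row.

    ALTER TABLE contacts
        ALTER COLUMN last_updated TYPE timestamp with time zone;

    -- Batch many rows per transaction; committing after every row is far slower.
    BEGIN;
    INSERT INTO contacts (contact_type, display_name, first_name, last_name, company_name)
        VALUES (0, 'Example Name 1', 'Example', 'Name1', 'Example Co');
    INSERT INTO contacts (contact_type, display_name, first_name, last_name, company_name)
        VALUES (0, 'Example Name 2', 'Example', 'Name2', 'Example Co');
    -- ... a few hundred or thousand more rows ...
    COMMIT;
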
] |
[
{
"msg_contents": "I'm confused. I have a now-trivial SQL function that, unrestricted, would \nscan my whole users table. When I paste the body of the function as a \nsubquery and restrict it to one row, it only produces one row. When I paste \nthe body of the function into a view and restrict it to one row, it produces \none row. But when I put it in a SQL function... it scans the whole users \ntable and then throws the other rows away.\n\nI thought SQL functions were generally inline-able, push-down-able, etc. As \na workaround, I can put my WHERE clause inside the function and pass it \nparameters, but that feels ugly, and it won't help for things like \nresticting via JOINs. The real function needs parameters, so I can't use it \nas a view. Are there better workarounds?\n\nI suspect the problem is (something like) the planner doesn't realize the \nfunction will produce a variable number of rows; I can specify COST or ROWS, \nbut they're both fixed values.\n\nPretty-printed function and explain analyze results:\n\nhttps://gist.github.com/1336963\n\nIn ASCII for web-haters and posterity:\n\n-- THE OVERLY SIMPLIFIED FUNCTION\n\ncreate or replace function matcher()\nreturns table(user_id int, match int) as $$\n\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my;\n\n$$ language sql stable;\n\n-- WHEN I CALL IT AS A FUNCTION\n\nselect * from matcher() where user_id = 2;\n\nLOG: duration: 1.242 ms plan:\n Query Text:\n\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my;\n\n\n Nested Loop (cost=0.00..118.39 rows=1656 width=4) (actual \ntime=0.022..0.888 rows=1613 loops=1)\n Output: u.id, 1\n -> Index Scan using users_pkey on public.users u (cost=0.00..8.27 \nrows=1 width=0) (actual time=0.013..0.015 rows=1 loops=1)\n Index Cond: (u.id = 1)\n -> Seq Scan on public.users u (cost=0.00..93.56 rows=1656 width=4) \n(actual time=0.004..0.479 rows=1613 loops=1)\n Output: u.id\nCONTEXT: SQL function \"matcher\" statement 1\nLOG: duration: 1.951 ms plan:\n Query Text: select * from matcher() where user_id = 2;\n Function Scan on public.matcher (cost=0.25..12.75 rows=5 width=8) \n(actual time=1.687..1.940 rows=1 loops=1)\n Output: user_id, match\n Filter: (matcher.user_id = 2)\n\n-- WHEN I CALL IT AS A SUBQUERY\n\nselect * from\n(\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my\n) as matcher\nwhere user_id = 2;\n\nLOG: duration: 0.044 ms plan:\n Query Text: select * from\n (\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my\n ) as matcher\n where user_id = 2;\n Nested Loop (cost=0.00..16.55 rows=1 width=4) (actual \ntime=0.028..0.031 rows=1 loops=1)\n Output: u.id, 1\n -> Index Scan using users_pkey on public.users u (cost=0.00..8.27 \nrows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)\n Output: u.id\n Index Cond: (u.id = 2)\n -> Index Scan using users_pkey on public.users u (cost=0.00..8.27 \nrows=1 width=0) (actual time=0.004..0.006 rows=1 loops=1)\n Index Cond: (u.id = 1)\n\n-- WHEN I CALL IT AS A VIEW\n\ncreate 
view matchview as\nselect o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my;\n\nselect * from matchview where user_id = 2;\n\n\nLOG: duration: 0.044 ms plan:\n Query Text: select * from matchview where user_id = 2;\n Nested Loop (cost=0.00..16.55 rows=1 width=4) (actual \ntime=0.028..0.031 rows=1 loops=1)\n Output: u.id, 1\n -> Index Scan using users_pkey on public.users u (cost=0.00..8.27 \nrows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)\n Output: u.id\n Index Cond: (u.id = 2)\n -> Index Scan using users_pkey on public.users u (cost=0.00..8.27 \nrows=1 width=0) (actual time=0.005..0.007 rows=1 loops=1)\n Index Cond: (u.id = 1)\n\n",
"msg_date": "Thu, 03 Nov 2011 13:47:55 -0400",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Predicates not getting pushed into SQL function?"
},
{
"msg_contents": "Jay Levitt <[email protected]> writes:\n> I'm confused. I have a now-trivial SQL function that, unrestricted, would \n> scan my whole users table. When I paste the body of the function as a \n> subquery and restrict it to one row, it only produces one row. When I paste \n> the body of the function into a view and restrict it to one row, it produces \n> one row. But when I put it in a SQL function... it scans the whole users \n> table and then throws the other rows away.\n\n> I thought SQL functions were generally inline-able, push-down-able, etc.\n\ninline-able, yes, but if they're not inlined you don't get any such\nthing as pushdown of external conditions into the function body.\nA non-inlined function is a black box.\n\nThe interesting question here is why the function doesn't get inlined\ninto the calling query. You got the obvious showstoppers: it has a\nSETOF result, it's not volatile, nor strict. The only other possibility\nI can see offhand is that there's some sort of result datatype mismatch,\nbut you've not provided enough info to be sure about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Nov 2011 14:41:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Predicates not getting pushed into SQL function? "
},
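For reference, the parameter-passing workaround Jay mentions would look roughly like this on 9.0, where SQL-language function bodies must refer to arguments as $1 rather than by name; the restriction is applied inside the function instead of relying on pushdown (untested sketch):

    create or replace function matcher(int)
    returns table(user_id int, match int) as $$

        select o.user_id, 1 as match
        from (
            select u.id as user_id, u.gender
            from users as u
            where u.id = $1
        ) as o
        cross join
        (
            select u.id as user_id, u.gender
            from users as u
            where u.id = 1
        ) as my;

    $$ language sql stable;

    select * from matcher(2);
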
{
"msg_contents": "What other info can I \nprovide? id is int, gender is varchar(255), and it's happening on \n9.0.4...\n\n\n \nTom Lane\n\nNovember 3, 2011 \n2:41 PM\n\nJay Levitt <[email protected]> writes:\nI'm confused. I have a now-trivial SQL function that, unrestricted, would \nscan my whole users table. When I paste the body of the function as a \nsubquery and restrict it to one row, it only produces one row. When I paste \nthe body of the function into a view and restrict it to one row, it produces \none row. But when I put it in a SQL function... it scans the whole users \ntable and then throws the other rows away.\n\nI thought SQL functions were generally inline-able, push-down-able, etc.\n\ninline-able, yes, but if they're not inlined you don't get any such\nthing as pushdown of external conditions into the function body.\nA non-inlined function is a black box.\n\nThe interesting question here is why the function doesn't get inlined\ninto the calling query. You got the obvious showstoppers: it has a\nSETOF result, it's not volatile, nor strict. The only other possibility\nI can see offhand is that there's some sort of result datatype mismatch,\nbut you've not provided enough info to be sure about that.\n\n\t\t\tregards, tom lane\n\n \nJay Levitt\n\nNovember 3, 2011 \n1:47 PM\nI'm confused. I have a \nnow-trivial SQL function that, unrestricted, would \nscan my whole users table. When I paste the body of the function as a \nsubquery and restrict it to one row, it only produces one row. When I \npaste \nthe body of the function into a view and restrict it to one row, it \nproduces \none row. But when I put it in a SQL function... it scans the whole \nusers \ntable and then throws the other rows away.\n\nI thought SQL functions were generally inline-able, push-down-able, \netc. As \na workaround, I can put my WHERE clause inside the function and pass it \nparameters, but that feels ugly, and it won't help for things like \nresticting via JOINs. The real function needs parameters, so I can't \nuse it \nas a view. 
Are there better workarounds?\n\nI suspect the problem is (something like) the planner doesn't \nrealize the \nfunction will produce a variable number of rows; I can specify COST or \nROWS, \nbut they're both fixed values.\n\nPretty-printed function and explain analyze results:\n\nhttps://gist.github.com/1336963\n\nIn ASCII for web-haters and posterity:\n\n-- THE OVERLY SIMPLIFIED FUNCTION\n\ncreate or replace function matcher()\nreturns table(user_id int, match int) as $$\n\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my;\n\n$$ language sql stable;\n\n-- WHEN I CALL IT AS A FUNCTION\n\nselect * from matcher() where user_id = 2;\n\nLOG: duration: 1.242 ms plan:\n Query Text:\n\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my;\n\n\n Nested Loop (cost=0.00..118.39 rows=1656 width=4) (actual \ntime=0.022..0.888 rows=1613 loops=1)\n Output: u.id, 1\n -> Index Scan using users_pkey on public.users u \n(cost=0.00..8.27 \nrows=1 width=0) (actual time=0.013..0.015 rows=1 loops=1)\n Index Cond: (u.id = 1)\n -> Seq Scan on public.users u (cost=0.00..93.56 rows=1656\n width=4) \n(actual time=0.004..0.479 rows=1613 loops=1)\n Output: u.id\nCONTEXT: SQL function \"matcher\" statement 1\nLOG: duration: 1.951 ms plan:\n Query Text: select * from matcher() where user_id = 2;\n Function Scan on public.matcher (cost=0.25..12.75 rows=5 \nwidth=8) \n(actual time=1.687..1.940 rows=1 loops=1)\n Output: user_id, match\n Filter: (matcher.user_id = 2)\n\n-- WHEN I CALL IT AS A SUBQUERY\n\nselect * from\n(\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my\n) as matcher\nwhere user_id = 2;\n\nLOG: duration: 0.044 ms plan:\n Query Text: select * from\n (\n select o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my\n ) as matcher\n where user_id = 2;\n Nested Loop (cost=0.00..16.55 rows=1 width=4) (actual \ntime=0.028..0.031 rows=1 loops=1)\n Output: u.id, 1\n -> Index Scan using users_pkey on public.users u \n(cost=0.00..8.27 \nrows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)\n Output: u.id\n Index Cond: (u.id = 2)\n -> Index Scan using users_pkey on public.users u \n(cost=0.00..8.27 \nrows=1 width=0) (actual time=0.004..0.006 rows=1 loops=1)\n Index Cond: (u.id = 1)\n\n-- WHEN I CALL IT AS A VIEW\n\ncreate view matchview as\nselect o.user_id, 1 as match\n from (\n select u.id as user_id, u.gender\n from users as u\n ) as o\n cross join\n (\n select u.id as user_id, u.gender\n from users as u\n where u.id = 1\n ) as my;\n\nselect * from matchview where user_id = 2;\n\n\nLOG: duration: 0.044 ms plan:\n Query Text: select * from matchview where user_id = 2;\n Nested Loop (cost=0.00..16.55 rows=1 width=4) (actual \ntime=0.028..0.031 rows=1 loops=1)\n Output: u.id, 1\n -> Index Scan using users_pkey on public.users u \n(cost=0.00..8.27 \nrows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)\n Output: u.id\n Index Cond: (u.id = 2)\n -> Index Scan using users_pkey on public.users u \n(cost=0.00..8.27 \nrows=1 width=0) (actual 
time=0.005..0.007 rows=1 loops=1)\n Index Cond: (u.id = 1)",
"msg_date": "Thu, 03 Nov 2011 14:49:53 -0400",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Predicates not getting pushed into SQL function?"
},
{
"msg_contents": "Jay Levitt <[email protected]> writes:\n> <html><head>\n> <meta content=\"text/html; charset=ISO-8859-1\" http-equiv=\"Content-Type\">\n> </head><body bgcolor=\"#FFFFFF\" text=\"#000000\">What other info can I \n> provide? id is int, gender is varchar(255), and it's happening on \n> 9.0.4...<br>\n> <blockquote style=\"border: 0px none;\" \n> [ etc etc ]\n\nPlease don't send HTML-only email to these lists.\n\nAnyway, the answer seems to be that inline_set_returning_function needs\nsome work to handle cases with declared OUT parameters. I will see\nabout fixing that going forward, but in existing releases what you need\nto do is declare the function as returning SETOF some named composite\ntype, eg\n\ncreate type matcher_result as (user_id int, match int);\n\ncreate or replace function matcher() returns setof matcher_result as ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Nov 2011 16:35:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Predicates not getting pushed into SQL function? "
},
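Spelled out against the simplified function from earlier in the thread, the suggested workaround for existing releases would look something like this (untested sketch):

    create type matcher_result as (user_id int, match int);

    create or replace function matcher()
    returns setof matcher_result as $$

        select o.user_id, 1 as match
        from (
            select u.id as user_id, u.gender
            from users as u
        ) as o
        cross join
        (
            select u.id as user_id, u.gender
            from users as u
            where u.id = 1
        ) as my;

    $$ language sql stable;

    select * from matcher() where user_id = 2;
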
{
"msg_contents": "Tom Lane wrote:\n > Please don't send HTML-only email to these lists.\n\nOops - new mail client, sorry.\n\n > Anyway, the answer seems to be that inline_set_returning_function needs\n > some work to handle cases with declared OUT parameters. I will see\n > about fixing that going forward, but in existing releases what you need\n > to do is declare the function as returning SETOF some named composite\n > type\n\n\nYes, that patch works great! Oddly enough, the workaround now does NOT \nwork; functions returning SETOF named composite types don't get inlined, but \nfunctions returning the equivalent TABLE do get inlined. Let me know if you \nneed a failcase, but the bug doesn't actually affect me now :)\n\nJay\n\n >\n > create type matcher_result as (user_id int, match int);\n >\n > create or replace function matcher() returns setof matcher_result as ...\n\n",
"msg_date": "Mon, 07 Nov 2011 15:13:43 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Predicates not getting pushed into SQL function?"
},
{
"msg_contents": "Jay Levitt wrote:\n> Yes, that patch works great! Oddly enough, the workaround now does NOT work;\n> functions returning SETOF named composite types don't get inlined, but\n> functions returning the equivalent TABLE do get inlined. Let me know if you\n> need a failcase, but the bug doesn't actually affect me now :)\n\nNever mind... I left a \"strict\" in my test. Works great all around.\n",
"msg_date": "Mon, 07 Nov 2011 15:15:23 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Predicates not getting pushed into SQL function?"
}
] |
[
{
"msg_contents": "Hi list,\n\nI've been experiencing a weird performance issue lately.\n\nI have a very simple (and usually very fast) query:\n\nSELECT track_logs.id\nFROM track_logs\nWHERE track_logs.track_id = <some id> AND track_logs.track_status_id =\n1 AND track_logs.date >= now() - interval '1 hours'\nFOR UPDATE\t\n\nWhose plan is:\n\n\"LockRows (cost=0.00..26.73 rows=1 width=14)\"\n\" -> Index Scan using idx_track_logs_track_id on track_logs\n(cost=0.00..26.72 rows=1 width=14)\"\n\" Index Cond: (track_id = <some id>)\"\n\" Filter: ((track_status_id = 1) AND (date >= (now() -\n'01:00:00'::interval)))\"\n\nThe same query, without FOR UPDATE, takes just 68 milliseconds.\n\nWith the FOR UPDATE, it takes like half a minute or more to finish.\n\nNow, I understand the for update part may be blocking on some other\ntransaction, and it's probably the case.\nBut I cannot figure out which transaction it would be. There *are*, in\nfact, connections in <idle in transaction> state, which makes me think\nthose would be the culprit. But for the life of me, I cannot make\nsense of the pg_locks view, which shows all locks as granted:\n\n\nPID Relation XID\t\t\tTX\t\t\tMode\t\t\tGranted \tStart\n14751\t5551986\t\t\t\t154/4038460\tAccessShareLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5526310\t\t\t\t154/4038460\tRowShareLock\t\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5552008\t\t\t\t154/4038460\tRowExclusiveLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5552020\t\t\t\t154/4038460\tRowExclusiveLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5552008\t\t\t\t154/4038460\tAccessShareLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5525296\t\t\t\t154/4038460\tRowShareLock\t\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5525292\t\t\t\t154/4038460\tRowShareLock\t\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5552019\t\t\t\t154/4038460\tAccessShareLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5552019\t\t\t\t154/4038460\tRowExclusiveLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5552020\t\t\t\t154/4038460\tAccessShareLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5525292\t\t\t\t154/4038460\tRowExclusiveLock\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t\t\t154/4038460\t154/4038460\tExclusiveLock\t\tYes\t\t2011-11-03\n12:45:03.551516-05\n14751\t\t\t\t\t\t154/4038460\tExclusiveLock\t\tYes\t\t2011-11-03 12:45:03.551516-05\n14751\t5526308\t\t\t\t154/4038460\tAccessShareLock\tYes\t\t2011-11-03 12:45:03.551516-05\n\nWhere should I look?\nWhat other information should I provide?\n",
"msg_date": "Thu, 3 Nov 2011 14:51:24 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Thu, Nov 3, 2011 at 2:51 PM, Claudio Freire <[email protected]> wrote:\n> What other information should I provide?\n\nForgot all the usual details:\n\nServer is postgresql 9.0.3, running in linux, quite loaded (load\naverage ~7), WAL on raid 1 2 spindles, data on raid 10 4 spindles, 16G\nRAM.\n\nCould it be high contention between the worker processes? (because of\nthe high load)\n",
"msg_date": "Thu, 3 Nov 2011 14:55:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> The same query, without FOR UPDATE, takes just 68 milliseconds.\n> With the FOR UPDATE, it takes like half a minute or more to finish.\n\n> Now, I understand the for update part may be blocking on some other\n> transaction, and it's probably the case.\n\nYeah, that's what I'd guess.\n\n> But I cannot figure out which transaction it would be. There *are*, in\n> fact, connections in <idle in transaction> state, which makes me think\n> those would be the culprit. But for the life of me, I cannot make\n> sense of the pg_locks view, which shows all locks as granted:\n\nA block on a row would typically show up as one transaction waiting on\nanother's XID. Did you capture this *while* the query was blocked?\nAlso, I'm suspicious that you may be using a view that filters out\nthe relevant lock types --- that's obviously not a raw display of\npg_locks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Nov 2011 14:45:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE "
},
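A quick way to check for the pattern Tom describes (waiting on another transaction's XID) is to look only at ungranted entries in pg_locks and match them to whoever holds that xid; a rough example against the 9.0 catalogs:

    -- Any lock requests currently waiting?
    select pid, locktype, mode, relation::regclass, transactionid
    from pg_locks
    where not granted;

    -- For tuple-lock waits: which backend holds the xid being waited on?
    select waiter.pid as waiting_pid,
           holder.pid as holding_pid,
           waiter.transactionid
    from pg_locks waiter
    join pg_locks holder
      on holder.locktype = 'transactionid'
     and holder.transactionid = waiter.transactionid
     and holder.granted
    where waiter.locktype = 'transactionid'
      and not waiter.granted;
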
{
"msg_contents": "On Thu, Nov 3, 2011 at 3:45 PM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> But I cannot figure out which transaction it would be. There *are*, in\n>> fact, connections in <idle in transaction> state, which makes me think\n>> those would be the culprit. But for the life of me, I cannot make\n>> sense of the pg_locks view, which shows all locks as granted:\n>\n> A block on a row would typically show up as one transaction waiting on\n> another's XID. Did you capture this *while* the query was blocked?\n\nYes\n\n> Also, I'm suspicious that you may be using a view that filters out\n> the relevant lock types --- that's obviously not a raw display of\n> pg_locks.\n\nIt's pgadmin, which I usually use to monitor pg_stats_activity and\npg_locks in a \"pretty\" view.\npg_locks does not show the query, only the pid, so it's harder to spot.\n\nNext time I find it blocking, I will check pg_locks directly and post\nthe output.\n\nI did that once, and they were all granted. I didn't correlate with\nother XIDs since I thought the \"granted\" column meant it wasn't\nwaiting. Is that wrong?\n",
"msg_date": "Thu, 3 Nov 2011 16:29:02 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Thu, Nov 3, 2011 at 4:29 PM, Claudio Freire <[email protected]> wrote:\n> Next time I find it blocking, I will check pg_locks directly and post\n> the output.\n\nHere it is, two instances of the query, while blocked:\n\nselect * from pg_locks where pid = 22636;\n\n locktype | database | relation | page | tuple | virtualxid |\ntransactionid | classid | objid | objsubid | virtualtransaction | pid\n | mode | granted\n---------------+----------+----------+------+-------+-------------+---------------+---------+-------+----------+--------------------+-------+------------------+---------\n transactionid | | | | | |\n 360992199 | | | | 89/22579344 | 22636 |\nExclusiveLock | t\n virtualxid | | | | | 89/22579344 |\n | | | | 89/22579344 | 22636 |\nExclusiveLock | t\n relation | 16398 | 5552020 | | | |\n | | | | 89/22579344 | 22636 |\nAccessShareLock | t\n relation | 16398 | 5552020 | | | |\n | | | | 89/22579344 | 22636 |\nRowExclusiveLock | t\n relation | 16398 | 5552019 | | | |\n | | | | 89/22579344 | 22636 |\nAccessShareLock | t\n relation | 16398 | 5552019 | | | |\n | | | | 89/22579344 | 22636 |\nRowExclusiveLock | t\n relation | 16398 | 5525292 | | | |\n | | | | 89/22579344 | 22636 |\nRowShareLock | t\n relation | 16398 | 5525292 | | | |\n | | | | 89/22579344 | 22636 |\nRowExclusiveLock | t\n relation | 16398 | 5552008 | | | |\n | | | | 89/22579344 | 22636 |\nAccessShareLock | t\n relation | 16398 | 5552008 | | | |\n | | | | 89/22579344 | 22636 |\nRowExclusiveLock | t\n(10 rows)\n\nselect * from pg_locks where pid = 22618;\n\n locktype | database | relation | page | tuple | virtualxid |\ntransactionid | classid | objid | objsubid | virtualtransaction | pid\n | mode | granted\n---------------+----------+----------+------+-------+-------------+---------------+---------+-------+----------+--------------------+-------+------------------+---------\n virtualxid | | | | | 159/2706505 |\n | | | | 159/2706505 | 22618 |\nExclusiveLock | t\n relation | 16398 | 5551986 | | | |\n | | | | 159/2706505 | 22618 |\nAccessShareLock | t\n transactionid | | | | | |\n 360992478 | | | | 159/2706505 | 22618 |\nExclusiveLock | t\n relation | 16398 | 5552008 | | | |\n | | | | 159/2706505 | 22618 |\nAccessShareLock | t\n relation | 16398 | 5552008 | | | |\n | | | | 159/2706505 | 22618 |\nRowExclusiveLock | t\n relation | 16398 | 5526310 | | | |\n | | | | 159/2706505 | 22618 |\nRowShareLock | t\n relation | 16398 | 5552020 | | | |\n | | | | 159/2706505 | 22618 |\nAccessShareLock | t\n relation | 16398 | 5552020 | | | |\n | | | | 159/2706505 | 22618 |\nRowExclusiveLock | t\n relation | 16398 | 5526308 | | | |\n | | | | 159/2706505 | 22618 |\nAccessShareLock | t\n relation | 16398 | 5552019 | | | |\n | | | | 159/2706505 | 22618 |\nAccessShareLock | t\n relation | 16398 | 5552019 | | | |\n | | | | 159/2706505 | 22618 |\nRowExclusiveLock | t\n relation | 16398 | 5525296 | | | |\n | | | | 159/2706505 | 22618 |\nRowShareLock | t\n relation | 16398 | 5525292 | | | |\n | | | | 159/2706505 | 22618 |\nRowShareLock | t\n relation | 16398 | 5525292 | | | |\n | | | | 159/2706505 | 22618 |\nRowExclusiveLock | t\n(14 rows)\n",
"msg_date": "Thu, 3 Nov 2011 16:53:41 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Thu, Nov 3, 2011 at 4:29 PM, Claudio Freire <[email protected]> wrote:\n>> Next time I find it blocking, I will check pg_locks directly and post\n>> the output.\n\n> Here it is, two instances of the query, while blocked:\n\nHmm ... definitely seems that you're not blocked on a FOR UPDATE tuple\nlock. If you were, there'd be an ungranted ShareLock on a transactionid\nin there.\n\nIt seems possible that you're blocked on an LWLock, which would not show\nin pg_locks. But before pursuing that idea, probably first you should\nback up and confirm whether the process is actually waiting, or running,\nor just really slow due to CPU contention. It might be useful to see\nwhat strace has to say about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Nov 2011 19:45:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE "
},
{
"msg_contents": "On Thu, Nov 3, 2011 at 8:45 PM, Tom Lane <[email protected]> wrote:\n> But before pursuing that idea, probably first you should\n> back up and confirm whether the process is actually waiting, or running,\n> or just really slow due to CPU contention. It might be useful to see\n> what strace has to say about it.\n\nThanks for the tip, it seems locks had nothing to do with it.\n\nstrace suggests those queries get blocked on write().\n\nThis is an explain analyze without for update:\n\n\"Index Scan using idx_track_logs_track_id on track_logs\n(cost=0.00..26.75 rows=1 width=8) (actual time=0.056..38.119 rows=1\nloops=1)\"\n\" Index Cond: (track_id = <some id>)\"\n\" Filter: ((track_status_id = 1) AND (date >= (now() - '01:00:00'::interval)))\"\n\" Buffers: shared hit=140 read=3127\"\n\"Total runtime: 38.147 ms\"\n\nThis is with for update that goes fast:\n\n\"LockRows (cost=0.00..26.76 rows=1 width=14) (actual\ntime=0.075..37.420 rows=1 loops=1)\"\n\" Buffers: shared hit=63 read=3205\"\n\" -> Index Scan using idx_track_logs_track_id on track_logs\n(cost=0.00..26.75 rows=1 width=14) (actual time=0.058..37.402 rows=1\nloops=1)\"\n\" Index Cond: (track_id = <some id>)\"\n\" Filter: ((track_status_id = 1) AND (date >= (now() -\n'01:00:00'::interval)))\"\n\" Buffers: shared hit=62 read=3205\"\n\"Total runtime: 37.462 ms\"\n\nI cannot hit one that goes slow yet, but when I did (and didn't\ncapture the output :( ) it was kinda like:\n\n\n\"LockRows (cost=0.00..26.76 rows=1 width=14) (actual\ntime=0.075..37.420 rows=1 loops=1)\"\n\" Buffers: shared hit=63 read=3205\"\n\" -> Index Scan using idx_track_logs_track_id on track_logs\n(cost=0.00..26.75 rows=1 width=14) (actual time=0.058..37.402 rows=1\nloops=1)\"\n\" Index Cond: (track_id = <some id>)\"\n\" Filter: ((track_status_id = 1) AND (date >= (now() -\n'01:00:00'::interval)))\"\n\" Buffers: shared hit=62 read=3205 written=135\"\n\"Total runtime: 37000.462 ms\"\n\nNow, I'm thinking those writes are catching the DB at a bad moment -\nwe do have regular very write-intensive peaks.\n\nMaybe I should look into increasing shared buffers?\nCheckpoints are well spread and very fast\n\nWhat are those writes about? HOT vacuuming perhaps?\n",
"msg_date": "Fri, 4 Nov 2011 13:07:38 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 12:07 PM, Claudio Freire <[email protected]> wrote:\n> What are those writes about? HOT vacuuming perhaps?\n\nEvery tuple lock requires dirtying the page. Those writes are all\nthose dirty pages getting flushed out to disk. It's possible that the\nOS is allowing the writes to happen asynchronously for a while, but\nthen when you get too much dirty data in the cache, it starts\nblocking.\n\nThe only thing I'm fuzzy about is why it's locking so many rows, given\nthat the output says rows=1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 4 Nov 2011 12:10:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "Claudio Freire <[email protected]> wrote:\n \n> Now, I'm thinking those writes are catching the DB at a bad moment\n-\n> we do have regular very write-intensive peaks.\n> \n> Maybe I should look into increasing shared buffers?\n \nAs already pointed out, SELECT FOR UPDATE will require a disk write\nof the tuple(s) read. If these are glutting, increasing\nshared_buffers would tend to make things worse. You might want to\nmake the background writer more aggressive or *reduce*\nshared_buffers to better spread the output.\n \nHopefully you have a BBU RAID controller configured for write-back. \nIf not you should. If you do, another thing which might help is\nincreasing the cache on that controller. Or you could move WAL to a\nseparate file system on a separate controller with its own BBU\nwrite-back cache.\n \n-Kevin\n",
"msg_date": "Fri, 04 Nov 2011 11:26:57 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner\n<[email protected]> wrote:\n> As already pointed out, SELECT FOR UPDATE will require a disk write\n> of the tuple(s) read. If these are glutting, increasing\n> shared_buffers would tend to make things worse.\n\nI thought shared_buffers improved write caching.\nWe do tend to write onto the same rows over and over.\n\n> Hopefully you have a BBU RAID controller configured for write-back.\n> If not you should. If you do, another thing which might help is\n> increasing the cache on that controller.\n\nWrite-back on the RAID is another issue.\nThe controller is old and has a very small cache, we're already in the\nprocess of securing a replacement.\n\n> Or you could move WAL to a\n> separate file system on a separate controller with its own BBU\n> write-back cache.\n\nWAL is already on a separate controller and array.\n",
"msg_date": "Fri, 4 Nov 2011 13:48:51 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "Claudio Freire <[email protected]> wrote:\n> On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> As already pointed out, SELECT FOR UPDATE will require a disk\n>> write of the tuple(s) read. If these are glutting, increasing\n>> shared_buffers would tend to make things worse.\n> \n> I thought shared_buffers improved write caching.\n> We do tend to write onto the same rows over and over.\n \nPostgreSQL is very aggressive about holding on to dirty buffers as\nlong as possible, in hopes of reducing duplicate page writes. This\ncan work against the goal of consistent latency. In our shop we\nneeded to make the background writer more aggressive and keep shared\nbuffers in the 0.5GB to 2GB range (depending on hardware and\nworkload) to prevent write gluts leading to latency spikes. In the\nmildly surprising department, the OS seemed to do a decent job of\nspotting pages receiving repeated writes and hold back on an OS\nlevel write, while pushing other pages out in a more timely fashion\n-- there was no discernible increase in OS write activity from\nmaking these changes. I know other people have had other\nexperiences, based on their workloads (and OS versions?).\n \nBefore anything else, you might want to make sure you've spread your\ncheckpoint activity as much as possible by setting\ncheckpoint_completion_target = 0.9.\n \n-Kevin\n",
"msg_date": "Fri, 04 Nov 2011 12:07:49 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
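When comparing notes on these knobs, it can help to pull the live values straight from pg_settings rather than from memory; for example:

    select name, setting, unit, source
    from pg_settings
    where name in ('shared_buffers',
                   'bgwriter_delay',
                   'bgwriter_lru_maxpages',
                   'wal_writer_delay',
                   'commit_delay',
                   'checkpoint_segments',
                   'checkpoint_timeout',
                   'checkpoint_completion_target')
    order by name;
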
{
"msg_contents": "On Fri, Nov 4, 2011 at 2:07 PM, Kevin Grittner\n<[email protected]> wrote:\n> Before anything else, you might want to make sure you've spread your\n> checkpoint activity as much as possible by setting\n> checkpoint_completion_target = 0.9.\n\nWe have\n\nshared_buffers = 2G\nbgwriter_delay = 1000ms\neffective_io_concurrency=8\nsynchronous_commit=off\nwal_buffers=16M\nwal_writer_delay=2000ms\ncommit_delay=10000\ncheckpoint_segments=72\ncheckpoint_timeout=60min\ncheckpoint_completion_target=0.8\n\nI'm thinking bgwriter_delay and wal_writer_delay might not be working\nas I expected, and that maybe checkpoint_segments=72 is a bit too\nhigh, but we were having much worse I/O storms before I pushed it that\nhigh. Looking at checkpoint logging for the last few days, it goes\nalmost always like:\n\ncheckpoint complete: wrote 589 buffers (3.6%); 0 transaction log\nfile(s) added, 0 removed, 8 recycled; write=590.325 s, sync=0.055 s,\ntotal=590.417 s\n\n590s seems an awful lot for 589 buffers.\n",
"msg_date": "Fri, 4 Nov 2011 14:22:42 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On 11/04/2011 12:22 PM, Claudio Freire wrote:\n\n> bgwriter_delay = 1000ms\n> wal_writer_delay=2000ms\n> commit_delay=10000\n\n!?\n\nMaybe someone can back me up on this, but my interpretation of these \nsettings suggests they're *way* too high. That commit_delay especially \nmakes me want to cry. From the manual:\n\n\"Setting commit_delay can only help when there are many concurrently \ncommitting transactions, and it is difficult to tune it to a value that \nactually helps rather than hurt throughput.\"\n\nMeaning it may halt all of your commits up to *ten seconds* if it \ndoesn't think there was enough activity to warrant a write. Ouch.\n\nYour bgwriter_delay and wal_writer_delay settings are equally odd. \nYou've made the background writer so passive that when it does finally \nrun, it's going to have a ton of work to do, causing giant write spikes. \nI'm not sure whether or not this also causes compounding problems with \nlocks and backend write delays.\n\n> checkpoint complete: wrote 589 buffers (3.6%); 0 transaction log\n> file(s) added, 0 removed, 8 recycled; write=590.325 s, sync=0.055 s,\n> total=590.417 s\n>\n> 590s seems an awful lot for 589 buffers.\n\nYou're misinterpreting this. The checkpoint completion target is \nmultiplied against your checkpoint timeout. 590 seconds is roughly ten \nminutes, and for 589 buffers, it wrote one per second. That's about as \nslow as it can possibly write that many buffers. It had *up to* 24 \nminutes, and if it had more buffers available to write, it would have \nwritten them. The number you really care about is \"sync=0.055 s\" which \nis how much time the controller spent syncing that data to disk.\n\nIf you're having real problems writing or lock delays due to IO stalls, \nyou'll see that sync parameter shoot way up. This can also be elevated \nin certain NVRAM-based solutions. Once you start seeing whole seconds, \nor minutes, it might actually matter.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Fri, 4 Nov 2011 13:26:39 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 3:26 PM, Shaun Thomas <[email protected]> wrote:\n> On 11/04/2011 12:22 PM, Claudio Freire wrote:\n>\n>> bgwriter_delay = 1000ms\n>> wal_writer_delay=2000ms\n>> commit_delay=10000\n>\n> !?\n>snip\n> \"Setting commit_delay can only help when there are many concurrently\n> committing transactions, and it is difficult to tune it to a value that\n> actually helps rather than hurt throughput.\"\n>\n> Meaning it may halt all of your commits up to *ten seconds* if it doesn't\n> think there was enough activity to warrant a write. Ouch.\n\nI think you're misinterpreting the value.\nIt's in microseconds, that's 10 *milli*seconds\n\n> Your bgwriter_delay and wal_writer_delay settings are equally odd. You've\n> made the background writer so passive that when it does finally run, it's\n> going to have a ton of work to do, causing giant write spikes. I'm not sure\n> whether or not this also causes compounding problems with locks and backend\n> write delays.\n\nI don't think 1 second can be such a big difference for the bgwriter,\nbut I might be wrong.\n\nThe wal_writer makes me doubt, though. If logged activity was higher\nthan 8MB/s, then that setting would block it all.\nI guess I really should lower it.\n\n> The number you\n> really care about is \"sync=0.055 s\" which is how much time the controller\n> spent syncing that data to disk.\n>\n> If you're having real problems writing or lock delays due to IO stalls,\n> you'll see that sync parameter shoot way up. This can also be elevated in\n> certain NVRAM-based solutions. Once you start seeing whole seconds, or\n> minutes, it might actually matter.\n\nNice to know, I thought so, now I know so.\n:-)\n\nSo... I'm leaning towards lowering wal_writer_delay and see how it goes.\n",
"msg_date": "Fri, 4 Nov 2011 15:45:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire <[email protected]> wrote:\n> I don't think 1 second can be such a big difference for the bgwriter,\n> but I might be wrong.\n\nWell, the default value is 200 ms. And I've never before heard of\nanyone tuning it up, except maybe to save on power consumption on a\nsystem with very low utilization. Nearly always you want to reduce\nit.\n\n> The wal_writer makes me doubt, though. If logged activity was higher\n> than 8MB/s, then that setting would block it all.\n> I guess I really should lower it.\n\nHere again, you've set it to ten times the default value. That\ndoesn't seem like a good idea. I would start with the default and\ntune down.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 4 Nov 2011 14:54:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On 11/04/2011 01:45 PM, Claudio Freire wrote:\n\n> I think you're misinterpreting the value.\n> It's in microseconds, that's 10 *milli*seconds\n\nWow. My brain totally skimmed over that section. Everything else is in \nmilliseconds, so I never even considered it. Sorry about that!\n\nI stand by everything else though. ;)\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email-disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Fri, 4 Nov 2011 14:00:50 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 3:54 PM, Robert Haas <[email protected]> wrote:\n> On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire <[email protected]> wrote:\n>> I don't think 1 second can be such a big difference for the bgwriter,\n>> but I might be wrong.\n>\n> Well, the default value is 200 ms. And I've never before heard of\n> anyone tuning it up, except maybe to save on power consumption on a\n> system with very low utilization. Nearly always you want to reduce\n> it.\n\nWill try\n\n>> The wal_writer makes me doubt, though. If logged activity was higher\n>> than 8MB/s, then that setting would block it all.\n>> I guess I really should lower it.\n>\n> Here again, you've set it to ten times the default value. That\n> doesn't seem like a good idea. I would start with the default and\n> tune down.\n\nAlready did that. Waiting to see how it turns out.\n",
"msg_date": "Fri, 4 Nov 2011 16:07:07 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
},
{
"msg_contents": "On Fri, Nov 4, 2011 at 4:07 PM, Claudio Freire <[email protected]> wrote:\n>> Here again, you've set it to ten times the default value. That\n>> doesn't seem like a good idea. I would start with the default and\n>> tune down.\n>\n> Already did that. Waiting to see how it turns out.\n\nNope, still happening with those changes.\n\nThough it did make sense that those settings were too high, it didn't\nfix the strange blocking.\n\nIs it possible that the query is locking all the tuples hit, rather\nthan only the ones selected?\n\nBecause the index used to reach the tuple has to walk across around 3k\ntuples before finding the one that needs locking. They're supposed to\nbe in memory already (they're quite hot), that's why selecting is\nfast, but maybe it's trying to lock all 3k tuples?\n\nI don't know, I'm just throwing punches blindly at this point.\n",
"msg_date": "Mon, 7 Nov 2011 17:26:13 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking excessively in FOR UPDATE"
}
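One thing that might reduce how many tuples the index scan has to walk and re-check before reaching the row of interest is a narrower index matched to the query's filter. A speculative sketch, reusing the column names from the query at the top of this thread (the index name is made up):

    -- Only rows with track_status_id = 1 are ever selected FOR UPDATE here, so a
    -- partial index on (track_id, date) restricted to that status keeps the scan short.
    create index concurrently idx_track_logs_track_open
        on track_logs (track_id, date)
        where track_status_id = 1;
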
] |
[
{
"msg_contents": "Hi ,\n\nWhile performing full vacuum we encountered the error below:\n\n\nINFO: vacuuming \"pg_catalog.pg_index\"\nvacuumdb: vacuuming of database \"xxxx\" failed: ERROR: duplicate key value\nviolates unique constraint \"ccccc\"\nDETAIL: Key (indexrelid)=(2678) already exists.\n\nWe are using Postgres 9.0.1\n\nCan you please help us out in understanding the cause of this error?\n\nRegards,\nBhakti\n\nHi ,While performing full vacuum we encountered the error below:INFO: vacuuming \"pg_catalog.pg_index\"\nvacuumdb: vacuuming of database \"xxxx\" failed: ERROR: duplicate key value violates unique constraint \"ccccc\"DETAIL: Key (indexrelid)=(2678) already exists.We are using Postgres 9.0.1\nCan you please help us out in understanding the cause of this error?Regards,Bhakti",
"msg_date": "Fri, 4 Nov 2011 17:44:21 +0530",
"msg_from": "Bhakti Ghatkar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error while vacuuming"
},
{
"msg_contents": "Hi,\n\nMay be corrupt index, have you tried REINDEX?\n\n(btw I failed to see how it is related to performance)\n\n\nBhakti Ghatkar <bghatkar 'at' zedo.com> writes:\n\n> Hi ,\n>\n> While performing full vacuum we encountered the error below:\n>\n>\n> INFO: �vacuuming \"pg_catalog.pg_index\"\n> vacuumdb: vacuuming of database \"xxxx\" failed: ERROR: �duplicate key value\n> violates unique constraint \"ccccc\"\n> DETAIL: �Key (indexrelid)=(2678) already exists.\n>\n> We are using Postgres 9.0.1\n>\n> Can you please help us out in understanding the cause of this error?\n>\n> Regards,\n> Bhakti\n>\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "Fri, 04 Nov 2011 14:21:53 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error while vacuuming"
},
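Before reindexing a system catalog it may be worth confirming whether pg_index really contains duplicate rows, or whether only its unique index is confused; a simple diagnostic along these lines:

    select indexrelid, count(*)
    from pg_catalog.pg_index
    group by indexrelid
    having count(*) > 1;
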
{
"msg_contents": "Bhakti Ghatkar <[email protected]> writes:\n> Hi ,\n> While performing full vacuum we encountered the error below:\n\n\n> INFO: vacuuming \"pg_catalog.pg_index\"\n> vacuumdb: vacuuming of database \"xxxx\" failed: ERROR: duplicate key value\n> violates unique constraint \"ccccc\"\n> DETAIL: Key (indexrelid)=(2678) already exists.\n\n> We are using Postgres 9.0.1\n\n> Can you please help us out in understanding the cause of this error?\n\nTry updating ... that looks suspiciously like a bug that was fixed a few\nmonths ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Nov 2011 10:14:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error while vacuuming "
},
{
"msg_contents": "Tom,\n\nCurrently we are using version 9.0.1.\n\nWhich version shall we update to? 9.05 or 9.1 ?\n\n~Bhakti\n\nOn Fri, Nov 4, 2011 at 7:44 PM, Tom Lane <[email protected]> wrote:\n\n> Bhakti Ghatkar <[email protected]> writes:\n> > Hi ,\n> > While performing full vacuum we encountered the error below:\n>\n>\n> > INFO: vacuuming \"pg_catalog.pg_index\"\n> > vacuumdb: vacuuming of database \"xxxx\" failed: ERROR: duplicate key\n> value\n> > violates unique constraint \"ccccc\"\n> > DETAIL: Key (indexrelid)=(2678) already exists.\n>\n> > We are using Postgres 9.0.1\n>\n> > Can you please help us out in understanding the cause of this error?\n>\n> Try updating ... that looks suspiciously like a bug that was fixed a few\n> months ago.\n>\n> regards, tom lane\n>\n\nTom,Currently we are using version 9.0.1. \nWhich version shall we update to? 9.05 or 9.1 ?\n~BhaktiOn Fri, Nov 4, 2011 at 7:44 PM, Tom Lane <[email protected]> wrote:\nBhakti Ghatkar <[email protected]> writes:\n\n\n> Hi ,\n> While performing full vacuum we encountered the error below:\n\n\n> INFO: vacuuming \"pg_catalog.pg_index\"\n> vacuumdb: vacuuming of database \"xxxx\" failed: ERROR: duplicate key value\n> violates unique constraint \"ccccc\"\n> DETAIL: Key (indexrelid)=(2678) already exists.\n\n> We are using Postgres 9.0.1\n\n> Can you please help us out in understanding the cause of this error?\n\nTry updating ... that looks suspiciously like a bug that was fixed a few\nmonths ago.\n\n regards, tom lane",
"msg_date": "Tue, 8 Nov 2011 12:03:21 +0530",
"msg_from": "Bhakti Ghatkar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Error while vacuuming"
},
{
"msg_contents": "On Mon, Nov 7, 2011 at 10:33 PM, Bhakti Ghatkar <[email protected]> wrote:\n\n> Tom,\n>\n> Currently we are using version 9.0.1.\n>\n> Which version shall we update to? 9.05 or 9.1 ?\n>\n\n9.0.5 should be compatible with your installed db and contain any bug fixes\nthat have been released. Which isn't to say that you shouldn't test and\nmake a backup before upgrading the binaries on your production server, of\ncourse.\n\n--sam\n\nOn Mon, Nov 7, 2011 at 10:33 PM, Bhakti Ghatkar <[email protected]> wrote:\nTom,Currently we are using version 9.0.1. \nWhich version shall we update to? 9.05 or 9.1 ?9.0.5 should be compatible with your installed db and contain any bug fixes that have been released. Which isn't to say that you shouldn't test and make a backup before upgrading the binaries on your production server, of course.\n--sam",
"msg_date": "Mon, 7 Nov 2011 23:49:15 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error while vacuuming"
}
] |
[
{
"msg_contents": "Hi everybody,\n\nI'm having some issues with PostgreSQL 9.03 running on FreeBSD 8.2 on top\nof VMware ESXi 4.1 U1.\n\nThe problem is query are taking too long, and some times one query \"blocks\"\neverybody else to use the DB as well.\n\nI'm a network administrator, not a DBA, so many things here can be \"newbie\"\nfor you guys, so please be patient. :)\n\nClonning this database to another machine not virtualized, any \"crap\"\nmachine, it runs a way fastar, I've measured one specific \"SELECT\" and at\nthe virtualized system (4GB RAM, 4 processors, SATA disk, virtualized), it\ntook 15minutes!, in the crap machine (2GB RAM, 1 processor, SATA disk, NOT\nvirtualized), it took only 2!!!\n\nI always think the bottleneck is disk I/O as I can see from the vSphere\nperformance view, but the virtual machine is using exclusively the SATA\ndisk with no concurrency with other machines.\n\nhow do you guys deal with virtualization? any tips/recommendations? does\nthat make sense the disk I/O? any other sugestion?\n\nthanks in advance!\n\nLucas.\n\nHi everybody,I'm having some issues with PostgreSQL 9.03 running on FreeBSD 8.2 on top of VMware ESXi 4.1 U1.The problem is query are taking too long, and some times one query \"blocks\" everybody else to use the DB as well.\nI'm a network administrator, not a DBA, so many things here can be \"newbie\" for you guys, so please be patient. :)Clonning this database to another machine not virtualized, any \"crap\" machine, it runs a way fastar, I've measured one specific \"SELECT\" and at the virtualized system (4GB RAM, 4 processors, SATA disk, virtualized), it took 15minutes!, in the crap machine (2GB RAM, 1 processor, SATA disk, NOT virtualized), it took only 2!!!\nI always think the bottleneck is disk I/O as I can see from the vSphere performance view, but the virtual machine is using exclusively the SATA disk with no concurrency with other machines.\n\nhow do you guys deal with virtualization? any tips/recommendations? does that make sense the disk I/O? any other sugestion?thanks in advance! Lucas.",
"msg_date": "Mon, 7 Nov 2011 08:36:10 -0200",
"msg_from": "Lucas Mocellin <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL perform poorly on VMware ESXi"
},
{
"msg_contents": "On 7 Listopad 2011, 11:36, Lucas Mocellin wrote:\n> Hi everybody,\n>\n> I'm having some issues with PostgreSQL 9.03 running on FreeBSD 8.2 on top\n> of VMware ESXi 4.1 U1.\n>\n> The problem is query are taking too long, and some times one query\n> \"blocks\"\n> everybody else to use the DB as well.\n>\n> I'm a network administrator, not a DBA, so many things here can be\n> \"newbie\"\n> for you guys, so please be patient. :)\n>\n> Clonning this database to another machine not virtualized, any \"crap\"\n> machine, it runs a way fastar, I've measured one specific \"SELECT\" and at\n> the virtualized system (4GB RAM, 4 processors, SATA disk, virtualized), it\n> took 15minutes!, in the crap machine (2GB RAM, 1 processor, SATA disk, NOT\n> virtualized), it took only 2!!!\n\nWhat is this \"cloning\" thing? Dump/restore? Something at the\nfilesystem-device level?\n\nMy wild guess is that the autovacuum is not working, thus the database is\nbloated and the cloning removes the bloat.\n\nPost EXPLAIN ANALYZE output of the query for both machines (use\nexplain.depesz.com).\n\nHave you done any benchmarking on the virtualized machine to check the\nbasic I/O performance? A simple \"dd test\", bonnie?\n\nTomas\n\n",
"msg_date": "Mon, 7 Nov 2011 11:54:02 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL perform poorly on VMware ESXi"
},
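A quick way to test the bloat/autovacuum guess from inside the database, before reaching for dd or bonnie, is to look at the dead-tuple counters and the last autovacuum times (available on 9.0):

    select relname,
           n_live_tup,
           n_dead_tup,
           last_autovacuum,
           last_autoanalyze
    from pg_stat_user_tables
    order by n_dead_tup desc
    limit 20;
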
{
"msg_contents": "On Mon, Nov 07, 2011 at 08:36:10AM -0200, Lucas Mocellin wrote:\n> Hi everybody,\n> \n> I'm having some issues with PostgreSQL 9.03 running on FreeBSD 8.2 on top\n> of VMware ESXi 4.1 U1.\n> \n> The problem is query are taking too long, and some times one query \"blocks\"\n> everybody else to use the DB as well.\n> \n> I'm a network administrator, not a DBA, so many things here can be \"newbie\"\n> for you guys, so please be patient. :)\n> \n> Clonning this database to another machine not virtualized, any \"crap\"\n> machine, it runs a way fastar, I've measured one specific \"SELECT\" and at\n> the virtualized system (4GB RAM, 4 processors, SATA disk, virtualized), it\n> took 15minutes!, in the crap machine (2GB RAM, 1 processor, SATA disk, NOT\n> virtualized), it took only 2!!!\n> \n> I always think the bottleneck is disk I/O as I can see from the vSphere\n> performance view, but the virtual machine is using exclusively the SATA\n> disk with no concurrency with other machines.\n> \n> how do you guys deal with virtualization? any tips/recommendations? does\n> that make sense the disk I/O? any other sugestion?\n> \n> thanks in advance!\n> \n> Lucas.\n\nHi Lucas,\n\nVirtualization is not a magic bullet. It has many advantages but also\nhas disadvantages. The resources of the virtual machine are always a\nsubset of the host machine resources. In addition, the second layer of\ndisk I/O indirection through the virtual disk can effectively turn\na sequential I/O pattern into a random I/O pattern with the accompanying\n10:1 decrease in I/O throughput.\n\nI would recommend testing your I/O on your virtual machine.\n\nRegards,\nKen\n",
"msg_date": "Mon, 7 Nov 2011 08:15:10 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL perform poorly on VMware ESXi"
},
{
"msg_contents": "On 07/11/2011 11:36, Lucas Mocellin wrote:\n> Hi everybody,\n> \n> I'm having some issues with PostgreSQL 9.03 running on FreeBSD 8.2 on top\n> of VMware ESXi 4.1 U1.\n\nI hope your hardware is Nehalem-based or newer...\n\n> The problem is query are taking too long, and some times one query \"blocks\"\n> everybody else to use the DB as well.\n\nOk, so multiple users connect to this one database, right?\n\n> I'm a network administrator, not a DBA, so many things here can be \"newbie\"\n> for you guys, so please be patient. :)\n\nFirst, did you configure the server and PostgreSQL at all?\n\nFor FreeBSD, you'll probably need this in sysctl.conf:\n\nvfs.hirunningspace=8388608\nvfs.lorunningspace=6291456\nvfs.read_max=128\n\nand for PostgreSQL, read these:\n\nhttp://www.revsys.com/writings/postgresql-performance.html\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n> I always think the bottleneck is disk I/O as I can see from the vSphere\n> performance view, but the virtual machine is using exclusively the SATA\n> disk with no concurrency with other machines.\n\nI don't see why concurrency with other machines is relevant. Are you\ncomplaining that multiple users accessing a single database are blocked\nwhile one large query is executing or that this one query blocks other VMs?\n\nIf the single query generates a larger amount of IO than your VM host\ncan handle then you're probably out of luck. Virtualization is always\nbad for IO. You might try increasing the amount of memory for the\nvirtual machine in the hope that more data will be cached.\n\n> how do you guys deal with virtualization? any tips/recommendations? does\n> that make sense the disk I/O? any other sugestion?\n\nAre you sure it's IO? Run \"iostat 1\" for 10 seconds while the query is\nexecuting and post the results.",
"msg_date": "Mon, 07 Nov 2011 16:21:08 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL perform poorly on VMware ESXi"
}
] |
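One cheap cross-check of the I/O suggestion above: time the same full-table scan on the virtualized copy and on the non-virtualized copy of the cloned database. The table name below is only a placeholder; any large table from the clone will do.

-- run in psql against both copies of the database
\timing on
SELECT count(*) FROM some_large_table;   -- placeholder table name
-- run it twice: the first (cold) run approximates raw read throughput,
-- the second run shows how much of the table is coming from cache

If the cold-run difference between the two machines roughly matches the 15-minute vs 2-minute gap reported above, that points at virtualized disk I/O rather than planner or configuration issues.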
[
{
"msg_contents": "When I run the following query:\n\nselect questions.id\nfrom questions\njoin (\n select u.id as user_id\n from users as u\n left join scores as s\n on s.user_id = u.id\n) as subquery\non subquery.user_id = questions.user_id;\n\nthe subquery is scanning my entire user table, even though it's restricted \nby the outer query. (My real subquery is much more complicated, of course, \nbut this is the minimal fail case.)\n\nIs this just not a thing the optimizer can do? Are there ways to rewrite \nthis, still as a subquery, that will be smart enough to only produce the one \nrow of subquery that matches questions.user_id?\n\nJay Levitt\n",
"msg_date": "Mon, 07 Nov 2011 16:25:08 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Jay Levitt <[email protected]> writes:\n> When I run the following query:\n> select questions.id\n> from questions\n> join (\n> select u.id as user_id\n> from users as u\n> left join scores as s\n> on s.user_id = u.id\n> ) as subquery\n> on subquery.user_id = questions.user_id;\n\n> the subquery is scanning my entire user table, even though it's restricted \n> by the outer query. (My real subquery is much more complicated, of course, \n> but this is the minimal fail case.)\n\n> Is this just not a thing the optimizer can do?\n\nEvery release since 8.2 has been able to reorder joins in a query\nwritten that way. Probably it just thinks it's cheaper than the\nalternatives.\n\n(Unless you've reduced the collapse_limit variables for some reason?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Nov 2011 16:41:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted? "
},
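For anyone checking the collapse-limit settings Tom mentions, the current values can be inspected (and adjusted per session) like this; 8 is the default for both:

SHOW from_collapse_limit;
SHOW join_collapse_limit;
-- restore the default for the current session only, if they had been lowered:
SET join_collapse_limit = 8;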
{
"msg_contents": "Jay Levitt <[email protected]> wrote:\n> When I run the following query:\n> \n> select questions.id\n> from questions\n> join (\n> select u.id as user_id\n> from users as u\n> left join scores as s\n> on s.user_id = u.id\n> ) as subquery\n> on subquery.user_id = questions.user_id;\n> \n> the subquery is scanning my entire user table, even though it's\n> restricted by the outer query. (My real subquery is much more\n> complicated, of course, but this is the minimal fail case.)\n \nIt's not a fail case -- it's choosing the plan it thinks is cheapest\nbased on your costing parameters and the statistics gathered by the\nlatest ANALYZE of the data.\n \n> Is this just not a thing the optimizer can do? Are there ways to\n> rewrite this, still as a subquery, that will be smart enough to\n> only produce the one row of subquery that matches\n> questions.user_id?\n \nWell, it can certainly produce the plan you seem to want, if it\nlooks less expensive. It kinda did with the following script:\n \ncreate table questions\n (id int not null primary key, user_id int not null);\n\ninsert into questions\n select generate_series(1,100), (random()*1000000)::int;\n\ncreate table users (id int not null primary key);\n\ninsert into users select generate_series(1, 1000000);\n\ncreate table scores\n (id int not null primary key, user_id int not null);\n\ninsert into scores select n, n\n from (select generate_series(1,1000000)) x(n);\n\nvacuum freeze analyze;\n\nexplain analyze\nselect questions.id\nfrom questions\njoin (\n select u.id as user_id\n from users as u\n left join scores as s\n on s.user_id = u.id\n) as subquery\non subquery.user_id = questions.user_id;\n \nHere's the plan I got, which scans the questions and then uses the\nindex to join to the users. It's throwing the result of that into a\nhash table which is then checked from a sequential scan of the\nscores table. If I had made the scores table wider, it might have\ngone from the user table to scores on the index.\n \n Hash Right Join\n (cost=438.23..18614.23 rows=100 width=4)\n (actual time=2.776..161.237 rows=100 loops=1)\n Hash Cond: (s.user_id = u.id)\n -> Seq Scan on scores s\n (cost=0.00..14425.00 rows=1000000 width=4)\n (actual time=0.025..77.876 rows=1000000 loops=1)\n -> Hash\n (cost=436.98..436.98 rows=100 width=8)\n (actual time=0.752..0.752 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 4kB\n -> Nested Loop\n (cost=0.00..436.98 rows=100 width=8)\n (actual time=0.032..0.675 rows=100 loops=1)\n -> Seq Scan on questions\n (cost=0.00..2.00 rows=100 width=8)\n (actual time=0.010..0.042 rows=100 loops=1)\n -> Index Only Scan using users_pkey on users u\n (cost=0.00..4.34 rows=1 width=4)\n (actual time=0.005..0.005 rows=1 loops=100)\n Index Cond: (id = questions.user_id)\n Total runtime: 168.585 ms\n \nIf you want help figuring out whether it is choosing the fastest\nplan, and how to get it do better if it is not, please read this\npage and post the relevant information:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Mon, 07 Nov 2011 15:53:33 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> If I had made the scores table wider, it might have gone from the\n> user table to scores on the index.\n \nBah. I just forgot to put an index on scores.user_id. With that\nindex available it did what you were probably expecting -- seq scan\non questions, nested loop index scan on users, nested loop index\nscan on scores.\n \nYou weren't running you test with just a few rows in each table and\nexpecting the same plan to be generated as for tables with a lot of\nrows, were you?\n \n-Kevin\n",
"msg_date": "Mon, 07 Nov 2011 16:51:54 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
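For reference, the index Kevin says he forgot would look like this in his test schema, followed by a fresh ANALYZE so the planner sees it with up-to-date statistics:

CREATE INDEX scores_user_id_idx ON scores (user_id);
ANALYZE scores;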
{
"msg_contents": "Kevin Grittner wrote:\n> \"Kevin Grittner\"<[email protected]> wrote:\n>\n>> If I had made the scores table wider, it might have gone from the\n>> user table to scores on the index.\n>\n> Bah. I just forgot to put an index on scores.user_id. With that\n> index available it did what you were probably expecting -- seq scan\n> on questions, nested loop index scan on users, nested loop index\n> scan on scores.\n>\n> You weren't running you test with just a few rows in each table and\n> expecting the same plan to be generated as for tables with a lot of\n> rows, were you?\n\nNo, we're a startup - we only have 2,000 users and 17,000 scores! We don't \nneed test databases yet...\n\nBut I just realized something I'd completely forgot (or blocked) - scores is \na view. And views don't have indexes. The underlying tables are ultimately \nindexed by user_id, but I can believe that Postgres doesn't think that's a \ncheap way to do it - especially since we're still using stock tuning \nsettings (I know) so its costs are all screwed up.\n\nAnd yep! When I do a CREATE TABLE AS from that view, and add an index on \nuser_id, it works just as I'd like. I've been meaning to persist that view \nanyway, so that's what I'll do.\n\nThanks for the push in the right direction..\n\nJay\n",
"msg_date": "Mon, 07 Nov 2011 18:19:32 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Jay Levitt wrote:\n> And yep! When I do a CREATE TABLE AS from that view, and add an index on\n> user_id, it works just as I'd like.\n\nOr not. Feel free to kick me back over to pgsql-novice, but I don't get why \nthe GROUP BY in this subquery forces it to scan the entire users table (seq \nscan here, index scan on a larger table) when there's only one row in users \nthat can match:\n\ncreate table questions (\n id int not null primary key,\n user_id int not null\n);\ninsert into questions\n select generate_series(1,1100), (random()*2000)::int;\n\ncreate table users (\n id int not null primary key\n);\ninsert into users select generate_series(1, 2000);\n\nvacuum freeze analyze;\n\nexplain analyze\nselect questions.id\nfrom questions\njoin (\n select u.id\n from users as u\n group by u.id\n) as s\non s.id = questions.user_id\nwhere questions.id = 1;\n\n\nHash Join (cost=42.28..89.80 rows=2 width=4) (actual time=0.857..1.208 \nrows=1 loops=1)\n Hash Cond: (u.id = questions.user_id)\n -> HashAggregate (cost=34.00..54.00 rows=2000 width=4) (actual \ntime=0.763..1.005 rows=2000 loops=1)\n -> Seq Scan on users u (cost=0.00..29.00 rows=2000 width=4) \n(actual time=0.003..0.160 rows=2000 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=8) (actual time=0.015..0.015 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using questions_pkey on questions (cost=0.00..8.27 \nrows=1 width=8) (actual time=0.012..0.013 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 1.262 ms\n\nThis is on patched 9.0.5 built earlier today. The real query has \naggregates, so it really does need GROUP BY.. I think..\n\nJay\n",
"msg_date": "Mon, 07 Nov 2011 20:56:42 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Jay Levitt <[email protected]> wrote:\n \n> I don't get why the GROUP BY in this subquery forces it to scan\n> the entire users table (seq scan here, index scan on a larger\n> table) when there's only one row in users that can match:\n \n> explain analyze\n> select questions.id\n> from questions\n> join (\n> select u.id\n> from users as u\n> group by u.id\n> ) as s\n> on s.id = questions.user_id\n> where questions.id = 1;\n \n> Total runtime: 1.262 ms\n \nAre you sure there's a plan significantly faster than 1.3 ms?\n \nThat said, there might be some room for an optimization which pushes\nthat test into the query with the \"group by\" clause. I don't know\nif there's a problem with that which I'm missing, the construct was\njudged to be too rare to be worth the cost of testing for it, or\nit's just that nobody has yet gotten to it.\n \n-Kevin\n",
"msg_date": "Wed, 09 Nov 2011 09:15:35 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "On Wed, Nov 9, 2011 at 9:15 AM, Kevin Grittner\n<[email protected]> wrote:\n> Jay Levitt <[email protected]> wrote:\n>\n>> I don't get why the GROUP BY in this subquery forces it to scan\n>> the entire users table (seq scan here, index scan on a larger\n>> table) when there's only one row in users that can match:\n>\n>> explain analyze\n>> select questions.id\n>> from questions\n>> join (\n>> select u.id\n>> from users as u\n>> group by u.id\n>> ) as s\n>> on s.id = questions.user_id\n>> where questions.id = 1;\n>\n>> Total runtime: 1.262 ms\n>\n> Are you sure there's a plan significantly faster than 1.3 ms?\n\nWell, this may not fit the OP's 'real' query, but the inner subquery\nis probably better written as a semi-join (WHERE EXISTS).\n\nmerlin\n",
"msg_date": "Wed, 9 Nov 2011 13:50:28 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
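A sketch of the semi-join spelling Merlin suggests, applied to the simplified test query. It is equivalent here only because the subquery groups by the users primary key, i.e. returns each id at most once:

SELECT questions.id
FROM questions
WHERE questions.id = 1
  AND EXISTS (SELECT 1 FROM users AS u WHERE u.id = questions.user_id);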
{
"msg_contents": "Merlin Moncure <[email protected]> wrote:\n \n> Well, this may not fit the OP's 'real' query\n \nRight, if I recall correctly, the OP said it was simplified down as\nfar as it could be and still have the issue show.\n \n> but the inner subquery is probably better written as a semi-join\n> (WHERE EXISTS).\n \nWell, that doesn't mean that we shouldn't try to optimize perfectly\nvalid alternative spellings. Having each logically equivalent way\nto write a query generate a different plan is far worse than hints\nthat are more clearly written. ;-)\n \n-Kevin\n",
"msg_date": "Wed, 09 Nov 2011 14:20:00 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Jay Levitt<[email protected]> wrote:\n>\n>> I don't get why the GROUP BY in this subquery forces it to scan\n>> the entire users table (seq scan here, index scan on a larger\n>> table) when there's only one row in users that can match:\n\n> Are you sure there's a plan significantly faster than 1.3 ms?\n\nYep! Watch this:\n\ndrop schema if exists jaytest cascade;\ncreate schema jaytest;\nset search_path to jaytest;\n\ncreate table questions (\n id int not null primary key,\n user_id int not null\n);\ninsert into questions\n select generate_series(1,1100), (random()*2000000)::int;\n\ncreate table users (\n id int not null primary key\n);\ninsert into users select generate_series(1, 2000000);\n\nvacuum freeze analyze;\n\nexplain analyze\nselect questions.id\nfrom questions\njoin (\n select u.id\n from users as u\n group by u.id\n) as s\non s.id = questions.user_id\nwhere questions.id = 1;\n\n-----------------------\n Merge Join (cost=8.28..90833.02 rows=1818 width=4) (actual \ntime=888.787..888.790 rows=1 loops=1)\n Merge Cond: (u.id = questions.user_id)\n -> Group (cost=0.00..65797.47 rows=2000000 width=4) (actual \ntime=0.017..735.509 rows=1747305 loops=1)\n -> Index Scan using users_pkey on users u (cost=0.00..60797.47 \nrows=2000000 width=4) (actual time=0.015..331.990 rows=1747305 loops=1)\n -> Materialize (cost=8.28..8.29 rows=1 width=8) (actual \ntime=0.013..0.015 rows=1 loops=1)\n -> Sort (cost=8.28..8.28 rows=1 width=8) (actual \ntime=0.012..0.013 rows=1 loops=1)\n Sort Key: questions.user_id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using questions_pkey on questions \n(cost=0.00..8.27 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 888.832 ms\n(11 rows)\n\nexplain analyze\nselect questions.id\nfrom questions\njoin (\n select u.id\n from users as u\n) as s\non s.id = questions.user_id\nwhere questions.id = 1;\n\n-----------------------\n Nested Loop (cost=0.00..16.77 rows=1 width=4) (actual time=0.019..0.021 \nrows=1 loops=1)\n -> Index Scan using questions_pkey on questions (cost=0.00..8.27 \nrows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)\n Index Cond: (id = 1)\n -> Index Scan using users_pkey on users u (cost=0.00..8.49 rows=1 \nwidth=4) (actual time=0.007..0.007 rows=1 loops=1)\n Index Cond: (u.id = questions.user_id)\n Total runtime: 0.045 ms\n(6 rows)\n\n> That said, there might be some room for an optimization which pushes\n> that test into the query with the \"group by\" clause. I don't know\n> if there's a problem with that which I'm missing, the construct was\n> judged to be too rare to be worth the cost of testing for it, or\n> it's just that nobody has yet gotten to it.\n\nAnyone have more insights on whether this is hard to optimize or simply \nnot-yet-optimized? And if the latter, where might I start looking? (Not \nthat you -really- want me to submit a patch; my C has regressed to the \"try \nan ampersand. OK, try an asterisk.\" level...)\n\nJay\n",
"msg_date": "Wed, 09 Nov 2011 15:39:14 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Merlin Moncure<[email protected]> wrote:\n>\n>> Well, this may not fit the OP's 'real' query\n>\n> Right, if I recall correctly, the OP said it was simplified down as\n> far as it could be and still have the issue show.\n>\n>> but the inner subquery is probably better written as a semi-join\n>> (WHERE EXISTS).\n\nKevin's right. The real query involves several SQL and PL/pgsql functions \n(all now inlineable), custom aggregates, a union or two and a small coyote. \n I could post it, but that feels like \"Please write my code for me\". \nStill, if you really want to...\n\nMeanwhile, it's good for me to learn how the planner sees my queries and how \nI can best state them. I assume this is me not understanding something \nabout restrictions across group-by nodes.\n\nIf the query was more like\n\nselect questions.id\nfrom questions\njoin (\n select sum(u.id)\n from users as u\n group by u.id\n) as s\non s.id = questions.user_id\nwhere questions.id = 1;\n\nwould you no longer be surprised that it scanned all user rows? I.E. is the \n\"group by\" a red herring, which usually wouldn't be present without an \naggregate, and the real problem is that the planner can't restrict aggregates?\n\nThis comment in planagg.c may be relevant; I'm not doing min/max, but is it \nstill true that GROUP BY always looks at all the rows, period?\n\nvoid\npreprocess_minmax_aggregates(PlannerInfo *root, List *tlist)\n...\n/* We don't handle GROUP BY or windowing, because our current\n* implementations of grouping require looking at all the rows anyway,\n*/\n",
"msg_date": "Thu, 10 Nov 2011 08:52:23 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Jay Levitt <[email protected]> writes:\n> If the query was more like\n\n> select questions.id\n> from questions\n> join (\n> select sum(u.id)\n> from users as u\n> group by u.id\n> ) as s\n> on s.id = questions.user_id\n> where questions.id = 1;\n\n> would you no longer be surprised that it scanned all user rows?\n\nDon't hold your breath waiting for that to change. To do what you're\nwishing for, we'd have to treat the GROUP BY subquery as if it were an\ninner indexscan, and push a join condition into it. That's not even\npossible today. It might be possible after I get done with the\nparameterized-path stuff I've been speculating about for a couple of\nyears now; but I suspect that even if it is possible, we won't do it\nfor subqueries because of the planner-performance hit we'd take from\nrepeatedly replanning the same subquery.\n\nI'd suggest rephrasing the query to do the join underneath the GROUP BY.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Nov 2011 10:22:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted? "
},
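One hedged reading of Tom's suggestion, applied to the minimal test case (sum(u.id) stands in for whatever aggregate the real query computes): do the restricting join first and aggregate on top, instead of aggregating the whole users table inside the subquery.

SELECT q.id, sum(u.id) AS agg          -- placeholder aggregate
FROM questions AS q
JOIN users AS u ON u.id = q.user_id
WHERE q.id = 1
GROUP BY q.id;

With the restriction applied before grouping, the planner can use the primary-key indexes on questions and users rather than grouping every user row.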
{
"msg_contents": "> Don't hold your breath waiting for that to change. To do what you're\n> wishing for, we'd have to treat the GROUP BY subquery as if it were an\n> inner indexscan, and push a join condition into it. That's not even\n> possible today.\n\nThanks! Knowing \"that's not a thing\" helps; we'll just have to rephrase the \nquery.\n\nJay\n",
"msg_date": "Thu, 10 Nov 2011 10:42:53 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "On 10/11/11 09:39, Jay Levitt wrote:\n> Kevin Grittner wrote:\n>> Jay Levitt<[email protected]> wrote:\n>>\n>>> I don't get why the GROUP BY in this subquery forces it to scan\n>>> the entire users table (seq scan here, index scan on a larger\n>>> table) when there's only one row in users that can match:\n>\n>> Are you sure there's a plan significantly faster than 1.3 ms?\n>\n> Yep! Watch this:\n>\n> drop schema if exists jaytest cascade;\n> create schema jaytest;\n> set search_path to jaytest;\n>\n> create table questions (\n> id int not null primary key,\n> user_id int not null\n> );\n> insert into questions\n> select generate_series(1,1100), (random()*2000000)::int;\n>\n> create table users (\n> id int not null primary key\n> );\n> insert into users select generate_series(1, 2000000);\n>\n> vacuum freeze analyze;\n>\n> explain analyze\n> select questions.id\n> from questions\n> join (\n> select u.id\n> from users as u\n> group by u.id\n> ) as s\n> on s.id = questions.user_id\n> where questions.id = 1;\n>\n> -----------------------\n> Merge Join (cost=8.28..90833.02 rows=1818 width=4) (actual \n> time=888.787..888.790 rows=1 loops=1)\n> Merge Cond: (u.id = questions.user_id)\n> -> Group (cost=0.00..65797.47 rows=2000000 width=4) (actual \n> time=0.017..735.509 rows=1747305 loops=1)\n> -> Index Scan using users_pkey on users u \n> (cost=0.00..60797.47 rows=2000000 width=4) (actual time=0.015..331.990 \n> rows=1747305 loops=1)\n> -> Materialize (cost=8.28..8.29 rows=1 width=8) (actual \n> time=0.013..0.015 rows=1 loops=1)\n> -> Sort (cost=8.28..8.28 rows=1 width=8) (actual \n> time=0.012..0.013 rows=1 loops=1)\n> Sort Key: questions.user_id\n> Sort Method: quicksort Memory: 25kB\n> -> Index Scan using questions_pkey on questions \n> (cost=0.00..8.27 rows=1 width=8) (actual time=0.006..0.006 rows=1 \n> loops=1)\n> Index Cond: (id = 1)\n> Total runtime: 888.832 ms\n> (11 rows)\n>\n> explain analyze\n> select questions.id\n> from questions\n> join (\n> select u.id\n> from users as u\n> ) as s\n> on s.id = questions.user_id\n> where questions.id = 1;\n>\n> -----------------------\n> Nested Loop (cost=0.00..16.77 rows=1 width=4) (actual \n> time=0.019..0.021 rows=1 loops=1)\n> -> Index Scan using questions_pkey on questions (cost=0.00..8.27 \n> rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)\n> Index Cond: (id = 1)\n> -> Index Scan using users_pkey on users u (cost=0.00..8.49 rows=1 \n> width=4) (actual time=0.007..0.007 rows=1 loops=1)\n> Index Cond: (u.id = questions.user_id)\n> Total runtime: 0.045 ms\n> (6 rows)\n>\n>> That said, there might be some room for an optimization which pushes\n>> that test into the query with the \"group by\" clause. I don't know\n>> if there's a problem with that which I'm missing, the construct was\n>> judged to be too rare to be worth the cost of testing for it, or\n>> it's just that nobody has yet gotten to it.\n>\n> Anyone have more insights on whether this is hard to optimize or \n> simply not-yet-optimized? And if the latter, where might I start \n> looking? (Not that you -really- want me to submit a patch; my C has \n> regressed to the \"try an ampersand. OK, try an asterisk.\" level...)\n>\n> Jay\n>\nMinor note:\n\n'PRIMARY KEY' gives you a 'NOT NULL' constraint for free.\n",
"msg_date": "Sat, 12 Nov 2011 22:28:45 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
},
{
"msg_contents": "Tom Lane wrote:\n> Jay Levitt<[email protected]> writes:\n>> If the query was more like\n>\n>> select questions.id\n>> from questions\n>> join (\n>> select sum(u.id)\n>> from users as u\n>> group by u.id\n>> ) as s\n>> on s.id = questions.user_id\n>> where questions.id = 1;\n>\n>> would you no longer be surprised that it scanned all user rows?\n>\n> I'd suggest rephrasing the query to do the join underneath the GROUP BY.\n\nWell, my real goal is to have that inner query in a set-returning function \nthat gives a computed table of other users relative to the current user, and \nthen be able to JOIN that with other things and ORDER BY it:\n\nselect questions.id\nfrom questions\njoin (select * from relevance(current_user)) as r\non r.id = questions.user_id\nwhere questions.id = 1;\n\nI assume there's no way for that function (in SQL or PL/pgSQL) to reach to \nthe upper node and say \"do that join again here\", or force the join order \nfrom down below? I can't imagine how there could be, but never hurts to ask. \n Right now, our workaround is to pass the joined target user as a function \nparameter and do the JOIN in the function, but that means we have to put the \nfunction in the select list, else we hit the lack of LATERAL support:\n\n -- This would need LATERAL\n\nselect questions.id\nfrom questions\njoin (\n select * from relevance(current_user, questions.user_id)) as r\n)\non r.id = questions.user_id\nwhere questions.id = 1;\n\n -- This works but has lots of row-at-a-time overhead\n\nselect questions.id, (\n select * from relevance(current_user, questions.user_id)\n) as r\nfrom questions\nwhere questions.id = 1;\n\nAgain, just checking if there's a solution I'm missing. I know the \noptimizer is only asymptotically approaching optimal!\n\nJay\n",
"msg_date": "Wed, 16 Nov 2011 09:06:34 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery in a JOIN not getting restricted?"
}
] |
[
{
"msg_contents": "Hi Everyone,\n\nI recently saw a crash on one of our databases, and I was wondering if this\nmight be an indication of something with WAL that might be unexpectedly\ncreating more files than it needs to?\n\nNov 5 16:18:27 localhost postgres[25092]: [111-1] 2011-11-05 16:18:27.524\nPDT [user=slony,db=uk dbhub.com <http://dbhub.sac.iparadigms.com/>(35180)\nPID:25092 XID:2142895751]PANIC: could not write to file\n\"pg_xlog/xlogtemp.25092\": No space left on device\nNov 5 16:18:27 localhost postgres[25092]: [111-2] 2011-11-05 16:18:27.524\nPDT [user=slony,db=uk dbhub.com <http://dbhub.sac.iparadigms.com/>(35180)\nPID:25092 XID:2142895751]STATEMENT: select \"_sac_uk\".forwardConfirm(2, 4, '\n5003717188', '2011-11-05 16:18:26.977112');\nNov 5 16:18:27 localhost postgres[32121]: [7-1] 2011-11-05 16:18:27.531\nPDT [user=,db= PID:32121 XID:0]LOG: server process (PID 25092) was\nterminated by signal 6: Aborted\nNov 5 16:18:27 localhost postgres[32121]: [8-1] 2011-11-05 16:18:27.531\nPDT [user=,db= PID:32121 XID:0]LOG: terminating any other active server\nprocesses\n\nIf you look at this graph (http://cl.ly/2y0W27330t3o2J281H3K), the\npartition actually fills up, and the logs show that postgres crashed.\n After postgres crashed, it automatically restarted, cleared out its WAL\nfiles, and began processing things again at 4:30PM.\n\n From the graph, it looks like a vacuum on table m_dg_read finished at\n4:08PM, which might explain why the downward slope levels off for a few\nminutes:\n\n> Nov 5 16:08:03 localhost postgres[18741]: [20-1] 2011-11-05 16:08:03.400\n> PDT [user=,db= PID:18741 XID:0]LOG: automatic vacuum of table\n> \"uk.public.m_dg_read\": index scans: 1\n> Nov 5 16:08:03 localhost postgres[18741]: [20-2] pages: 0 removed,\n> 65356 remain\n> Nov 5 16:08:03 localhost postgres[18741]: [20-3] tuples: 31770\n> removed, 1394263 remain\n> Nov 5 16:08:03 localhost postgres[18741]: [20-4] system usage: CPU\n> 2.08s/5.35u sec elapsed 619.39 sec\n\n\nLooks like right afterwards, it got started on table m_object, which\nfinished at 4:18PM:\n\n> Nov 5 16:18:19 localhost postgres[18686]: [9-1] 2011-11-05 16:18:19.448\n> PDT [user=,db= PID:18686 XID:0]LOG: automatic vacuum of table\n> \"uk.public.m_object\": index scans: 1\n> Nov 5 16:18:19 localhost postgres[18686]: [9-2] pages: 0 removed,\n> 152862 remain\n> Nov 5 16:18:19 localhost postgres[18686]: [9-3] tuples: 17084\n> removed, 12455761 remain\n> Nov 5 16:18:19 localhost postgres[18686]: [9-4] system usage: CPU\n> 4.55s/15.09u sec elapsed 1319.98 sec\n\n\nIt could very well be the case that upon the finish of m_object's vacuum,\nanother vacuum was beginning, and it eventually just crashed because there\nwas no room for another vacuum to finish.\n\nWe encountered a situation like this last summer on 7/4/2010 for a\ndifferent database cluster -- a big vacuum-for-wraparound on at 15GB table\nfilled the pg_xlog partition--this is how we started monitoring the pg_xlog\nfile size and wraparound countdown. Seems like there was some sort of\nvacuum-for-wraparound process happening during the time of this crash, as\nwe also track the XID to see when we should expect a VACUUM FREEZE (\nhttp://cl.ly/3s1S373I0l0v3E171Z0V).\n\nSome configs:\ncheckpoint_segments=16\nwal_buffers=8MB\n#archive_mode=off\ncheckpoint_completion_target=0.9\n\nPostgres version is 8.4.5\n\nNote also that the pg_xlog partition is 9.7GB. 
No other apps run on the\nmachine besides pgbouncer, so it's highly unlikely that files are written\nto this partition by another process. Also, our five largest tables are\nthe following:\ngm3_load_times | 2231 MB\nm_object_paper | 1692 MB\nm_object | 1192 MB\nm_report_stats | 911 MB\ngm3_mark | 891 MB\n\nMy biggest question is: we know from the docs that there should be no more\nthan (2 + checkpoint_completion_target) * checkpoint_segments + 1 files.\n For us, that would mean no more than 48 files, which equates to 384MB--far\nlower than the 9.7GB partition size. **Why would WAL use up so much disk\nspace?**\n\nThanks for reading, and thanks in advance for any help you may provide.\n--Richard",
"msg_date": "Mon, 7 Nov 2011 14:18:46 -0800",
"msg_from": "Richard Yen <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL partition filling up after high WAL activity"
},
{
"msg_contents": "On 11/07/2011 05:18 PM, Richard Yen wrote:\n> My biggest question is: we know from the docs that there should be no \n> more than (2 + checkpoint_completion_target) * checkpoint_segments + 1 \n> files. For us, that would mean no more than 48 files, which equates \n> to 384MB--far lower than the 9.7GB partition size. **Why would WAL \n> use up so much disk space?**\n>\n\nThat's only true if things are operating normally. There are at least \ntwo ways this can fail to be a proper upper limit on space used:\n\n1) You are archiving to a second system, and the archiving isn't keeping \nup. Things that haven't been archived can't be re-used, so more disk \nspace is used.\n\n2) Disk I/O is slow, and the checkpoint writes take a significant period \nof time. The internal scheduling assumes each individual write will \nhappen without too much delay. That assumption can easily be untrue on \na busy system. The worst I've seen now are checkpoints that take 6 \nhours to sync, where the time is supposed to be a few seconds. Disk \nspace in that case was a giant multiple of checkpoint_segments. (The \nsource of that problem is very much improved in PostgreSQL 9.1)\n\nThe info needed to figure out which category you're in would appear \nafter tuning log_checkpoints on in the postgresql.conf ; you only need \nto reload the server config after that, doesn't require a restart. I \nwould guess you have realy long sync times there.\n\nAs for what to do about it, checkpoint_segments=16 is a low setting. \nYou might as well set it to a large number, say 128, and let checkpoints \nget driven by time instead. The existing limit isn't working \neffectively anyway, and having more segments lets the checkpoint \nspreading code work more evenly.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\n\nOn 11/07/2011 05:18 PM, Richard Yen wrote:\n\n\n\n\nMy biggest question is: we know from the docs that there should\nbe no more than (2\n+ checkpoint_completion_target)\n* checkpoint_segments +\n1 files. For us, that would mean no more than 48 files, which equates\nto 384MB--far lower than the 9.7GB partition size. **Why would WAL use\nup so much disk space?**\n\n\n\n\n\n\nThat's only true if things are operating normally. There are at least\ntwo ways this can fail to be a proper upper limit on space used:\n\n1) You are archiving to a second system, and the archiving isn't\nkeeping up. Things that haven't been archived can't be re-used, so\nmore disk space is used.\n\n2) Disk I/O is slow, and the checkpoint writes take a significant\nperiod of time. The internal scheduling assumes each individual write\nwill happen without too much delay. That assumption can easily be\nuntrue on a busy system. The worst I've seen now are checkpoints that\ntake 6 hours to sync, where the time is supposed to be a few seconds. \nDisk space in that case was a giant multiple of checkpoint_segments. \n(The source of that problem is very much improved in PostgreSQL 9.1)\n\nThe info needed to figure out which category you're in would appear\nafter tuning log_checkpoints on in the postgresql.conf ; you only need\nto reload the server config after that, doesn't require a restart. I\nwould guess you have realy long sync times there.\n\nAs for what to do about it, checkpoint_segments=16 is a low setting. \nYou might as well set it to a large number, say 128, and let\ncheckpoints get driven by time instead. 
The existing limit isn't\nworking effectively anyway, and having more segments lets the\ncheckpoint spreading code work more evenly.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Wed, 09 Nov 2011 11:06:46 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL partition filling up after high WAL activity"
},
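A small sketch of the diagnostic step described above: turn on checkpoint logging (a reload is enough) and watch the cluster-wide checkpoint counters; unusually long "sync" times in the resulting log lines are the symptom Greg describes.

-- in postgresql.conf: log_checkpoints = on   (then reload)
SELECT pg_reload_conf();
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
FROM pg_stat_bgwriter;

Sampling that query before and after the busy period shows whether checkpoints are being requested far more often than the timed schedule expects.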
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 11/09/2011 05:06 PM, Greg Smith wrote:\n> On 11/07/2011 05:18 PM, Richard Yen wrote:\n>> My biggest question is: we know from the docs that there should be no\n>> more than (2 + checkpoint_completion_target) * checkpoint_segments + 1\n>> files. For us, that would mean no more than 48 files, which equates\n>> to 384MB--far lower than the 9.7GB partition size. **Why would WAL\n>> use up so much disk space?**\n>>\n> \n> That's only true if things are operating normally. There are at least\n> two ways this can fail to be a proper upper limit on space used:\n> \n> 1) You are archiving to a second system, and the archiving isn't keeping\n> up. Things that haven't been archived can't be re-used, so more disk\n> space is used.\n> \n> 2) Disk I/O is slow, and the checkpoint writes take a significant period\n> of time. The internal scheduling assumes each individual write will\n> happen without too much delay. That assumption can easily be untrue on\n> a busy system. The worst I've seen now are checkpoints that take 6\n> hours to sync, where the time is supposed to be a few seconds. Disk\n> space in that case was a giant multiple of checkpoint_segments. (The\n> source of that problem is very much improved in PostgreSQL 9.1)\n> \n\n\nHello\n\nWe have a similar case in june but we did not find the cause of our\nproblem. More details and information:\nhttp://archives.postgresql.org/pgsql-docs/2011-06/msg00007.php\n\nYour explanation in 2) sounds like a good candidate for the problem we\nhad. As I said in june, I think we need to improve the documentation in\nthis area. A note in the documentation about what you have explained in\n2) with maybe some hints about how to find out if this is happening will\nbe a great improvement.\n\nWe did not understand why we experienced this problem in june when\ncreating a GIN index on a tsvector column. But we found out that a lot\nof the tsvector data was generated from \"garbage\" data (base64 encoding\nof huge attachments). When we generated the right tsvector data, the\ncreation of the GIN index ran smoothly and the problem with extra WAL\nfiles disappeared.\n\nPS.- In our case, the disk space used by all the extra WAL files was\nalmost the equivalent to the 17GB of our GIN index.\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.10 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n\niEYEARECAAYFAk688LoACgkQBhuKQurGihTbvQCfaSBdYNF2oOtErcx/e4u0Zw1J\npLIAn2Ztdbuz33es2uw8ddSIjj8UXe3s\n=olkD\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 11 Nov 2011 10:54:11 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL partition filling up after high WAL activity"
},
{
"msg_contents": "On 11/11/2011 04:54 AM, Rafael Martinez wrote:\n> Your explanation in 2) sounds like a good candidate for the problem we\n> had. As I said in june, I think we need to improve the documentation in\n> this area. A note in the documentation about what you have explained in\n> 2) with maybe some hints about how to find out if this is happening will\n> be a great improvement.\n> \n\nA new counter was added to pg_stat_bgwriter in PostgreSQL 9.1 that \ntracks when the problem I described happens. It's hard to identify it \nspecifically without a source code change of some sort. Initially I \nadded new logging to the server code to identify the issue before the \nnew counter was there. The only thing you can easily look at that tends \nto correlate well with the worst problems here is the output from \nturning log_checkpoint on. Specifically, the \"sync\" times going way up \nis a sign there's a problem with write speed.\n\nAs for the documentation, not much has really changed from when you \nbrought this up on the docs list. The amount of WAL files that can be \ncreated by a \"short-term peak\" is unlimited, which is why there's no \nbetter limit listed than that. Some of the underlying things that make \nthe problem worse are operating system level issues, not ones in the \ndatabase itself; the PostgreSQL documentation doesn't try to wander too \nfar into that level. There are also a large number of things you can do \nat the application level that will generate a lot of WAL activity. It \nwould be impractical to list all of them in the checkpoint documentation \nthough.\n\nOn reviewing this section of the docs again, one thing that we could do \nis make the \"WAL Configuration\" section talk more about log_checkpoints \nand interpreting its output. Right now there's no mention of that \nparameter in the section that talks about parameters to configure; there \nreally should be.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Sat, 12 Nov 2011 01:31:16 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL partition filling up after high WAL activity"
}
] |
[
{
"msg_contents": "I have a question on how the analyzer works in this type of scenario.\r\n\r\nWe calculate some results and COPY INTO some partitioned tables, which we use some selects to aggregate the data back out into reports. Everyone once in a while the aggregation step picks a bad plan due to stats on the tables that were just populated. Updating the stats and rerunning the query seems to solve the problem, this only happens if we enable nested loop query plans.\r\n\r\nWhich leads me to a few questions:\r\n\r\nAssumption: that starts aren't getting created fast enough and then the query planner then picks a bad plan since we query the tables shortly after being populated, so it decided to use a nested loop on a large set of results incorrectly.\r\n\r\n- if there are no stats on the table how does the query planner identify the best query plan?\r\n- we have tried really aggressive auto_analyze settings down to .001, so basically any insert will get the analyze running with no luck.\r\n- will an analyze block on update to the statistics tables, which makes me wonder if we are updating too often?\r\n\r\nThe other option is just to analyze each table involved in the query after the insert, but that seems a bit counterproductive.\r\n\r\nThoughts?\r\n_______________________________________________________________________________________________\r\n| John W. Strange | Vice President | Global Commodities Technology\r\n| J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333\r\n| [email protected]<mailto:[email protected]> | jpmorgan.com\r\n\r\n\r\n\r\nThis email is confidential and subject to important disclaimers and\r\nconditions including on offers for the purchase or sale of\r\nsecurities, accuracy and completeness of information, viruses,\r\nconfidentiality, legal privilege, and legal entity disclaimers,\r\navailable at http://www.jpmorgan.com/pages/disclosures/email. \nI have a question on how the analyzer works in this type of scenario. We calculate some results and COPY INTO some partitioned tables, which we use some selects to aggregate the data back out into reports. Everyone once in a while the aggregation step picks a bad plan due to stats on the tables that were just populated. Updating the stats and rerunning the query seems to solve the problem, this only happens if we enable nested loop query plans. Which leads me to a few questions: Assumption: that starts aren’t getting created fast enough and then the query planner then picks a bad plan since we query the tables shortly after being populated, so it decided to use a nested loop on a large set of results incorrectly. - if there are no stats on the table how does the query planner identify the best query plan?- we have tried really aggressive auto_analyze settings down to .001, so basically any insert will get the analyze running with no luck.- will an analyze block on update to the statistics tables, which makes me wonder if we are updating too often? The other option is just to analyze each table involved in the query after the insert, but that seems a bit counterproductive. Thoughts?_______________________________________________________________________________________________| John W. Strange | Vice President | Global Commodities Technology | J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333| [email protected] | jpmorgan.com",
"msg_date": "Tue, 8 Nov 2011 17:33:09 -0500",
"msg_from": "\"Strange, John W\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trying to understand Stats/Query planner issue"
},
{
"msg_contents": "\"Strange, John W\" <[email protected]> writes:\n> I have a question on how the analyzer works in this type of scenario.\n> We calculate some results and COPY INTO some partitioned tables, which we use some selects to aggregate the data back out into reports. Everyone once in a while the aggregation step picks a bad plan due to stats on the tables that were just populated. Updating the stats and rerunning the query seems to solve the problem, this only happens if we enable nested loop query plans.\n\nWell, even if auto-analyze launches instantly after you commit the\ninsertions (which it won't), it's going to take time to scan the table\nand then commit the updates to pg_statistic. So there is always going\nto be some window where queries will get planned with obsolete\ninformation. If you're inserting enough data to materially change the\nstatistics of a table, and you need to query that table right away,\ndoing a manual ANALYZE rather than waiting for auto-analyze is\nrecommended.\n\n> The other option is just to analyze each table involved in the query after the insert, but that seems a bit counterproductive.\n\nWhy would you think that? This type of scenario is exactly why ANALYZE\nisn't deprecated as a user command.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Nov 2011 11:17:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to understand Stats/Query planner issue "
}
] |
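A minimal sketch of the manual-ANALYZE approach Tom recommends, with placeholder names for the partition and the input file:

COPY results_2011_11 FROM '/path/to/batch.csv' WITH CSV;   -- hypothetical partition and file
ANALYZE results_2011_11;   -- refresh statistics before the aggregation queries run
-- now run the report/aggregation queries against the freshly analyzed partition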
[
{
"msg_contents": "Folks,\n\nAfter having some production issues, I did some testing and it seems\nthat any SQL function declared STRICT will never inline. As a result,\nit won't work with either indexes (on the underlying predicate) or\npartitioning.\n\nThis seems like a horrible gotcha for our users. At the very least I'd\nlike to document it (in CREATE FUNCTION, presumably), but it would be\nbetter to fix it. Thoughts?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 08 Nov 2011 15:29:03 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "STRICT SQL functions never inline"
},
{
"msg_contents": "On Tuesday, November 08, 2011 15:29:03 Josh Berkus wrote:\n> Folks,\n> \n> After having some production issues, I did some testing and it seems\n> that any SQL function declared STRICT will never inline. As a result,\n> it won't work with either indexes (on the underlying predicate) or\n> partitioning.\n> \n> This seems like a horrible gotcha for our users. At the very least I'd\n> like to document it (in CREATE FUNCTION, presumably), but it would be\n> better to fix it. Thoughts?\nI am all for documenting it somewhere. There were lots of people hit by it in \nthe past - e.g. the postgis folks.\nIts not so easy to fix though. The problem is that straight inlining would \nchange the behaviour because suddenly the expression might not return NULL \nanymore even though one of the parameters is NULL. Or even cause more problems \nbecause the content wasn't prepared to handle NULLs.\nIt would be possible to inline a CASE $1 IS NULL OR $2 IS NULL .... THEN NULL \nELSE orig_expression END but that would be usefull in far fewer cases because \nit won't help much in most cases and actually might hurt performance in some.\n\nAndres\n",
"msg_date": "Wed, 09 Nov 2011 01:25:36 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: STRICT SQL functions never inline"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> After having some production issues, I did some testing and it seems\n> that any SQL function declared STRICT will never inline.\n\nIt won't unless the planner can prove that the resulting expression\nbehaves the same, ie, is also strict for *all* the parameters. Which\nin most cases isn't true, or at least is very difficult to prove.\nThis is not a bug.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Nov 2011 19:29:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: STRICT SQL functions never inline "
}
] |
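A hedged illustration of the behaviour being discussed (names invented): the two functions below share the same body, but only the non-STRICT one can be inlined, because the planner cannot prove the expression returns NULL for a NULL argument -- indeed it doesn't, which is exactly why inlining the STRICT one would change results.

CREATE FUNCTION is_open(timestamptz) RETURNS boolean AS
$$ SELECT $1 IS NULL OR $1 > now() $$ LANGUAGE sql STABLE;            -- can be inlined

CREATE FUNCTION is_open_strict(timestamptz) RETURNS boolean AS
$$ SELECT $1 IS NULL OR $1 > now() $$ LANGUAGE sql STABLE STRICT;     -- stays an opaque call

-- EXPLAIN SELECT * FROM t WHERE is_open(closed_at);         -- predicate visible to indexes/partitioning
-- EXPLAIN SELECT * FROM t WHERE is_open_strict(closed_at);  -- filtered row by row through the function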
[
{
"msg_contents": "Hello Everyone,\n\nI could see the following in the production server (result of the \"top\" M\ncommand) -\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n25265 postgres 15 0 3329m 2.5g 1.9g S 0.0 4.0\n 542:47.83 postgres: writer process\n\nThe \"writer process\" refers to bg_writer ? and we have shared_buffers set\nto 1920 MB (around 1.9 GB).\n\nIn an other similar situation, we have \"postgres writer process\" using up 7\n- 8 GB memory constantly.\n\npg_tune is suggesting to increase the shared_buffers to 8 GB.\n\nIf the shared_buffer is not enough, Postgres uses OS cache ?\n\nWe have a 64 GB RAM.\n\nWe have decided the following -\n\n1. We have 20 databases running in one cluster and all are more or less\nhighly active databases.\n2. We will be splitting across the databases across multiple clusters to\nhave multiple writer processes working across databases.\n\nPlease help us if you have any other solutions around this.\n\nThanks\nVB\n\nHello Everyone,I could see the following in the production server (result of the \"top\" M command) - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n25265 postgres 15 0 3329m 2.5g 1.9g S 0.0 4.0 542:47.83 postgres: writer processThe \"writer process\" refers to bg_writer ? and we have shared_buffers set to 1920 MB (around 1.9 GB).\nIn an other similar situation, we have \"postgres writer process\" using up 7 - 8 GB memory constantly. pg_tune is suggesting to increase the shared_buffers to 8 GB.\nIf the shared_buffer is not enough, Postgres uses OS cache ?We have a 64 GB RAM.We have decided the following -1. We have 20 databases running in one cluster and all are more or less highly active databases.\n2. We will be splitting across the databases across multiple clusters to have multiple writer processes working across databases.Please help us if you have any other solutions around this.\nThanksVB",
"msg_date": "Wed, 9 Nov 2011 14:55:37 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": bg_writer overloaded ?"
},
{
"msg_contents": "On Wed, Nov 9, 2011 at 2:25 AM, Venkat Balaji <[email protected]> wrote:\n> Hello Everyone,\n> I could see the following in the production server (result of the \"top\" M\n> command) -\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n> 25265 postgres 15 0 3329m 2.5g 1.9g S 0.0 4.0\n> 542:47.83 postgres: writer process\n> The \"writer process\" refers to bg_writer ? and we have shared_buffers set to\n> 1920 MB (around 1.9 GB).\n\nSo it is using 2.5G of mem of which 1.9G is shared memory (i.e. shared\nbuffers) so the actual amount of RAM it's using is ~600Megs.\n\nI see no problem.\n\n> In an other similar situation, we have \"postgres writer process\" using up 7\n> - 8 GB memory constantly.\n\nI doubt it. Sounds more like you're misreading the output of top.\n\n> pg_tune is suggesting to increase the shared_buffers to 8 GB.\n\nReasonable.\n\n> If the shared_buffer is not enough, Postgres uses OS cache ?\n\nNot really how things work. The OS uses all spare memory as cache.\nPostgreSQL uses shared_buffers as a cache. The OS is much more\nefficient about caching in dozens of gigabytes than pgsql is.\n\n> We have a 64 GB RAM.\n> We have decided the following -\n> 1. We have 20 databases running in one cluster and all are more or less\n> highly active databases.\n> 2. We will be splitting across the databases across multiple clusters to\n> have multiple writer processes working across databases.\n> Please help us if you have any other solutions around this.\n\nYou have shown us no actual problem.\n",
"msg_date": "Wed, 9 Nov 2011 07:46:10 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : bg_writer overloaded ?"
},
{
"msg_contents": "On Wed, Nov 9, 2011 at 8:16 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Nov 9, 2011 at 2:25 AM, Venkat Balaji <[email protected]>\n> wrote:\n> > Hello Everyone,\n> > I could see the following in the production server (result of the \"top\" M\n> > command) -\n> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> > COMMAND\n> > 25265 postgres 15 0 3329m 2.5g 1.9g S 0.0 4.0\n> > 542:47.83 postgres: writer process\n> > The \"writer process\" refers to bg_writer ? and we have shared_buffers\n> set to\n> > 1920 MB (around 1.9 GB).\n>\n> So it is using 2.5G of mem of which 1.9G is shared memory (i.e. shared\n> buffers) so the actual amount of RAM it's using is ~600Megs.\n>\n> I see no problem.\n>\n\nIs this not the indication that the shared_buffers is undersized ?\n\n\n>\n> > In an other similar situation, we have \"postgres writer process\" using\n> up 7\n> > - 8 GB memory constantly.\n>\n> I doubt it. Sounds more like you're misreading the output of top.\n>\n\nBelow is the output directly taken from our production box. As per you, the\nwriter process is taking up 1.9g from\nshared_buffers and the remaining (around 5.6 GB) from RAM.\n\nMem: 65980808k total, 65620700k used, 360108k free, 210792k buffers\nSwap: 1052248k total, 321144k used, 731104k free, 51721468k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n10306 postgres 15 0 15.7g 7.5g 1.9g S 1.9 12.0 1037:05 postgres:\nwriter process\n\nIs this not a problem ?\n\n\n> > pg_tune is suggesting to increase the shared_buffers to 8 GB.\n>\n> Reasonable.\n>\n> > If the shared_buffer is not enough, Postgres uses OS cache ?\n>\n> Not really how things work. The OS uses all spare memory as cache.\n> PostgreSQL uses shared_buffers as a cache. The OS is much more\n> efficient about caching in dozens of gigabytes than pgsql is.\n\n\nWhat if the shared_buffers is not enough to cache the data being read from\nthe database ?\n\nThanks for your help !\n\nRegards,\nVB\n\nOn Wed, Nov 9, 2011 at 8:16 PM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Nov 9, 2011 at 2:25 AM, Venkat Balaji <[email protected]> wrote:\n> Hello Everyone,\n> I could see the following in the production server (result of the \"top\" M\n> command) -\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n> 25265 postgres 15 0 3329m 2.5g 1.9g S 0.0 4.0\n> 542:47.83 postgres: writer process\n> The \"writer process\" refers to bg_writer ? and we have shared_buffers set to\n> 1920 MB (around 1.9 GB).\n\nSo it is using 2.5G of mem of which 1.9G is shared memory (i.e. shared\nbuffers) so the actual amount of RAM it's using is ~600Megs.\n\nI see no problem.Is this not the indication that the shared_buffers is undersized ? \n\n> In an other similar situation, we have \"postgres writer process\" using up 7\n> - 8 GB memory constantly.\n\nI doubt it. Sounds more like you're misreading the output of top.Below is the output directly taken from our production box. As per you, the writer process is taking up 1.9g from \nshared_buffers and the remaining (around 5.6 GB) from RAM.Mem: 65980808k total, 65620700k used, 360108k free, 210792k buffersSwap: 1052248k total, 321144k used, 731104k free, 51721468k cached\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND10306 postgres 15 0 15.7g 7.5g 1.9g S 1.9 12.0 1037:05 postgres: writer process\nIs this not a problem ? \n> pg_tune is suggesting to increase the shared_buffers to 8 GB.\n\nReasonable.\n\n> If the shared_buffer is not enough, Postgres uses OS cache ?\n\nNot really how things work. 
The OS uses all spare memory as cache.\nPostgreSQL uses shared_buffers as a cache. The OS is much more\nefficient about caching in dozens of gigabytes than pgsql is.What if the shared_buffers is not enough to cache the data being read from the database ?Thanks for your help !\nRegards,VB",
"msg_date": "Mon, 14 Nov 2011 11:24:48 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : bg_writer overloaded ?"
}
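A short, hedged follow-up to the thread above: rather than inferring cache pressure from top's RES/SHR columns, shared_buffers usage can be inspected directly with the contrib module pg_buffercache (the module and its columns exist in core contrib; the query itself is only a sketch, and CREATE EXTENSION needs 9.1 or later):

-- How many buffers are in use, and how many are "hot" (high usage count)?
CREATE EXTENSION pg_buffercache;

SELECT count(*)                                                 AS total_buffers,
       sum(CASE WHEN relfilenode IS NOT NULL THEN 1 ELSE 0 END) AS buffers_in_use,
       sum(CASE WHEN usagecount >= 3 THEN 1 ELSE 0 END)         AS frequently_used
FROM pg_buffercache;

If nearly every buffer is in use with a high usage count, shared_buffers may indeed be undersized; otherwise the large "cached" figure in top simply reflects the OS file cache doing its job behind PostgreSQL.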
] |
[
{
"msg_contents": "Hi,\n\nI have some functions that select data from tables which are daily or monthly updated. My functions are marked as STABLE. I am wondering if they perform better if I mark they as IMMUTABLE?\n\nThank you,\nSorin\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have some functions that select data from tables which are daily or monthly updated. My functions are marked as STABLE. I am wondering if they perform better\n if I mark they as IMMUTABLE?\n \nThank you,\nSorin",
"msg_date": "Thu, 10 Nov 2011 13:05:56 +0000",
"msg_from": "Sorin Dudui <[email protected]>",
"msg_from_op": true,
"msg_subject": "IMMUTABLE STABLE functions, daily updates"
},
{
"msg_contents": "On 10 November 2011 13:05, Sorin Dudui <[email protected]> wrote:\n> Hi,\n>\n>\n>\n> I have some functions that select data from tables which are daily or\n> monthly updated. My functions are marked as STABLE. I am wondering if they\n> perform better if I mark they as IMMUTABLE?\n\nNo. IMMUTABLE is only appropriate when there is no access to table\ndata from within the function. An example of IMMUTABLE functions\nwould be mathematical operations, where only the inputs and/or\nfunction constants are used to produce a result.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 10 Nov 2011 13:25:01 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IMMUTABLE STABLE functions, daily updates"
},
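A minimal sketch of the distinction described above; the function, table, and column names (add_vat, region_name, regions) are invented for illustration:

-- IMMUTABLE is appropriate here: the result depends only on the argument.
CREATE FUNCTION add_vat(numeric) RETURNS numeric AS $$
    SELECT $1 * 1.19;
$$ LANGUAGE sql IMMUTABLE;

-- This one reads a table, so STABLE is the strongest safe label:
-- its result may change whenever the table's contents change.
CREATE FUNCTION region_name(integer) RETURNS text AS $$
    SELECT name FROM regions WHERE id = $1;
$$ LANGUAGE sql STABLE;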
{
"msg_contents": "On Nov 10, 2011 9:26 PM, \"Thom Brown\" <[email protected]> wrote:\n>\n> On 10 November 2011 13:05, Sorin Dudui <[email protected]> wrote:\n> > Hi,\n> >\n> >\n> >\n> > I have some functions that select data from tables which are daily or\n> > monthly updated. My functions are marked as STABLE. I am wondering if\nthey\n> > perform better if I mark they as IMMUTABLE?\n>\n> No. IMMUTABLE is only appropriate when there is no access to table\n> data from within the function\n\nSure it can be faster - the same way defining \"fibonacci(int)\" to always\nreturn 42 is faster, just incorrect.\n\nYou can sometimes kinda get away with it if you are willing to reindex,\ndrop prepared statements, reload functions, etc when the result changes. I\nwould not recommend it.\n\n\nOn Nov 10, 2011 9:26 PM, \"Thom Brown\" <[email protected]> wrote:\n>\n> On 10 November 2011 13:05, Sorin Dudui <[email protected]> wrote:\n> > Hi,\n> >\n> >\n> >\n> > I have some functions that select data from tables which are daily or\n> > monthly updated. My functions are marked as STABLE. I am wondering if they\n> > perform better if I mark they as IMMUTABLE?\n>\n> No. IMMUTABLE is only appropriate when there is no access to table\n> data from within the function\nSure it can be faster - the same way defining \"fibonacci(int)\" to always return 42 is faster, just incorrect.\nYou can sometimes kinda get away with it if you are willing to reindex, drop prepared statements, reload functions, etc when the result changes. I would not recommend it.",
"msg_date": "Thu, 10 Nov 2011 22:28:10 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IMMUTABLE STABLE functions, daily updates"
}
] |
[
{
"msg_contents": "Hello,\n\nA table has two columns id and EffectiveId. First is primary key.\nEffectiveId is almost always equal to id (95%) unless records are\nmerged. Many queries have id = EffectiveId condition. Both columns are\nvery distinct and Pg reasonably decides that condition has very low\nselectivity and picks sequence scan.\n\nSimple perl script that demonstrates estimation error:\nhttps://gist.github.com/1356744\n\nEstimation is ~200 times off (5 vs 950), for real situation it's very\nsimilar. Understandably difference depends on correlation coefficient.\n\nIn application such wrong estimation result in seq scan of this table\nwinning leading position in execution plans over other tables and\nindex scans.\n\nWhat can I do to avoid this problem?\n\nTested with PostgreSQL 9.0.3 on x86_64-apple-darwin10.6.0, compiled by\nGCC i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664),\n64-bit\n\n-- \nBest regards, Ruslan.\n",
"msg_date": "Fri, 11 Nov 2011 19:01:41 +0400",
"msg_from": "Ruslan Zakirov <[email protected]>",
"msg_from_op": true,
"msg_subject": "avoiding seq scans when two columns are very correlated"
},
{
"msg_contents": "Ruslan Zakirov <[email protected]> writes:\n> A table has two columns id and EffectiveId. First is primary key.\n> EffectiveId is almost always equal to id (95%) unless records are\n> merged. Many queries have id = EffectiveId condition. Both columns are\n> very distinct and Pg reasonably decides that condition has very low\n> selectivity and picks sequence scan.\n\nI think the only way is to rethink your data representation. PG doesn't\nhave cross-column statistics at all, and even if it did, you'd be asking\nfor an estimate of conditions in the \"long tail\" of the distribution.\nThat's unlikely to be very accurate.\n\nConsider adding a \"merged\" boolean, or defining effectiveid differently.\nFor instance you could set it to null in unmerged records; then you\ncould get the equivalent of the current meaning with\nCOALESCE(effectiveid, id). In either case, PG would then have\nstatistics that bear directly on the question of how many merged vs\nunmerged records there are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2011 10:36:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seq scans when two columns are very correlated "
},
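A sketch of the first alternative suggested above, an explicit merge flag; the table and column names follow the Tickets example used later in the thread and are otherwise assumptions:

-- Record the merge state in its own column so ANALYZE gathers a
-- single-column statistic the planner can use directly.
ALTER TABLE tickets ADD COLUMN merged boolean NOT NULL DEFAULT false;
UPDATE tickets SET merged = true WHERE id <> effectiveid;
ANALYZE tickets;

-- The old "id = effectiveid" predicate then becomes:
SELECT * FROM tickets WHERE NOT merged;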
{
"msg_contents": "On Fri, Nov 11, 2011 at 7:36 PM, Tom Lane <[email protected]> wrote:\n> Ruslan Zakirov <[email protected]> writes:\n>> A table has two columns id and EffectiveId. First is primary key.\n>> EffectiveId is almost always equal to id (95%) unless records are\n>> merged. Many queries have id = EffectiveId condition. Both columns are\n>> very distinct and Pg reasonably decides that condition has very low\n>> selectivity and picks sequence scan.\n>\n> I think the only way is to rethink your data representation. PG doesn't\n> have cross-column statistics at all, and even if it did, you'd be asking\n> for an estimate of conditions in the \"long tail\" of the distribution.\n> That's unlikely to be very accurate.\n\nRethinking schema is an option that requires more considerations as we\ndo it this way for years and run product on mysql, Pg and Oracle.\nIssue affects Oracle, but it can be worked around by dropping indexes\nor may be by building correlation statistics in 11g (didn't try it\nyet).\n\nWonder if \"CROSS COLUMN STATISTICS\" patch that floats around would\nhelp with such case?\n\n> Consider adding a \"merged\" boolean, or defining effectiveid differently.\n> For instance you could set it to null in unmerged records; then you\n> could get the equivalent of the current meaning with\n> COALESCE(effectiveid, id). In either case, PG would then have\n> statistics that bear directly on the question of how many merged vs\n> unmerged records there are.\n\nNULL in EffectiveId is the way to go, however when we actually need\nthose records (not so often situation) query becomes frightening:\n\nSELECT main.* FROM Tickets main\n JOIN Tickets te\n ON te.EffectiveId = main.id\n OR (te.id = main.id AND te.EffectiveId IS NULL)\n JOIN OtherTable ot\n ON ot.Ticket = te.id\n\nPast experience reminds that joins with ORs poorly handled by many optimizers.\n\nIn the current situation join condition is very straightforward and effective.\n\n> regards, tom lane\n\n-- \nBest regards, Ruslan.\n",
"msg_date": "Fri, 11 Nov 2011 20:39:17 +0400",
"msg_from": "Ruslan Zakirov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: avoiding seq scans when two columns are very correlated"
},
{
"msg_contents": "On Fri, Nov 11, 2011 at 10:01 PM, Ruslan Zakirov <[email protected]> wrote:\n> Hello,\n>\n> A table has two columns id and EffectiveId. First is primary key.\n> EffectiveId is almost always equal to id (95%) unless records are\n> merged. Many queries have id = EffectiveId condition. Both columns are\n> very distinct and Pg reasonably decides that condition has very low\n> selectivity and picks sequence scan.\n>\n> Simple perl script that demonstrates estimation error:\n> https://gist.github.com/1356744\n>\n> Estimation is ~200 times off (5 vs 950), for real situation it's very\n> similar. Understandably difference depends on correlation coefficient.\n>\n> In application such wrong estimation result in seq scan of this table\n> winning leading position in execution plans over other tables and\n> index scans.\n>\n> What can I do to avoid this problem?\n\nDoes a partial index help? CREATE UNIQUE INDEX foo_idx ON mytab(id)\nWHERE id = EffectiveId\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n",
"msg_date": "Tue, 15 Nov 2011 14:08:20 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seq scans when two columns are very correlated"
},
{
"msg_contents": "On Tue, Nov 15, 2011 at 11:08 AM, Stuart Bishop <[email protected]> wrote:\n> On Fri, Nov 11, 2011 at 10:01 PM, Ruslan Zakirov <[email protected]> wrote:\n>> Hello,\n\n[snip]\n\n>> In application such wrong estimation result in seq scan of this table\n>> winning leading position in execution plans over other tables and\n>> index scans.\n>>\n>> What can I do to avoid this problem?\n>\n> Does a partial index help? CREATE UNIQUE INDEX foo_idx ON mytab(id)\n> WHERE id = EffectiveId\n\nIt doesn't help. Probably reason is the same as for partitions.\n\n>\n>\n> --\n> Stuart Bishop <[email protected]>\n> http://www.stuartbishop.net/\n>\n\n\n\n-- \nBest regards, Ruslan.\n",
"msg_date": "Tue, 15 Nov 2011 19:04:28 +0400",
"msg_from": "Ruslan Zakirov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: avoiding seq scans when two columns are very correlated"
}
] |
[
{
"msg_contents": "Hi,\n\nI have the following function:\n\n===============================\nCREATE OR REPLACE FUNCTION xxx(text)\n RETURNS SETOF vvvvv AS\n$BODY$\nselect a.x, a.y,\n CASE\n WHEN strpos($1,b.x) > 0\n THEN b.x\n ELSE NULL\n END AS mp_hm\nfrom a LEFT JOIN b ON a.id=b.id\n\n $BODY$\n LANGUAGE sql STABLE\n COST 1000\n ROWS 10000;\n===============================\n\nwhich I call as:\n\nselect * from xxx('test0|test1') where a.x = 'value'\n\n\nI am wondering when the where clause (a.x = 'value') is executed. After the select statement in the function finishes? Or is it appended at the select statement in the function?\n\n\nThank you,\nSorin\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have the following function:\n \n===============================\nCREATE OR REPLACE FUNCTION xxx(text)\n RETURNS SETOF vvvvv AS\n$BODY$\nselect a.x, a.y,\n CASE\n WHEN strpos($1,b.x) > 0\n THEN b.x\n ELSE NULL\n END AS mp_hm\nfrom a LEFT JOIN b ON a.id=b.id\n\n \n $BODY$\n LANGUAGE sql STABLE\n COST 1000\n ROWS 10000;\n===============================\n \nwhich I call as:\n \nselect * from xxx(‘test0|test1‘) where a.x = ‘value’\n \n \nI am wondering when the where clause (a.x = ‘value’) is executed. After the select statement in the function finishes? Or is it appended at the select statement\n in the function?\n \n \nThank you,\nSorin",
"msg_date": "Fri, 11 Nov 2011 15:38:43 +0000",
"msg_from": "Sorin Dudui <[email protected]>",
"msg_from_op": true,
"msg_subject": "where clause + function, execution order "
},
{
"msg_contents": "Hello,\n\nOn 2011.11.11 17:38, Sorin Dudui wrote:\n>\n> Hi,\n>\n> I have the following function:\n>\n> ===============================\n>\n> CREATE OR REPLACE FUNCTION xxx(text)\n>\n> RETURNS SETOF vvvvv AS\n>\n> $BODY$\n>\n> select a.x, a.y,\n>\n> CASE\n>\n> WHEN strpos($1,b.x) > 0\n>\n> THEN b.x\n>\n> ELSE NULL\n>\n> END AS mp_hm\n>\n> from a LEFT JOIN b ON a.id=b.id\n>\n> $BODY$\n>\n> LANGUAGE sql STABLE\n>\n> COST 1000\n>\n> ROWS 10000;\n>\n> ===============================\n>\n> which I call as:\n>\n> select * from xxx(‘test0|test1‘) where a.x = ‘value’\n>\nYou should get an error as there is no \"a\" in this statement...\n>\n> I am wondering when the where clause (a.x = ‘value’) is executed. \n> After the select statement in the function finishes? Or is it appended \n> at the select statement in the function?\n>\nFunction execute plan is prepared when creating it, so the \"where\" \nclause should check the function result not altering its execution..\n\n-- \nJulius Tuskenis\nHead of the programming department\nUAB nSoft\nmob. +37068233050\n\n\n\n\n\n\n Hello,\n\n On 2011.11.11 17:38, Sorin Dudui wrote:\n \n\n\n\n\nHi,\n \nI have the following function:\n \n===============================\nCREATE OR REPLACE FUNCTION xxx(text)\n RETURNS SETOF vvvvv AS\n$BODY$\nselect a.x, a.y,\n CASE\n WHEN strpos($1,b.x) > 0\n THEN b.x\n ELSE NULL\n END AS mp_hm\nfrom a LEFT JOIN b ON a.id=b.id\n \n \n $BODY$\n LANGUAGE sql STABLE\n COST 1000\n ROWS 10000;\n===============================\n \nwhich I call as:\n \nselect\n * from xxx(‘test0|test1‘) where a.x = ‘value’\n\n\n You should get an error as there is no \"a\" in this statement...\n\n\n\n \n \nI am wondering when the where clause (a.x =\n ‘value’) is executed. After the select statement in the\n function finishes? Or is it appended at the select statement\n in the function?\n \n\n\n Function execute plan is prepared when creating it, so the \"where\"\n clause should check the function result not altering its execution..\n \n\n-- \n Julius Tuskenis\n Head of the programming department\n UAB nSoft\n mob. +37068233050",
"msg_date": "Fri, 11 Nov 2011 17:54:00 +0200",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where clause + function, execution order"
},
{
"msg_contents": "On 11/11/11 15:54, Julius Tuskenis wrote:\n> On 2011.11.11 17:38, Sorin Dudui wrote:\n>> I have the following function:\n>>\n>> CREATE OR REPLACE FUNCTION xxx(text)\n[snip]\n>> LANGUAGE sql STABLE\n\n> Function execute plan is prepared when creating it, so the \"where\"\n> clause should check the function result not altering its execution..\n\nNot true for SQL functions. They can be inlined, but I'm not sure if \nthis one will be.\n\nWhat does EXPLAIN ANALYSE show for this query?\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 11 Nov 2011 15:59:49 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where clause + function, execution order"
},
{
"msg_contents": "Sorin Dudui <[email protected]> writes:\n> I am wondering when the where clause (a.x = 'value') is executed. After the select statement in the function finishes? Or is it appended at the select statement in the function?\n\nEXPLAIN is your friend ...\n\nIn this case the function looks inline-able, so reasonably recent\nversions of PG should do what you want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2011 11:00:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where clause + function, execution order "
},
{
"msg_contents": "Hi,\r\n\r\nthis is the EXPLAIN ANALYSE output:\r\n\r\n\r\n\"Merge Left Join (cost=0.00..2820.34 rows=23138 width=777) (actual time=0.049..317.935 rows=26809 loops=1)\"\r\n\" Merge Cond: ((a.admin10)::text = (b.link_id)::text)\"\r\n\" -> Index Scan using admin_lookup_admin10 on admin_lookup a (cost=0.00..845.04 rows=5224 width=742) (actual time=0.015..40.263 rows=8100 loops=1)\"\r\n\" Filter: (((admin40)::text <> '-1'::text) AND (((admin40)::text = 'ITA10'::text) OR ((admin40)::text = 'ITA15'::text) OR ((admin40)::text = 'ITA19'::text) OR ((admin40)::text = 'ITA04'::text) OR ((admin40)::text = 'ITA09'::text) OR ((admin40)::text = 'ITA03'::text) OR ((admin40)::text = 'ITA08'::text) OR ((admin40)::text = 'ITA17'::text) OR ((admin40)::text = 'ITA02'::text) OR ((admin40)::text = 'ITA18'::text) OR ((admin40)::text = 'ITA01'::text) OR ((admin40)::text = 'ITA20'::text) OR ((admin40)::text = 'ITA13'::text) OR ((admin40)::text = 'ITA11'::text) OR ((admin40)::text = 'ITA14'::text) OR ((admin40)::text = 'ITA16'::text) OR ((admin40)::text = 'ITA07'::text) OR ((admin40)::text = 'ITA06'::text) OR ((admin40)::text = 'ITA12'::text) OR ((admin40)::text = 'ITA05'::text)))\"\r\n\" -> Index Scan using reg_data_a08id_copy on registrations_data b (cost=0.00..1496.89 rows=24174 width=45) (actual time=0.008..70.408 rows=24174 loops=1)\"\r\n\"Total runtime: 372.765 ms\"\r\n\r\n\r\nRegards,\r\nSorin\r\n\r\n-----Ursprüngliche Nachricht-----\r\nVon: [email protected] [mailto:[email protected]] Im Auftrag von Richard Huxton\r\nGesendet: Freitag, 11. November 2011 17:00\r\nAn: Julius Tuskenis\r\nCc: [email protected]\r\nBetreff: Re: [PERFORM] where clause + function, execution order\r\n\r\nOn 11/11/11 15:54, Julius Tuskenis wrote:\r\n> On 2011.11.11 17:38, Sorin Dudui wrote:\r\n>> I have the following function:\r\n>>\r\n>> CREATE OR REPLACE FUNCTION xxx(text)\r\n[snip]\r\n>> LANGUAGE sql STABLE\r\n\r\n> Function execute plan is prepared when creating it, so the \"where\"\r\n> clause should check the function result not altering its execution..\r\n\r\nNot true for SQL functions. They can be inlined, but I'm not sure if this one will be.\r\n\r\nWhat does EXPLAIN ANALYSE show for this query?\r\n\r\n\r\n-- \r\n Richard Huxton\r\n Archonet Ltd\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Fri, 11 Nov 2011 16:28:04 +0000",
"msg_from": "Sorin Dudui <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: where clause + function, execution order"
},
{
"msg_contents": "On 11/11/11 16:28, Sorin Dudui wrote:\n> Hi,\n>\n> this is the EXPLAIN ANALYSE output:\n>\n>\n> \"Merge Left Join (cost=0.00..2820.34 rows=23138 width=777) (actual time=0.049..317.935 rows=26809 loops=1)\"\n> \" Merge Cond: ((a.admin10)::text = (b.link_id)::text)\"\n> \" -> Index Scan using admin_lookup_admin10 on admin_lookup a (cost=0.00..845.04 rows=5224 width=742) (actual time=0.015..40.263 rows=8100 loops=1)\"\n> \" Filter: (((admin40)::text<> '-1'::text) AND (((admin40)::text = 'ITA10'::text) OR ((admin40)::text = 'ITA15'::text) OR ((admin40)::text = 'ITA19'::text) OR ((admin40)::text = 'ITA04'::text) OR ((admin40)::text = 'ITA09'::text) OR ((admin40)::text = 'ITA03'::text) OR ((admin40)::text = 'ITA08'::text) OR ((admin40)::text = 'ITA17'::text) OR ((admin40)::text = 'ITA02'::text) OR ((admin40)::text = 'ITA18'::text) OR ((admin40)::text = 'ITA01'::text) OR ((admin40)::text = 'ITA20'::text) OR ((admin40)::text = 'ITA13'::text) OR ((admin40)::text = 'ITA11'::text) OR ((admin40)::text = 'ITA14'::text) OR ((admin40)::text = 'ITA16'::text) OR ((admin40)::text = 'ITA07'::text) OR ((admin40)::text = 'ITA06'::text) OR ((admin40)::text = 'ITA12'::text) OR ((admin40)::text = 'ITA05'::text)))\"\n> \" -> Index Scan using reg_data_a08id_copy on registrations_data b (cost=0.00..1496.89 rows=24174 width=45) (actual time=0.008..70.408 rows=24174 loops=1)\"\n> \"Total runtime: 372.765 ms\"\n\nThat certainly looks like it's been inlined. You are testing for \n\"ITA10\", \"ITA15\" etc outside the function-call, no? It's pushing those \ntests down, using index \"admin_lookup_admin10\" to test for them then \njoining afterwards.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 11 Nov 2011 16:55:44 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where clause + function, execution order"
}
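To summarise the thread: whether the outer WHERE clause reaches the tables inside the function is visible in the plan. A quick way to check, using the function from the example above (note that the outer query only sees the function's output columns, so the filter is written on x rather than a.x):

-- If the SQL function was inlined, the condition appears as an
-- Index Cond/Filter on the underlying tables; if not, it shows up
-- as a Filter applied on top of a Function Scan node.
EXPLAIN ANALYZE SELECT * FROM xxx('test0|test1');
EXPLAIN ANALYZE SELECT * FROM xxx('test0|test1') WHERE x = 'value';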
] |
[
{
"msg_contents": "Hello, just for clarification.\n\n \n\nUnlogged tables are not memory tables don't?\n\n \n\nIf we stop postgres server (normal stop) and start again, all information in\nunlogged tables still remain?\n\n \n\nSo, can I expect a data loss just in case of crash, power failure or SO\ncrash don't?\n\n \n\nIn case of crash, is possible that data corruption happened in a unlogged\ntables?\n\n \n\nFor performance purpose can I use async commit and unlogged tables?\n\n \n\n \n\nThanks!\n\n \n\n \n\n \n\n \n\n\nHello, just for clarification. Unlogged tables are not memory tables don’t? If we stop postgres server (normal stop) and start again, all information in unlogged tables still remain? So, can I expect a data loss just in case of crash, power failure or SO crash don’t? In case of crash, is possible that data corruption happened in a unlogged tables? For performance purpose can I use async commit and unlogged tables? Thanks!",
"msg_date": "Fri, 11 Nov 2011 17:08:44 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "unlogged tables"
},
{
"msg_contents": "* Anibal David Acosta ([email protected]) wrote:\n> Unlogged tables are not memory tables don't?\n\nUnlogged tables are not memory tables.\n\n> If we stop postgres server (normal stop) and start again, all information in\n> unlogged tables still remain?\n\nYes.\n\n> So, can I expect a data loss just in case of crash, power failure or SO\n> crash don't?\n\nYes.\n\n> In case of crash, is possible that data corruption happened in a unlogged\n> tables?\n\nIn a crash, unlogged tables are automatically truncated.\n\n> For performance purpose can I use async commit and unlogged tables?\n\nI'm not aware of any issues (beyond those already documented for async\ncommit..) with having async commit and unlogged tables.\n\n\tTHanks,\n\n\t\tStephen",
"msg_date": "Fri, 11 Nov 2011 15:18:22 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
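For reference, a minimal sketch of the two options discussed above (the counters table is invented for the example):

-- Contents survive a clean restart, are truncated after a crash,
-- and changes to the table generate no WAL.
CREATE UNLOGGED TABLE counters (
    name  text PRIMARY KEY,
    value bigint NOT NULL
);

-- Asynchronous commit keeps ordinary (logged) tables intact after a
-- crash; only the last few commits can be lost, never the whole table.
SET synchronous_commit = off;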
{
"msg_contents": "Hi,\n\nOn 12 November 2011 00:18, Stephen Frost <[email protected]> wrote:\n> In a crash, unlogged tables are automatically truncated.\n\nBTW I wonder what for they are truncated in a crash case?\n\n-- \nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com\nLinkedIn: http://ru.linkedin.com/in/grayhemp\nJID/GTalk: [email protected] Skype: gray-hemp\n",
"msg_date": "Mon, 14 Nov 2011 12:10:47 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 14/11/11 08:10, Sergey Konoplev wrote:\n> Hi,\n>\n> On 12 November 2011 00:18, Stephen Frost<[email protected]> wrote:\n>> In a crash, unlogged tables are automatically truncated.\n>\n> BTW I wonder what for they are truncated in a crash case?\n\nBecause they bypass the transaction-log (WAL), hence unlogged.\n\nThere's no way to know whether there were partial updates applied when \nthe system restarts.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 14 Nov 2011 08:58:15 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 14 November 2011 12:58, Richard Huxton <[email protected]> wrote:\n> Because they bypass the transaction-log (WAL), hence unlogged.\n> There's no way to know whether there were partial updates applied when the\n> system restarts.\n\nI probably did not understand the \"truncate\" meaning correct. It\ntruncates all the records of the table or several recent records only?\n\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com\nLinkedIn: http://ru.linkedin.com/in/grayhemp\nJID/GTalk: [email protected] Skype: gray-hemp\n",
"msg_date": "Mon, 14 Nov 2011 14:08:25 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 14/11/11 10:08, Sergey Konoplev wrote:\n> On 14 November 2011 12:58, Richard Huxton<[email protected]> wrote:\n>> Because they bypass the transaction-log (WAL), hence unlogged.\n>> There's no way to know whether there were partial updates applied when the\n>> system restarts.\n>\n> I probably did not understand the \"truncate\" meaning correct. It\n> truncates all the records of the table or several recent records only?\n\nAll.\n\nLet's say you were doing something like \"UPDATE unlogged_table SET x=1 \nWHERE y=2\". If a crash occurs during this command, there's no guarantee \nthat the affected disk pages were all updated. Worse, a single page \nmight be partially updated or even have rubbish in it (depending on the \nnature of the crash).\n\nWithout the WAL there's no way to check whether the table is good or \nnot, or even to know what the last updates were. So - the only safe \nthing to do is truncate the unlogged tables.\n\nIn the event of a normal shutdown, we can flush all the writes to disk \nso we know all the data has been written, so there is no need to truncate.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 14 Nov 2011 10:17:58 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 14 November 2011 14:17, Richard Huxton <[email protected]> wrote:\n> On 14/11/11 10:08, Sergey Konoplev wrote:\n>>\n>> On 14 November 2011 12:58, Richard Huxton<[email protected]> wrote:\n> Let's say you were doing something like \"UPDATE unlogged_table SET x=1 WHERE\n> y=2\". If a crash occurs during this command, there's no guarantee that the\n> affected disk pages were all updated. Worse, a single page might be\n> partially updated or even have rubbish in it (depending on the nature of the\n> crash).\n>\n> Without the WAL there's no way to check whether the table is good or not, or\n> even to know what the last updates were. So - the only safe thing to do is\n> truncate the unlogged tables.\n>\n> In the event of a normal shutdown, we can flush all the writes to disk so we\n> know all the data has been written, so there is no need to truncate.\n\nThank you for the explanation. Now I understand it.\n\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com\nLinkedIn: http://ru.linkedin.com/in/grayhemp\nJID/GTalk: [email protected] Skype: gray-hemp\n",
"msg_date": "Mon, 14 Nov 2011 14:39:03 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "Maybe an option like \"Recover from file \" will be useful\nSo, for example, daily some process do a COPY of entire table to a file \n\nIn case of crash postgres recover content from the file.\n\n:)\n\n\n\n-----Mensaje original-----\nDe: Sergey Konoplev [mailto:[email protected]] \nEnviado el: lunes, 14 de noviembre de 2011 07:39 a.m.\nPara: Richard Huxton\nCC: Stephen Frost; Anibal David Acosta; [email protected]\nAsunto: Re: [PERFORM] unlogged tables\n\nOn 14 November 2011 14:17, Richard Huxton <[email protected]> wrote:\n> On 14/11/11 10:08, Sergey Konoplev wrote:\n>>\n>> On 14 November 2011 12:58, Richard Huxton<[email protected]> wrote:\n> Let's say you were doing something like \"UPDATE unlogged_table SET x=1 \n> WHERE y=2\". If a crash occurs during this command, there's no \n> guarantee that the affected disk pages were all updated. Worse, a \n> single page might be partially updated or even have rubbish in it \n> (depending on the nature of the crash).\n>\n> Without the WAL there's no way to check whether the table is good or \n> not, or even to know what the last updates were. So - the only safe \n> thing to do is truncate the unlogged tables.\n>\n> In the event of a normal shutdown, we can flush all the writes to disk \n> so we know all the data has been written, so there is no need to truncate.\n\nThank you for the explanation. Now I understand it.\n\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n--\nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com\nLinkedIn: http://ru.linkedin.com/in/grayhemp\nJID/GTalk: [email protected] Skype: gray-hemp\n\n",
"msg_date": "Mon, 14 Nov 2011 14:05:51 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> wrote:\n \n> Maybe an option like \"Recover from file \" will be useful\n> So, for example, daily some process do a COPY of entire table to a\n> file \n> \n> In case of crash postgres recover content from the file.\n \nIf you need to recover file contents on a crash, then an unlogged\ntable is probably not the right choice. There is always\nasynchronous commit.\n \n-Kevin\n",
"msg_date": "Mon, 14 Nov 2011 11:26:32 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "I am doing asynchronous commit but sometimes I think that there are so many\n\"things\" in an insert/update transaction, for a table that has not too much\nimportant information.\n\nMy table is a statistics counters table, so I can live with a partial data\nloss, but not with a full data loss because many counters are weekly and\nmonthly.\n\nUnlogged table can increase speed, this table has about 1.6 millions of\nupdate per hour, but unlogged with a chance of loss all information on a\ncrash are not a good idea for this.\n\nAnyway, thanks Kevin!\n\n\n\n\n\n\n\n-----Mensaje original-----\nDe: Kevin Grittner [mailto:[email protected]] \nEnviado el: lunes, 14 de noviembre de 2011 02:27 p.m.\nPara: 'Richard Huxton'; Anibal David Acosta; 'Sergey Konoplev'\nCC: [email protected]; 'Stephen Frost'\nAsunto: Re: [PERFORM] unlogged tables\n\n\"Anibal David Acosta\" <[email protected]> wrote:\n \n> Maybe an option like \"Recover from file \" will be useful So, for \n> example, daily some process do a COPY of entire table to a file\n> \n> In case of crash postgres recover content from the file.\n \nIf you need to recover file contents on a crash, then an unlogged table is\nprobably not the right choice. There is always asynchronous commit.\n \n-Kevin\n\n",
"msg_date": "Mon, 14 Nov 2011 14:38:31 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> wrote:\n \n> I am doing asynchronous commit but sometimes I think that there\n> are so many \"things\" in an insert/update transaction, for a table\n> that has not too much important information.\n> \n> My table is a statistics counters table, so I can live with a\n> partial data loss, but not with a full data loss because many\n> counters are weekly and monthly.\n> \n> Unlogged table can increase speed, this table has about 1.6\n> millions of update per hour, but unlogged with a chance of loss\n> all information on a crash are not a good idea for this.\n \npg_dump -t 'tablename' from a cron job? (Make sure to rotate dump\nfile names, maybe with day of week or some such.)\n \n-Kevin\n",
"msg_date": "Mon, 14 Nov 2011 11:50:25 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "\n>> Unlogged table can increase speed, this table has about 1.6\n>> millions of update per hour, but unlogged with a chance of loss\n>> all information on a crash are not a good idea for this.\n> \n> pg_dump -t 'tablename' from a cron job? (Make sure to rotate dump\n> file names, maybe with day of week or some such.)\n\nOr just \"CREATE TABLE AS\" copy the table every hour to a second, backup\ntable. Then it would be much easier to script automated restore of the\ndata.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 15 Nov 2011 12:51:01 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
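A sketch of the hourly copy suggested above (table names are invented); restoring after a crash is then a single INSERT ... SELECT:

-- Refresh the logged backup copy; run periodically from a scheduler.
BEGIN;
DROP TABLE IF EXISTS counters_backup;
CREATE TABLE counters_backup AS SELECT * FROM counters;
COMMIT;

-- After a crash has truncated the unlogged table:
INSERT INTO counters SELECT * FROM counters_backup;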
{
"msg_contents": "> My table is a statistics counters table, so I can live with a partial \n> data\n> loss, but not with a full data loss because many counters are weekly and\n> monthly.\n>\n> Unlogged table can increase speed, this table has about 1.6 millions of\n> update per hour, but unlogged with a chance of loss all information on a\n> crash are not a good idea for this.\n\nYou could use an unlogged table for hourly updates, and periodically, \naccumulate those counters to a (logged) daily/weekly table...\n\nThe hourly table could be rebuilt by examining only 1 hour's worth of \ndata, so it isn't too much of a problem if it's lost. The other tables \nwould get much less updates.\n",
"msg_date": "Sun, 04 Dec 2011 13:28:03 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
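A sketch of the split described above, with an invented schema. The merge step uses INSERT ... ON CONFLICT, which requires PostgreSQL 9.5 or later; older releases would use a separate UPDATE/INSERT pair, and concurrent writers to the unlogged table need extra care (for example a lock, or a staging-table swap) so no rows are truncated before being summed:

-- Fast, crash-expendable table for recent counts; durable table for totals.
CREATE UNLOGGED TABLE hits_recent (counter text, n bigint NOT NULL);
CREATE TABLE hits_total (counter text PRIMARY KEY, n bigint NOT NULL);

-- Periodically fold the recent counts into the durable table.
BEGIN;
INSERT INTO hits_total AS t (counter, n)
SELECT counter, sum(n) FROM hits_recent GROUP BY counter
ON CONFLICT (counter) DO UPDATE SET n = t.n + EXCLUDED.n;
TRUNCATE hits_recent;
COMMIT;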
{
"msg_contents": "\"In the event of a normal shutdown, we can flush all the writes to disk \nso we know all the data has been written, so there is no need to truncate.\"\n\nIsn't possible to periodically flush data to disk and in case of crush\npostgres to load only the data that existed at last flush? The periodic\nflush could be configurable, for example every 30 minutes or after x rows\nupdated/inserted. \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/unlogged-tables-tp4985453p5845576.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 12:31:42 -0700 (MST)",
"msg_from": "dgabriel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]> wrote:\n\n> \"In the event of a normal shutdown, we can flush all the writes to disk\n> so we know all the data has been written, so there is no need to truncate.\"\n>\n> Isn't possible to periodically flush data to disk and in case of crush\n> postgres to load only the data that existed at last flush? The periodic\n> flush could be configurable, for example every 30 minutes or after x rows\n> updated/inserted.\n>\n\nThere is no such facility implemented for UNLOGGED TABLEs. That could be a\nfeature request though.\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]> wrote:\"In the event of a normal shutdown, we can flush all the writes to disk\nso we know all the data has been written, so there is no need to truncate.\"\n\nIsn't possible to periodically flush data to disk and in case of crush\npostgres to load only the data that existed at last flush? The periodic\nflush could be configurable, for example every 30 minutes or after x rows\nupdated/inserted.There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.Best regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 13 Apr 2015 16:36:10 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "That will be a very useful feature. I don' \nt care if i loss 1-2 hours of data. I know we could have some cron jobs to\ndump the table periodically but the table could be big, so this operation\ncould be expensive. Also i have to detect when postgres crush, i have no\nidea how i can detect if postgres crushed. Then i have somehow to attache a\nscript at postgres start, to restore the dumps...the dump solution is very\ncomplicate and unreliable. A periodic flush feature will be amazing!\n\nHow is the procedure for feature request on postgres, github?\n\nThanks!\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/unlogged-tables-tp4985453p5845580.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 13:16:14 -0700 (MST)",
"msg_from": "dgabriel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 2015-04-13 14:16, dgabriel wrote:\n> That will be a very useful feature.\n\nI agree, unlogged tables would be a lot more useful if they didn't \"disappear\"\non re-start.\n\n\n> could be expensive. Also i have to detect when postgres crush, i have no\n> idea how i can detect if postgres crushed. Then i have somehow to attache a\n> script at postgres start, to restore the dumps...the dump solution is very\n> complicate and unreliable. A periodic flush feature will be amazing!\n\nIn my experience postgres is very aggressive in getting rid of unlogged\ntables, it does get rid of them from shutdowns that seem perfectly fine (no\ncrash). A lot of people get surprised by this.\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 14:30:49 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On Mon, Apr 13, 2015 at 5:30 PM, Yves Dorfsman <[email protected]> wrote:\n\n>\n> In my experience postgres is very aggressive in getting rid of unlogged\n> tables, it does get rid of them from shutdowns that seem perfectly fine (no\n> crash). A lot of people get surprised by this.\n\n\nShutdowns in \"fast\" or \"smart\" modes does not get rid of unlogged tables.\nBut if you do \"immediate\", then it does, and I don't see why people get\nsurprised by it, as you probably shouldn't be using \"immediate\" mode in\nnormal circumstances.\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Apr 13, 2015 at 5:30 PM, Yves Dorfsman <[email protected]> wrote:\n\nIn my experience postgres is very aggressive in getting rid of unlogged\ntables, it does get rid of them from shutdowns that seem perfectly fine (no\ncrash). A lot of people get surprised by this.Shutdowns in \"fast\" or \"smart\" modes does not get rid of unlogged tables. But if you do \"immediate\", then it does, and I don't see why people get surprised by it, as you probably shouldn't be using \"immediate\" mode in normal circumstances.Best regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 13 Apr 2015 17:40:33 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On Monday, April 13, 2015, Matheus de Oliveira <[email protected]>\nwrote:\n\n>\n> On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]\n> <javascript:_e(%7B%7D,'cvml','[email protected]');>> wrote:\n>\n>> \"In the event of a normal shutdown, we can flush all the writes to disk\n>> so we know all the data has been written, so there is no need to\n>> truncate.\"\n>>\n>> Isn't possible to periodically flush data to disk and in case of crush\n>> postgres to load only the data that existed at last flush? The periodic\n>> flush could be configurable, for example every 30 minutes or after x rows\n>> updated/inserted.\n>>\n>\n> There is no such facility implemented for UNLOGGED TABLEs. That could be a\n> feature request though.\n>\n>\nWell, that is half right anyway. UNLOGGED tables obey checkpoints just\nlike any other table. The missing feature is an option to leaved restored\nthe last checkpoint. Instead, not knowing whether there were changes since\nthe last checkpoint, the system truncated the relation.\n\nWhat use case is there for a behavior that the last checkpoint data is left\non the relation upon restarting - not knowing whether it was possible the\nother data could have been written subsequent?\n\nDavid J.\n\nOn Monday, April 13, 2015, Matheus de Oliveira <[email protected]> wrote:On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]> wrote:\"In the event of a normal shutdown, we can flush all the writes to disk\nso we know all the data has been written, so there is no need to truncate.\"\n\nIsn't possible to periodically flush data to disk and in case of crush\npostgres to load only the data that existed at last flush? The periodic\nflush could be configurable, for example every 30 minutes or after x rows\nupdated/inserted.There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.Well, that is half right anyway. UNLOGGED tables obey checkpoints just like any other table. The missing feature is an option to leaved restored the last checkpoint. Instead, not knowing whether there were changes since the last checkpoint, the system truncated the relation.What use case is there for a behavior that the last checkpoint data is left on the relation upon restarting - not knowing whether it was possible the other data could have been written subsequent?David J.",
"msg_date": "Mon, 13 Apr 2015 13:49:16 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 4/13/15 3:49 PM, David G. Johnston wrote:\n> On Monday, April 13, 2015, Matheus de Oliveira\n> <[email protected] <mailto:[email protected]>> wrote:\n> On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]\n> <javascript:_e(%7B%7D,'cvml','[email protected]');>> wrote:\n>\n> \"In the event of a normal shutdown, we can flush all the writes\n> to disk\n> so we know all the data has been written, so there is no need to\n> truncate.\"\n>\n> Isn't possible to periodically flush data to disk and in case of\n> crush\n> postgres to load only the data that existed at last flush? The\n> periodic\n> flush could be configurable, for example every 30 minutes or\n> after x rows\n> updated/inserted.\n>\n> There is no such facility implemented for UNLOGGED TABLEs. That\n> could be a feature request though.\n>\n> Well, that is half right anyway. UNLOGGED tables obey checkpoints just\n> like any other table. The missing feature is an option to\n> leaved restored the last checkpoint. Instead, not knowing whether there\n> were changes since the last checkpoint, the system truncated the relation.\n>\n> What use case is there for a behavior that the last checkpoint data is\n> left on the relation upon restarting - not knowing whether it was\n> possible the other data could have been written subsequent?\n\nYeah, this is not something that would be very easy to accomplish, \nbecause a buffer can get evicted and written to disk at any point. It \nwouldn't be too hard to read every unlogged table during recovery and \nsee if there are any pages that were written after the last checkpoint, \nbut that obviously won't be very fast.\n\nActually, I suppose we could dedicate a fork for unlogged tables and use \nthat to record the newest LSN of any page that's been written out. But \nif you have much of any write activity on the table that's probably \ngoing to be completely useless.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 15:58:43 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "Jim Nasby wrote:\n\n> Yeah, this is not something that would be very easy to accomplish, because a\n> buffer can get evicted and written to disk at any point. It wouldn't be too\n> hard to read every unlogged table during recovery and see if there are any\n> pages that were written after the last checkpoint, but that obviously won't\n> be very fast.\n\nIf you consider only tables, then yeah perhaps this is easy to\naccomplish (not really convinced myself). But if you consider indexes,\nthings are not so easy anymore.\n\n\nIn the thread from 2011 (which this started as a reply to) the OP was\ndoing frequent UPDATEs to keep track of counts of something. I think\nthat would be better served by using INSERTs of deltas and periodic\naccumulation of grouped values, as suggested in\nhttp://www.postgresql.org/message-id/[email protected]\nThis has actually been suggested many times over the years.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 18:13:12 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 4/13/15 4:13 PM, Alvaro Herrera wrote:\n> Jim Nasby wrote:\n>\n>> Yeah, this is not something that would be very easy to accomplish, because a\n>> buffer can get evicted and written to disk at any point. It wouldn't be too\n>> hard to read every unlogged table during recovery and see if there are any\n>> pages that were written after the last checkpoint, but that obviously won't\n>> be very fast.\n>\n> If you consider only tables, then yeah perhaps this is easy to\n> accomplish (not really convinced myself). But if you consider indexes,\n> things are not so easy anymore.\n\nAre indexes not guaranteed to have LSNs? I thought they basically \nfollowed the same write rules as heap pages in regard to WAL first.\n\nThough, if you have an index that doesn't support logging (like hash) \nyou're still hosed...\n\n> In the thread from 2011 (which this started as a reply to) the OP was\n\nI don't keep PGSQL emails from that far back... ;)\n\n> doing frequent UPDATEs to keep track of counts of something. I think\n> that would be better served by using INSERTs of deltas and periodic\n> accumulation of grouped values, as suggested in\n> http://www.postgresql.org/message-id/[email protected]\n> This has actually been suggested many times over the years.\n\nWhat I was suggesting certainly wouldn't help you if you were getting \nany serious amount of changes to the count.\n\nI am wondering though what the bottleneck in HEAD is with doing an \nUPDATE instead of an INSERT, at least where unlogged would help \nsignificantly. I didn't think we logged all that much more for an \nUPDATE. Heck, with HOT you might even be able to log less.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 17:38:25 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Monday, April 13, 2015, Matheus de Oliveira <[email protected]>\n> wrote:\n>\n>>\n>> On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]>\n>> wrote:\n>>\n>>> \"In the event of a normal shutdown, we can flush all the writes to disk\n>>> so we know all the data has been written, so there is no need to\n>>> truncate.\"\n>>>\n>>> Isn't possible to periodically flush data to disk and in case of crush\n>>> postgres to load only the data that existed at last flush? The periodic\n>>> flush could be configurable, for example every 30 minutes or after x rows\n>>> updated/inserted.\n>>>\n>>\n>> There is no such facility implemented for UNLOGGED TABLEs. That could be\n>> a feature request though.\n>>\n>\nOne way would be to lock dirty buffers from unlogged relations into\nshared_buffers (which hardly seems like a good thing) until the start of a\n\"super-checkpoint\" and then write them all out as fast as possible (which\nkind of defeats checkpoint_completion_target). And then if the crash\nhappened during a super-checkpoint, the data would still be inconsistent\nand need to be truncated.\n\n\n>\n>>\n> Well, that is half right anyway. UNLOGGED tables obey checkpoints just\n> like any other table.\n>\n\nDo they? I thought they only obeyed shutdown checkpoints, not online\ncheckpoints. I do remember some changes around this area, but none that\ncompletely reverted that logic.\n\n\n\n> The missing feature is an option to leaved restored the last checkpoint.\n> Instead, not knowing whether there were changes since the last checkpoint,\n> the system truncated the relation.\n>\n> What use case is there for a behavior that the last checkpoint data is\n> left on the relation upon restarting - not knowing whether it was possible\n> the other data could have been written subsequent?\n>\n\nI would like a way to have unlogged tables be available on a replica\nprovided that no changes were made to them between the pg_basebackup and\nthe recovery point.\n\nMy use case is that I mark certain read-only-after-bulk-loading tables as\nunlogged solely to avoid blowing out the log archive during the loading\nphase and refresh phase. This is stuff like vendor catalogs, NCBI\ndatasets, ChEMBL datasets, etc, which can simply be re-derived from the\nreference. It would be nice if these were still available (without having\nto repeat the ETL) after crashes provided they were not written to since a\ncheckpoint, and available on cloned test servers without having to repeat\nthe ETL on those as well.\n\nAs for \"maybe its corrupt, maybe it isn't, but lets keep them anyway\",\nyeah, I have little use for that.\n\nCheers,\n\nJeff\n\nOn Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston <[email protected]> wrote:On Monday, April 13, 2015, Matheus de Oliveira <[email protected]> wrote:On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]> wrote:\"In the event of a normal shutdown, we can flush all the writes to disk\nso we know all the data has been written, so there is no need to truncate.\"\n\nIsn't possible to periodically flush data to disk and in case of crush\npostgres to load only the data that existed at last flush? The periodic\nflush could be configurable, for example every 30 minutes or after x rows\nupdated/inserted.There is no such facility implemented for UNLOGGED TABLEs. 
That could be a feature request though.One way would be to lock dirty buffers from unlogged relations into shared_buffers (which hardly seems like a good thing) until the start of a \"super-checkpoint\" and then write them all out as fast as possible (which kind of defeats checkpoint_completion_target). And then if the crash happened during a super-checkpoint, the data would still be inconsistent and need to be truncated. Well, that is half right anyway. UNLOGGED tables obey checkpoints just like any other table. Do they? I thought they only obeyed shutdown checkpoints, not online checkpoints. I do remember some changes around this area, but none that completely reverted that logic. The missing feature is an option to leaved restored the last checkpoint. Instead, not knowing whether there were changes since the last checkpoint, the system truncated the relation.What use case is there for a behavior that the last checkpoint data is left on the relation upon restarting - not knowing whether it was possible the other data could have been written subsequent?I would like a way to have unlogged tables be available on a replica provided that no changes were made to them between the pg_basebackup and the recovery point.My use case is that I mark certain read-only-after-bulk-loading tables as unlogged solely to avoid blowing out the log archive during the loading phase and refresh phase. This is stuff like vendor catalogs, NCBI datasets, ChEMBL datasets, etc, which can simply be re-derived from the reference. It would be nice if these were still available (without having to repeat the ETL) after crashes provided they were not written to since a checkpoint, and available on cloned test servers without having to repeat the ETL on those as well. As for \"maybe its corrupt, maybe it isn't, but lets keep them anyway\", yeah, I have little use for that.Cheers,Jeff",
"msg_date": "Mon, 13 Apr 2015 16:49:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
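On PostgreSQL 9.5 and later, part of the bulk-loading use case above can be covered by switching a table's persistence after the load; the sketch below uses invented names and a hypothetical file path. ALTER TABLE ... SET LOGGED rewrites the table and WAL-logs it once, so the finished table then reaches replicas and survives crashes, while the intermediate ETL work stays out of the WAL archive:

-- Load without WAL traffic, then make the finished table durable.
CREATE UNLOGGED TABLE vendor_catalog (id bigint PRIMARY KEY, payload text);
COPY vendor_catalog FROM '/data/vendor_catalog.csv' WITH (FORMAT csv);
ALTER TABLE vendor_catalog SET LOGGED;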
{
"msg_contents": "On Mon, Apr 13, 2015 at 4:49 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Monday, April 13, 2015, Matheus de Oliveira <[email protected]>\n>> wrote:\n>>\n>>>\n>>> On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]>\n>>> wrote:\n>>>\n>>>> \"In the event of a normal shutdown, we can flush all the writes to disk\n>>>> so we know all the data has been written, so there is no need to\n>>>> truncate.\"\n>>>>\n>>>> Isn't possible to periodically flush data to disk and in case of crush\n>>>> postgres to load only the data that existed at last flush? The periodic\n>>>> flush could be configurable, for example every 30 minutes or after x\n>>>> rows\n>>>> updated/inserted.\n>>>>\n>>>\n>>> There is no such facility implemented for UNLOGGED TABLEs. That could be\n>>> a feature request though.\n>>>\n>>\n> One way would be to lock dirty buffers from unlogged relations into\n> shared_buffers (which hardly seems like a good thing) until the start of a\n> \"super-checkpoint\" and then write them all out as fast as possible (which\n> kind of defeats checkpoint_completion_target). And then if the crash\n> happened during a super-checkpoint, the data would still be inconsistent\n> and need to be truncated.\n>\n>\n>>\n>>>\n>> Well, that is half right anyway. UNLOGGED tables obey checkpoints just\n>> like any other table.\n>>\n>\n> Do they? I thought they only obeyed shutdown checkpoints, not online\n> checkpoints. I do remember some changes around this area, but none that\n> completely reverted that logic.\n>\n>\nI vaguely recall that conversation now...I'm not positive on the exact\nmechanics here and, as it pertains to the OP, the difference you describe\nis immaterial since in either case the status quo mandates an \"all or\nnothing\" approach to an unlogged table's contents.\n\n\n\n>\n>\n>> The missing feature is an option to leaved restored the last checkpoint.\n>> Instead, not knowing whether there were changes since the last checkpoint,\n>> the system truncated the relation.\n>>\n>> What use case is there for a behavior that the last checkpoint data is\n>> left on the relation upon restarting - not knowing whether it was possible\n>> the other data could have been written subsequent?\n>>\n>\n> I would like a way to have unlogged tables be available on a replica\n> provided that no changes were made to them between the pg_basebackup and\n> the recovery point.\n>\n\n> My use case is that I mark certain read-only-after-bulk-loading tables as\n> unlogged solely to avoid blowing out the log archive during the loading\n> phase and refresh phase. This is stuff like vendor catalogs, NCBI\n> datasets, ChEMBL datasets, etc, which can simply be re-derived from the\n> reference. It would be nice if these were still available (without having\n> to repeat the ETL) after crashes provided they were not written to since a\n> checkpoint, and available on cloned test servers without having to repeat\n> the ETL on those as well.\n>\n>\n\nMy gut reaction is that those should be in their own clusters and accessed\nvia postgres_fdw...\n\nThat particular use-case would probably best be served with a separate\nreplication channel which pushes data files from the primary to the slaves\nand allows for the slave to basically \"rewrite\" its existing table by\npointing to the newly supplied version. 
Some kind of \"CREATE STATIC TABLE\"\nand \"PUSH STATIC TABLE TO {all | replica name}\" command combo...though\nideally with less manual intervention...\n\nDavid J.\n\nOn Mon, Apr 13, 2015 at 4:49 PM, Jeff Janes <[email protected]> wrote:On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston <[email protected]> wrote:On Monday, April 13, 2015, Matheus de Oliveira <[email protected]> wrote:On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <[email protected]> wrote:\"In the event of a normal shutdown, we can flush all the writes to disk\nso we know all the data has been written, so there is no need to truncate.\"\n\nIsn't possible to periodically flush data to disk and in case of crush\npostgres to load only the data that existed at last flush? The periodic\nflush could be configurable, for example every 30 minutes or after x rows\nupdated/inserted.There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.One way would be to lock dirty buffers from unlogged relations into shared_buffers (which hardly seems like a good thing) until the start of a \"super-checkpoint\" and then write them all out as fast as possible (which kind of defeats checkpoint_completion_target). And then if the crash happened during a super-checkpoint, the data would still be inconsistent and need to be truncated. Well, that is half right anyway. UNLOGGED tables obey checkpoints just like any other table. Do they? I thought they only obeyed shutdown checkpoints, not online checkpoints. I do remember some changes around this area, but none that completely reverted that logic.I vaguely recall that conversation now...I'm not positive on the exact mechanics here and, as it pertains to the OP, the difference you describe is immaterial since in either case the status quo mandates an \"all or nothing\" approach to an unlogged table's contents. The missing feature is an option to leaved restored the last checkpoint. Instead, not knowing whether there were changes since the last checkpoint, the system truncated the relation.What use case is there for a behavior that the last checkpoint data is left on the relation upon restarting - not knowing whether it was possible the other data could have been written subsequent?I would like a way to have unlogged tables be available on a replica provided that no changes were made to them between the pg_basebackup and the recovery point.My use case is that I mark certain read-only-after-bulk-loading tables as unlogged solely to avoid blowing out the log archive during the loading phase and refresh phase. This is stuff like vendor catalogs, NCBI datasets, ChEMBL datasets, etc, which can simply be re-derived from the reference. It would be nice if these were still available (without having to repeat the ETL) after crashes provided they were not written to since a checkpoint, and available on cloned test servers without having to repeat the ETL on those as well. My gut reaction is that those should be in their own clusters and accessed via postgres_fdw...That particular use-case would probably best be served with a separate replication channel which pushes data files from the primary to the slaves and allows for the slave to basically \"rewrite\" its existing table by pointing to the newly supplied version. Some kind of \"CREATE STATIC TABLE\" and \"PUSH STATIC TABLE TO {all | replica name}\" command combo...though ideally with less manual intervention...David J.",
"msg_date": "Mon, 13 Apr 2015 17:32:20 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
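A minimal sketch of the behaviour under discussion, using a made-up cache table (the table and column names are illustrative only): writes to an UNLOGGED table generate no WAL, a clean shutdown keeps the rows, and crash recovery resets the table to empty.

CREATE UNLOGGED TABLE page_cache (
    cache_key  text PRIMARY KEY,
    payload    text,
    created_at timestamptz DEFAULT now()
);

-- No WAL is written for this insert, so it cannot be replayed after a crash.
INSERT INTO page_cache (cache_key, payload)
VALUES ('user:42:profile', 'serialized page fragment');

-- Clean shutdown/restart: the rows survive.
-- Crash recovery: page_cache comes back empty, even if a checkpoint ran
-- after the INSERT -- which is exactly the gap debated in this thread.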
{
"msg_contents": "On 4/13/15 7:32 PM, David G. Johnston wrote:\n> The missing feature is an option to leaved restored the last\n> checkpoint. Instead, not knowing whether there were changes\n> since the last checkpoint, the system truncated the relation.\n>\n> What use case is there for a behavior that the last checkpoint\n> data is left on the relation upon restarting - not knowing\n> whether it was possible the other data could have been written\n> subsequent?\n>\n>\n> I would like a way to have unlogged tables be available on a replica\n> provided that no changes were made to them between the pg_basebackup\n> and the recovery point.\n>\n>\n> My use case is that I mark certain read-only-after-bulk-loading\n> tables as unlogged solely to avoid blowing out the log archive\n> during the loading phase and refresh phase. This is stuff like\n> vendor catalogs, NCBI datasets, ChEMBL datasets, etc, which can\n> simply be re-derived from the reference. It would be nice if these\n> were still available (without having to repeat the ETL) after\n> crashes provided they were not written to since a checkpoint, and\n> available on cloned test servers without having to repeat the ETL on\n> those as well.\n>\n>\n> My gut reaction is that those should be in their own clusters and\n> accessed via postgres_fdw...\n\nLikely to produce really crappy plans if the tables are of any real size...\n\n> That particular use-case would probably best be served with a separate\n> replication channel which pushes data files from the primary to the\n> slaves and allows for the slave to basically \"rewrite\" its existing\n> table by pointing to the newly supplied version. Some kind of \"CREATE\n> STATIC TABLE\" and \"PUSH STATIC TABLE TO {all | replica name}\" command\n> combo...though ideally with less manual intervention...\n\nYou still have the same problem of knowing if someone has scribbled on \nthe data since the last checkpoint.\n\nThere's been recent discussion of adding support for read-only tables. \nIf we had those, we might be able to support something like...\n\nINSERT INTO unlogged;\nALTER TABLE unlogged SET READ ONLY;\nCHECKPOINT;\n/* take backup */\n\nThis should be safe as long as we WAL log changes to read-only status \n(which presumably we would).\n\nHow much work that would entail though, I don't know.\n\nUltimately you still have to get the data over to the other machine \nanyway. ISTM it'd be a LOT more useful to look at ways to make the WAL \nlogging of bulk inserts (and especially COPY into a known empty table) a \nlot more efficient.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Apr 2015 21:45:21 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
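For reference, a later release partly addressed this load-then-harden workflow: PostgreSQL 9.5 added ALTER TABLE ... SET LOGGED / SET UNLOGGED. A rough sketch follows (the table name and COPY source path are placeholders); note that SET LOGGED writes the whole table to WAL at conversion time, so it defers rather than removes the archive volume for a pure bulk load.

CREATE UNLOGGED TABLE vendor_catalog (
    sku      text PRIMARY KEY,
    payload  jsonb
);

-- Bulk load with no WAL generated for the table data.
COPY vendor_catalog FROM '/tmp/vendor_catalog.csv' WITH (FORMAT csv);

-- Convert to a regular (crash-safe, replicated) table.
-- The conversion itself WAL-logs the table contents.
ALTER TABLE vendor_catalog SET LOGGED;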
{
"msg_contents": "On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby <[email protected]> wrote:\n\n> On 4/13/15 7:32 PM, David G. Johnston wrote:\n>\n> That particular use-case would probably best be served with a separate\n>> replication channel which pushes data files from the primary to the\n>> slaves and allows for the slave to basically \"rewrite\" its existing\n>> table by pointing to the newly supplied version. Some kind of \"CREATE\n>> STATIC TABLE\" and \"PUSH STATIC TABLE TO {all | replica name}\" command\n>> combo...though ideally with less manual intervention...\n>>\n>\n> You still have the same problem of knowing if someone has scribbled on the\n> data since the last checkpoint.\n>\n\nThat seems like an automation concern though...the more limited idea was\nto simply have a means for a table to exist on the master and allow the\nuser to cause an exact copy of that table to appear on a replica via direct\ndata transfer (i.e., without need to create a backup/dump). If the table\nalready exists on the replica the existing version remains as-is until the\nnew table is fully push and then a filenode pointer update happens. If\nchanges are made to the master the two tables will remain diverged until a\nnew push occurs.\n\nI imaging this same idea could be handled external to the database though\nI'm don't know enough to comment on the specific technical merits of each.\n\n\n> There's been recent discussion of adding support for read-only tables. If\n> we had those, we might be able to support something like...\n>\n> INSERT INTO unlogged;\n> ALTER TABLE unlogged SET READ ONLY;\n> CHECKPOINT;\n> /* take backup */\n>\n> This should be safe as long as we WAL log changes to read-only status\n> (which presumably we would).\n>\n> How much work that would entail though, I don't know.\n>\n> Ultimately you still have to get the data over to the other machine\n> anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL\n> logging of bulk inserts (and especially COPY into a known empty table) a\n> lot more efficient.\n>\n>\nJeff Janes makes a comment about wanting \"...to avoid blowing out the log\narchive...\"; which I also don't quite follow...\n\nWAL does seem to be designed to solve a different problem that what is\ndescribed here - lots of small changes versus few large changes. Improving\nWAL to move the size at which small becomes large is a win but another\nchannel designed for few large changes may be less complex to implement.\nThe current work in logical replication likely has merit here as well but\nmy familiarity with that technology is fairly limited.\n\nDavid J.\n\nOn Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby <[email protected]> wrote:On 4/13/15 7:32 PM, David G. Johnston wrote:\n\n\nThat particular use-case would probably best be served with a separate\nreplication channel which pushes data files from the primary to the\nslaves and allows for the slave to basically \"rewrite\" its existing\ntable by pointing to the newly supplied version. Some kind of \"CREATE\nSTATIC TABLE\" and \"PUSH STATIC TABLE TO {all | replica name}\" command\ncombo...though ideally with less manual intervention...\n\n\nYou still have the same problem of knowing if someone has scribbled on the data since the last checkpoint.That seems like an automation concern though...the more limited idea was to simply have a means for a table to exist on the master and allow the user to cause an exact copy of that table to appear on a replica via direct data transfer (i.e., without need to create a backup/dump). 
If the table already exists on the replica the existing version remains as-is until the new table is fully push and then a filenode pointer update happens. If changes are made to the master the two tables will remain diverged until a new push occurs.I imaging this same idea could be handled external to the database though I'm don't know enough to comment on the specific technical merits of each.\n\nThere's been recent discussion of adding support for read-only tables. If we had those, we might be able to support something like...\n\nINSERT INTO unlogged;\nALTER TABLE unlogged SET READ ONLY;\nCHECKPOINT;\n/* take backup */\n\nThis should be safe as long as we WAL log changes to read-only status (which presumably we would).\n\nHow much work that would entail though, I don't know.\n\nUltimately you still have to get the data over to the other machine anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL logging of bulk inserts (and especially COPY into a known empty table) a lot more efficient.Jeff Janes makes a comment about wanting \"...to avoid blowing out the log archive...\"; which I also don't quite follow...WAL does seem to be designed to solve a different problem that what is described here - lots of small changes versus few large changes. Improving WAL to move the size at which small becomes large is a win but another channel designed for few large changes may be less complex to implement. The current work in logical replication likely has merit here as well but my familiarity with that technology is fairly limited.David J.",
"msg_date": "Mon, 13 Apr 2015 20:28:31 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On 2015-04-13 17:49, Jeff Janes wrote:\n> \n> One way would be to lock dirty buffers from unlogged relations into\n> shared_buffers (which hardly seems like a good thing) until the start of a\n> \"super-checkpoint\" and then write them all out as fast as possible (which kind\n> of defeats checkpoint_completion_target). And then if the crash happened\n> during a super-checkpoint, the data would still be inconsistent and need to be\n> truncated.\n> \n\nWhat do you call a \"super-checkpoint\"?\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 Apr 2015 09:41:38 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "David G Johnston wrote\n> Well, that is half right anyway. UNLOGGED tables obey checkpoints just\n> like any other table. The missing feature is an option to leaved restored\n> the last checkpoint. Instead, not knowing whether there were changes\n> since the last checkpoint, the system truncated the relation.\n> \n> What use case is there for a behavior that the last checkpoint data is\n> left on the relation upon restarting - not knowing whether it was possible\n> the other data could have been written subsequent?\n\nIf is possible to restore the table at last checkpoint state that will be\nmore than enough. I don't care about the changes since last checkpoint, I am\nwilling to lose those changes. There are use cases where is acceptable to\nlose some data, for example in a cache system, it is not a big issue if we\nlose some cached data. \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/unlogged-tables-tp4985453p5845650.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 Apr 2015 08:56:10 -0700 (MST)",
"msg_from": "dgabriel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On Tue, Apr 14, 2015 at 8:41 AM, Yves Dorfsman <[email protected]> wrote:\n\n> On 2015-04-13 17:49, Jeff Janes wrote:\n> >\n> > One way would be to lock dirty buffers from unlogged relations into\n> > shared_buffers (which hardly seems like a good thing) until the start of\n> a\n> > \"super-checkpoint\" and then write them all out as fast as possible\n> (which kind\n> > of defeats checkpoint_completion_target). And then if the crash happened\n> > during a super-checkpoint, the data would still be inconsistent and need\n> to be\n> > truncated.\n> >\n>\n> What do you call a \"super-checkpoint\"?\n>\n\nA hypothetical checkpoint which includes writing and flushing pages of\nunlogged tables.\n\nPresumably you wouldn't want every checkpoint to do this, because if done\nthe way I described the super-checkpoint is a vulnerable period. Crashes\nthat happen during it would result in truncation of the unlogged relation.\nSince that is the very thing we want to avoid, you would want to make these\nvulnerable periods rare.\n\nCheers,\n\nJeff\n\nOn Tue, Apr 14, 2015 at 8:41 AM, Yves Dorfsman <[email protected]> wrote:On 2015-04-13 17:49, Jeff Janes wrote:\n>\n> One way would be to lock dirty buffers from unlogged relations into\n> shared_buffers (which hardly seems like a good thing) until the start of a\n> \"super-checkpoint\" and then write them all out as fast as possible (which kind\n> of defeats checkpoint_completion_target). And then if the crash happened\n> during a super-checkpoint, the data would still be inconsistent and need to be\n> truncated.\n>\n\nWhat do you call a \"super-checkpoint\"?A hypothetical checkpoint which includes writing and flushing pages of unlogged tables.Presumably you wouldn't want every checkpoint to do this, because if done the way I described the super-checkpoint is a vulnerable period. Crashes that happen during it would result in truncation of the unlogged relation. Since that is the very thing we want to avoid, you would want to make these vulnerable periods rare.Cheers,Jeff",
"msg_date": "Tue, 14 Apr 2015 09:58:46 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
{
"msg_contents": "On Mon, Apr 13, 2015 at 8:28 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby <[email protected]>\n> wrote:\n>\n>>\n>>\n>> There's been recent discussion of adding support for read-only tables. If\n>> we had those, we might be able to support something like...\n>>\n>> INSERT INTO unlogged;\n>> ALTER TABLE unlogged SET READ ONLY;\n>> CHECKPOINT;\n>> /* take backup */\n>>\n>> This should be safe as long as we WAL log changes to read-only status\n>> (which presumably we would).\n>>\n>> How much work that would entail though, I don't know.\n>>\n>\nRight. I've been keeping an eye on that discussion with the same\nintention. The big question is how, during recovery, does it know what\nstate the table was in without being able to read from the system\ncatalogs? Perhaps it would be the checkpointer's duty at the end of the\ncheckpoint to remove the init fork for unlogged relations which were turned\nto read only before that checkpoint started.\n\n\n>\n>> Ultimately you still have to get the data over to the other machine\n>> anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL\n>> logging of bulk inserts (and especially COPY into a known empty table) a\n>> lot more efficient.\n>>\n>>\n> Jeff Janes makes a comment about wanting \"...to avoid blowing out the\n> log archive...\"; which I also don't quite follow...\n>\n\nI think the WAL logging of bulk COPY is pretty space-efficient already,\nprovided it is not indexed at the time of the COPY. But no amount of\nefficiency improvement is going to make them small enough for me want to\nkeep the WAL logs around beyond the next base backup.\n\nWhat I would really want is a way to make two separate WAL streams; changes\nto this set of tables goes to the \"keep forever, for PITR\" stream, and\nchanges to this other set of tables go to the \"keep until pg_basebackup is\nnext run\" stream. Of course you couldn't have fk constraints between the\ntwo different sets of tables.\n\nHaving to get the data over to the other machine doesn't bother me, it is\njust a question of how to do it without permanently intermingling it with\nWAL logs which I want to keep forever.\n\nThe FDW would be a good option, except the overhead (both execution\noverhead and the overhead of poor plans) seems to be too large. I haven't\nexplored it as much as I would like.\n\nCheers,\n\nJeff\n\nOn Mon, Apr 13, 2015 at 8:28 PM, David G. Johnston <[email protected]> wrote:On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby <[email protected]> wrote:\n\nThere's been recent discussion of adding support for read-only tables. If we had those, we might be able to support something like...\n\nINSERT INTO unlogged;\nALTER TABLE unlogged SET READ ONLY;\nCHECKPOINT;\n/* take backup */\n\nThis should be safe as long as we WAL log changes to read-only status (which presumably we would).\n\nHow much work that would entail though, I don't know.Right. I've been keeping an eye on that discussion with the same intention. The big question is how, during recovery, does it know what state the table was in without being able to read from the system catalogs? Perhaps it would be the checkpointer's duty at the end of the checkpoint to remove the init fork for unlogged relations which were turned to read only before that checkpoint started. \n\nUltimately you still have to get the data over to the other machine anyway. 
ISTM it'd be a LOT more useful to look at ways to make the WAL logging of bulk inserts (and especially COPY into a known empty table) a lot more efficient.Jeff Janes makes a comment about wanting \"...to avoid blowing out the log archive...\"; which I also don't quite follow...I think the WAL logging of bulk COPY is pretty space-efficient already, provided it is not indexed at the time of the COPY. But no amount of efficiency improvement is going to make them small enough for me want to keep the WAL logs around beyond the next base backup.What I would really want is a way to make two separate WAL streams; changes to this set of tables goes to the \"keep forever, for PITR\" stream, and changes to this other set of tables go to the \"keep until pg_basebackup is next run\" stream. Of course you couldn't have fk constraints between the two different sets of tables.Having to get the data over to the other machine doesn't bother me, it is just a question of how to do it without permanently intermingling it with WAL logs which I want to keep forever.The FDW would be a good option, except the overhead (both execution overhead and the overhead of poor plans) seems to be too large. I haven't explored it as much as I would like.Cheers,Jeff",
"msg_date": "Tue, 14 Apr 2015 10:37:19 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
},
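For anyone wanting to evaluate the postgres_fdw route mentioned above before dismissing it on overhead grounds, a minimal sketch (the server name, connection options, credentials, and column list are placeholders):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER refdata_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'refdata.internal', dbname 'refdata', use_remote_estimate 'true');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER refdata_srv
    OPTIONS (user 'app', password 'secret');

CREATE FOREIGN TABLE chembl_compounds (
    chembl_id  text,
    structure  text
) SERVER refdata_srv OPTIONS (table_name 'chembl_compounds');

-- Local statistics on the foreign table reduce the "poor plans" problem
-- Jeff mentions; use_remote_estimate (set above) is the other common knob.
ANALYZE chembl_compounds;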
{
"msg_contents": "On 4/14/15 10:56 AM, dgabriel wrote:\n> David G Johnston wrote\n>> Well, that is half right anyway. UNLOGGED tables obey checkpoints just\n>> like any other table. The missing feature is an option to leaved restored\n>> the last checkpoint. Instead, not knowing whether there were changes\n>> since the last checkpoint, the system truncated the relation.\n>>\n>> What use case is there for a behavior that the last checkpoint data is\n>> left on the relation upon restarting - not knowing whether it was possible\n>> the other data could have been written subsequent?\n>\n> If is possible to restore the table at last checkpoint state that will be\n> more than enough. I don't care about the changes since last checkpoint, I am\n> willing to lose those changes. There are use cases where is acceptable to\n> lose some data, for example in a cache system, it is not a big issue if we\n> lose some cached data.\n\nIt is not. Unless you ensure that data is written to WAL (on disk) \nBEFORE it is written to the data pages, you will probably have \ncorruption after a crash, and have no way to prevent or possibly even \ndetect the corruption.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 Apr 2015 15:34:27 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unlogged tables"
}
] |
[
{
"msg_contents": "Hey guys,\n\nI've been running some tests while setting up some tiered storage, and I \nnoticed something. Even having an empty 'echo' as archive_command \ndrastically slows down certain operations. For instance:\n\n=> ALTER TABLE foo SET TABLESPACE slow_tier;\nALTER TABLE\nTime: 3969.962 ms\n\nWhen I set archive_command to anything:\n\n=> ALTER TABLE foo SET TABLESPACE slow_tier;\nALTER TABLE\nTime: 11969.962 ms\n\nI'm guessing it has something to do with the forking code, but I haven't \ndug into it very deeply yet.\n\nI remembered seeing incrond as a way to grab file triggers, and did some \ntests with an incrontab of this:\n\n/db/wal/ IN_CLOSE_WRITE cp -a $@/$# /db/archive/$#\n\nSure enough, files don't appear there until PG closes them after \nwriting. The background writing also doesn't appear to affect speed of \nmy test command.\n\nSo my real question: is this safe? Supposedly the trigger only gets \nactivated when the xlog file is closed, which only the PG daemon should \nbe doing. I was only testing, so I didn't add a 'test -f' command to \nprevent overwriting existing archives, but I figured... why bother if \nthere's no future there?\n\nI'd say tripling the latency for some database writes is a pretty \nsignificant difference, though. I'll defer to the experts in case this \nis sketchy. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Fri, 11 Nov 2011 16:21:18 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using incrond for archiving"
},
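A couple of notes for comparison with the incrond experiment above. The slowdown observed for ALTER TABLE ... SET TABLESPACE is likely not the fork of the archive command itself: once archive_mode is on (wal_level above minimal), operations such as SET TABLESPACE and COPY into a freshly created table can no longer skip WAL and must log the full data, which easily accounts for a 3s-to-12s difference. Separately, the conventional archiving setup keeps archive_command cheap and refuses to overwrite an existing archive file, along the lines of the documented example (the destination directory here is a placeholder):

archive_mode = on
archive_command = 'test ! -f /db/archive/%f && cp %p /db/archive/%f'

With this form the command runs once per completed 16MB WAL segment, not per transaction, so the per-invocation cost is one cheap test plus one cp.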
{
"msg_contents": "On 11/11/2011 04:21 PM, Shaun Thomas wrote:\n\n> So my real question: is this safe?\n\nSo to answer my own question: no, it's not safe. The PG backends \napparently write to the xlog files periodically and *close* them after \ndoing so. There's no open filehandle that gets written until the file is \nfull and switched to the next one.\n\nKnowing that, I used pg_xlogfile_name(pg_current_xlog_location()) to get \nthe most recent xlog file, and wrote a small script incrond would \nlaunch. In the script, it gets the current xlog, and will refuse to copy \nthat one.\n\nWhat I don't quite understand is that after calling pg_xlog_switch(), it \nwill sometimes still write to an old xlog several minutes later. Here's \nan example (0069 is the current xlog):\n\n2011-11-14 15:05:01 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:05:06 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:05:20 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:06:01 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:06:06 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:06:06 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:06:37 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:06:58 : 0000000200000ED500000045 : xlog : copying\n2011-11-14 15:07:01 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:07:06 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:07:08 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:07:20 : 0000000200000ED500000064 : xlog : copying\n2011-11-14 15:07:24 : 0000000200000ED500000014 : xlog : copying\n2011-11-14 15:07:39 : 0000000200000ED500000069 : xlog : too current\n2011-11-14 15:07:45 : 0000000200000ED500000061 : xlog : copying\n2011-11-14 15:08:01 : 0000000200000ED500000069 : xlog : too current\n\nWhy on earth is it sending IN_CLOSE_WRITE messages for 0014, 1145, and \n0061? Is that just old threads closing their old filehandles after they \nrealize they can't write to that particular xlog? Either way, adding \nlsof or (ironically much faster pg_current_xlog_location) checking for \nthe most recent xlog to ignore, I can \"emulate\" PG archive mode using an \nasynchronous background process.\n\nOn another note, watching kernel file IO messages is quite fascinating.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 14 Nov 2011 15:33:35 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using incrond for archiving"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> Why on earth is it sending IN_CLOSE_WRITE messages for 0014, 1145,\n> and 0061?\n \nThis sounds like it might be another manifestation of something that\nconfused me a while back:\n \nhttp://archives.postgresql.org/pgsql-hackers/2009-11/msg01754.php\nhttp://archives.postgresql.org/pgsql-hackers/2009-12/msg00060.php\n \n-Kevin\n",
"msg_date": "Mon, 14 Nov 2011 15:47:00 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using incrond for archiving"
},
{
"msg_contents": "On 11/14/2011 03:47 PM, Kevin Grittner wrote:\n\n> This sounds like it might be another manifestation of something that\n> confused me a while back:\n>\n> http://archives.postgresql.org/pgsql-hackers/2009-11/msg01754.php\n> http://archives.postgresql.org/pgsql-hackers/2009-12/msg00060.php\n\nInteresting. That was probably the case for a couple of the older xlogs. \nI'm not sure what to think about the ones that were *not* deleted and \nstill being \"closed\" occasionally. I'm just going to chalk it up to \nconnection turnover, since it seems normal for multiple connections to \nhave connections open to various transaction logs, even it they're not \nwriting to them.\n\nI can handle the checks to pg_current_xlog_location so long as it's \naccurate. If pre-rotated transaction logs are still being written to, \nit's back to the drawing board for me. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 14 Nov 2011 15:58:18 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using incrond for archiving"
}
] |
[
{
"msg_contents": "I know there were a lot of performance issues with ext4, but i don't know the state of it now.\nI have a private openstreetmap server installed on a ubuntu 11.10 64bit pc with both partitions (/ and /home) formated with ext4. My problem is that the server works very slow.\n\nI know there were a lot of performance issues with ext4, but i don't know the state of it now.I have a private openstreetmap server installed on a ubuntu 11.10 64bit pc with both partitions (/ and /home) formated with ext4. My problem is that the server works very slow.",
"msg_date": "Mon, 14 Nov 2011 02:00:47 -0800 (PST)",
"msg_from": "Alexandru <[email protected]>",
"msg_from_op": true,
"msg_subject": "What's the state of postgresql on ext4 now?"
},
{
"msg_contents": "> My problem is that the server works very slow.\n\nSomeone may chime in with general advice, but for more details, can\nyou be more specific? E.g.,\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Mon, 14 Nov 2011 23:32:34 -0800",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
},
{
"msg_contents": "On 11/14/11 2:00 AM, Alexandru wrote:\n> I know there were a lot of performance issues with ext4, but i don't know the state of it now.\n> I have a private openstreetmap server installed on a ubuntu 11.10 64bit pc with both partitions (/ and /home) formated with ext4. My problem is that the server works very slow.\n\nWhy would you assume this has anything to do with Ext4? Other\nconfiguration issues seem more likely.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 15 Nov 2011 12:48:33 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
},
{
"msg_contents": "On Tue, Nov 15, 2011 at 1:48 PM, Josh Berkus <[email protected]> wrote:\n> On 11/14/11 2:00 AM, Alexandru wrote:\n>> I know there were a lot of performance issues with ext4, but i don't know the state of it now.\n>> I have a private openstreetmap server installed on a ubuntu 11.10 64bit pc with both partitions (/ and /home) formated with ext4. My problem is that the server works very slow.\n>\n> Why would you assume this has anything to do with Ext4? Other\n> configuration issues seem more likely.\n\nI think that about a year or so ago the linux kernel finally started\nmaking ext4 obey barriers and / or fsync properly, and the performance\nfell back into line with other file systems that also obeyed fsync.\nCombined with a rather histrionic report from phoronix many people\nwere led to believe that ext4 suddenly became very slow when it fact\nit just stopped being artificially fast.\n\nJust a guess tho.\n",
"msg_date": "Tue, 15 Nov 2011 13:54:07 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
},
{
"msg_contents": "On 11/14/2011 05:00 AM, Alexandru wrote:\n> I know there were a lot of performance issues with ext4, but i don't \n> know the state of it now.\n> I have a private openstreetmap server installed on a ubuntu 11.10 \n> 64bit pc with both partitions (/ and /home) formated with ext4. My \n> problem is that the server works very slow.\n\nThe only performance issue with ext4 is that it is careful to flush data \nto disk when the database asks it to. That is slow on most hard drives, \nand it wasn't done correctly by ext3 on older Linux kernels. So to many \npeople this looked like a performance problem, when it was actually a \nreliability improvement as far as PostgreSQL is concerned. The reliable \nbehavior just takes longer. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more information \nabout why this is important. Reports about ext4 being slower by sources \nlike Phoronix were very misinformed about what was going on.\n\nWhen you turn off the synchronous_commit parameter in the \npostgresql.conf, the database will stop asking the filesystem to ensure \nthings are on disk this way. You can lose some data in the event of a \ncrash, but things will be faster. Try your system with that \nconfiguration change. If it gets rid of what you see as a performance \nproblem, then the reliability changes made going from ext3 to ext4 are \nyour problem. If instead the server is still slow to you even with \nsynchronous_commit disabled, whatever is happening is unlikely to be \ncaused by the ext4 changes. In just about every other way but commit \nperformance, ext4 is faster than most other filesystems.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n\n\n\n\n\nOn 11/14/2011 05:00 AM, Alexandru wrote:\n\n\n\nI know there were a lot of performance issues with ext4, but i\ndon't know the state of it now.\nI have a private openstreetmap server installed on a ubuntu\n11.10 64bit pc with both partitions (/ and /home) formated with ext4.\nMy problem is that the server works very slow.\n\n\n\n\nThe only performance issue with ext4 is that it is careful to flush\ndata to disk when the database asks it to. That is slow on most hard\ndrives, and it wasn't done correctly by ext3 on older Linux kernels. \nSo to many people this looked like a performance problem, when it was\nactually a reliability improvement as far as PostgreSQL is concerned. \nThe reliable behavior just takes longer. See\nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more information\nabout why this is important. Reports about ext4 being slower by\nsources like Phoronix were very misinformed about what was going on.\n\nWhen you turn off the synchronous_commit parameter in the\npostgresql.conf, the database will stop asking the filesystem to ensure\nthings are on disk this way. You can lose some data in the event of a\ncrash, but things will be faster. Try your system with that\nconfiguration change. If it gets rid of what you see as a performance\nproblem, then the reliability changes made going from ext3 to ext4 are\nyour problem. If instead the server is still slow to you even with\nsynchronous_commit disabled, whatever is happening is unlikely to be\ncaused by the ext4 changes. In just about every other way but commit\nperformance, ext4 is faster than most other filesystems.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us",
"msg_date": "Tue, 15 Nov 2011 22:48:07 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
},
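One nuance worth adding: synchronous_commit is a user-settable parameter, so the relaxed durability can be scoped to just the bulk-load session (for example an osm2pgsql import) rather than the whole server. A sketch:

-- Only this session gives up the flush-on-commit guarantee;
-- other sessions keep full durability.
SET synchronous_commit TO off;
-- ... run the import / heavy write workload here ...
RESET synchronous_commit;

-- It can also be limited to a single transaction:
BEGIN;
SET LOCAL synchronous_commit TO off;
-- ... statements ...
COMMIT;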
{
"msg_contents": "On Tue, Nov 15, 2011 at 10:48 PM, Greg Smith <[email protected]> wrote:\n\n> In just about every other way\n> but commit performance, ext4 is faster than most other filesystems.\n\nAs someone who is looked at as an expert and knowledgable my many of\nus, are you getting to the point of migrating large XFS filesystems to\next4 for production databases yet? Or at least using ext4 in new\nlarge-scale filesystems for PG?\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Tue, 15 Nov 2011 22:54:56 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
},
{
"msg_contents": "On Tue, Nov 15, 2011 at 8:48 PM, Greg Smith <[email protected]> wrote:\n> When you turn off the synchronous_commit parameter in the postgresql.conf,\n> the database will stop asking the filesystem to ensure things are on disk\n> this way. You can lose some data in the event of a crash, but things will\n> be faster.\n\nAn important bit here is that unlike turning fsync off, your\nfilesystem and database will still be coherent after a power loss\nevent. so it's semi-safe, in that you won't be recovering your whole\ndb in the event of a power loss / crash.\n\n> unlikely to be caused by the ext4 changes. In just about every other way\n> but commit performance, ext4 is faster than most other filesystems.\n\nOn fast hardware, ext4 is a good performer overall and comes within a\npretty close reach of the other fast file systems. And since it's in\nthe mainline kernel and used by lots of distros, it gets a lot of real\nworld testing and bug fixing to boot. I'm with you, if there's a real\nperformance problem I'd suspect something other than ext4.\n",
"msg_date": "Tue, 15 Nov 2011 20:55:17 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
},
{
"msg_contents": "On 11/15/2011 10:54 PM, Aidan Van Dyk wrote:\n> are you getting to the point of migrating large XFS filesystems to\n> ext4 for production databases yet? Or at least using ext4 in new\n> large-scale filesystems for PG?\n> \n\nNot really. Last time I checked (a few months ago), there were still \nsome scary looking bugs in ext4 that concerned me, and I've seen one \ndifficult to explain bit of corruption on it. Lots of these bugs had \nbeen fixed in the mainline Linux kernel, but I wasn't sure they'd all \nbeen backported to the sort of distributions my customers run. I expect \nto re-evaluate RHEL6 to see how it's doing soon, just haven't had \ntime/requests for it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 16 Nov 2011 01:51:15 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the state of postgresql on ext4 now?"
}
] |
[
{
"msg_contents": "Hi, running Postgres 9.1.1 on an EC2 m1.xlarge instance. Machine is a\ndedicated master with 2 streaming replication nodes.\n\nThe machine has 16GB of RAM and 4 cores.\n\nWe're starting to see some slow queries, especially COMMITs that are\nhappening more frequently. The slow queries are against seemingly\nwell-indexed tables.\n\nI have log_min_duration = 150ms\n\nSlow commits like:\n\n2011-11-14 17:47:11 UTC pid:14366 (44/0-0) LOG: duration: 3062.784 ms\n statement: COMMIT\n2011-11-14 17:47:11 UTC pid:14604 (48/0-0) LOG: duration: 2593.351 ms\n statement: COMMIT\n\nThese slow COMMITs are against tables that received a large number of\nUPDATEs and are growing fairly rapidly.\n\nAnd slow queries like:\n\n2011-11-14 17:50:20 UTC pid:6519 (16/0-0) LOG: duration: 1694.456 ms\nstatement: SELECT \"facebook_wall_posts\".* FROM \"facebook_wall_posts\"\nWHERE \"facebook_wall_posts\".\"token\" =\n'984c44e75975b224b38197cf8f8fc76a' LIMIT 1\n\nquery plan: http://explain.depesz.com/s/wbm\nThe # of rows in facebook_wall_posts is 5841\n\nStructure of facebook_wall_posts:\n Table \"public.facebook_wall_posts\"\n Column | Type |\n Modifiers\n--------------------+-----------------------------+------------------------------------------------------------------\n id | integer | not null default\nnextval('facebook_wall_posts_id_seq'::regclass)\n album_id | integer | not null\n token | character varying(32) | not null\n fb_recipient_id | character varying(64) | not null\n post_id | character varying(100) | not null\n fb_post_created_at | timestamp without time zone |\n data | text |\n created_at | timestamp without time zone |\n updated_at | timestamp without time zone |\n fb_post_deleted_at | timestamp without time zone |\nIndexes:\n \"facebook_wall_posts_pkey\" PRIMARY KEY, btree (id)\n \"index_facebook_wall_posts_on_token\" UNIQUE, btree (token)\n \"index_facebook_wall_posts_on_album_id\" btree (album_id)\n\n\nAnd another slow query:\n\n2011-11-14 17:52:44 UTC pid:14912 (58/0-0) LOG: duration: 979.740 ms\nstatement: SELECT \"facebook_friends\".\"friend_id\" FROM\n\"facebook_friends\" WHERE \"facebook_friends\".\"user_id\" = 9134671\n\nQuery plan: http://explain.depesz.com/s/x1q\n# of rows in facebook_friends is 27075088\n\nStructure of facebook_friends:\n Table \"public.facebook_friends\"\n Column | Type |\nModifiers\n------------+-----------------------------+---------------------------------------------------------------\n id | integer | not null default\nnextval('facebook_friends_id_seq'::regclass)\n user_id | integer | not null\n friend_id | integer | not null\n created_at | timestamp without time zone |\nIndexes:\n \"facebook_friends_pkey\" PRIMARY KEY, btree (id)\n \"index_facebook_friends_on_user_id_and_friend_id\" UNIQUE, btree\n(user_id, friend_id)\n\nWe have auto-vacuum enabled and running. But yesterday I manually ran\nvacuum on the database. 
Autovacuum settings:\n\nautovacuum | on\nautovacuum_analyze_scale_factor | 0.1\nautovacuum_analyze_threshold | 50\nautovacuum_freeze_max_age | 200000000\nautovacuum_max_workers | 3\nautovacuum_naptime | 60\nautovacuum_vacuum_cost_delay | 20\nautovacuum_vacuum_cost_limit | -1\nautovacuum_vacuum_scale_factor | 0.2\nautovacuum_vacuum_threshold | 50\n\nother postgresql.conf settings:\n\nshared_buffers = 3584MB\nwal_buffers = 16MB\ncheckpoint_segments = 32\nmax_wal_senders = 10\ncheckpoint_completion_target = 0.9\nwal_keep_segments = 1024\nmaintenance_work_mem = 256MB\nwork_mem = 88MB\nshared_buffers = 3584MB\neffective_cache_size = 10GB\n\nThe PGDATA dir is a RAID10 on 4 local (\"ephemeral\" in EC2 speak)\ndrives. I ran some dd tests and received the following output:\n\n--- WRITING ---\nroot@sql03:/data# time sh -c \"dd if=/dev/zero of=/data/tmp/bigfile\nbs=8k count=4000000 && sync\"\n4000000+0 records in\n4000000+0 records out\n32768000000 bytes (33 GB) copied, 670.663 s, 48.9 MB/s\n\nreal\t11m52.199s\nuser\t0m2.720s\nsys\t0m45.330s\n\n\n--- READING ---\nroot@sql03:/data# time dd of=/dev/zero if=/data/tmp/bigfile bs=8k\n4000000+0 records in\n4000000+0 records out\n32768000000 bytes (33 GB) copied, 155.429 s, 211 MB/s\n\nreal\t2m35.434s\nuser\t0m2.400s\nsys\t0m33.160s\n\n\nI have enabled log_checkpoints and here is a recent sample from the log:\n\n2011-11-14 17:38:48 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n15121 buffers (3.3%); 0 transaction log file(s) added, 0 removed, 8\nrecycled; write=270.101 s, sync=2.989 s, total=273.112 s; sync\nfiles=60, longest=1.484 s, average=0.049 s\n2011-11-14 17:39:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n2011-11-14 17:43:49 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n16462 buffers (3.6%); 0 transaction log file(s) added, 0 removed, 9\nrecycled; write=269.978 s, sync=4.106 s, total=274.117 s; sync\nfiles=82, longest=2.943 s, average=0.050 s\n2011-11-14 17:44:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n2011-11-14 17:48:47 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n14159 buffers (3.1%); 0 transaction log file(s) added, 0 removed, 6\nrecycled; write=269.818 s, sync=2.119 s, total=271.948 s; sync\nfiles=71, longest=1.192 s, average=0.029 s\n2011-11-14 17:49:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n2011-11-14 17:53:47 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n11337 buffers (2.5%); 0 transaction log file(s) added, 0 removed, 7\nrecycled; write=269.901 s, sync=2.508 s, total=272.419 s; sync\nfiles=71, longest=1.867 s, average=0.035 s\n2011-11-14 17:54:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n2011-11-14 17:58:48 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n15706 buffers (3.4%); 0 transaction log file(s) added, 0 removed, 7\nrecycled; write=270.104 s, sync=3.612 s, total=273.727 s; sync\nfiles=67, longest=3.051 s, average=0.053 s\n2011-11-14 17:59:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n\n\nI've been collecting random samples from pg_stat_bgwriter:\nhttps://gist.github.com/4faec2ca9a79ede281e1\n\nSo given all this information (if you need more just let me know), is\nthere something fundamentally wrong or mis-configured? Do I have an\nI/O issue?\n\nThanks for any insight.\n",
"msg_date": "Mon, 14 Nov 2011 10:16:46 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow queries / commits, mis-configuration or hardware issues?"
},
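When reproducing the slow SELECTs above, an EXPLAIN form that also reports buffer hits and reads helps separate "plan is fine but had to go to disk" from a genuinely bad plan; it is available on 9.1 as shown below (the token value is just the example from the original query):

EXPLAIN (ANALYZE, BUFFERS)
SELECT "facebook_wall_posts".*
FROM "facebook_wall_posts"
WHERE "facebook_wall_posts"."token" = '984c44e75975b224b38197cf8f8fc76a'
LIMIT 1;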
{
"msg_contents": "On 14 Listopad 2011, 19:16, Cody Caughlan wrote:\n> shared_buffers = 3584MB\n> wal_buffers = 16MB\n> checkpoint_segments = 32\n> max_wal_senders = 10\n> checkpoint_completion_target = 0.9\n> wal_keep_segments = 1024\n> maintenance_work_mem = 256MB\n> work_mem = 88MB\n> shared_buffers = 3584MB\n> effective_cache_size = 10GB\n\nSeems reasonable, although I'd bump up the checkpoint_timeout (the 5m is\nusually too low).\n\n> The PGDATA dir is a RAID10 on 4 local (\"ephemeral\" in EC2 speak)\n> drives. I ran some dd tests and received the following output:\n>\n> --- WRITING ---\n> root@sql03:/data# time sh -c \"dd if=/dev/zero of=/data/tmp/bigfile\n> bs=8k count=4000000 && sync\"\n> 4000000+0 records in\n> 4000000+0 records out\n> 32768000000 bytes (33 GB) copied, 670.663 s, 48.9 MB/s\n>\n> real\t11m52.199s\n> user\t0m2.720s\n> sys\t0m45.330s\n\nThis measures sequential write performance (and the same holds for the\nread test). We need to know the random I/O performance too - use bonnie++\nor similar tool.\n\nBased on the AWS benchmarks I've seen so far, I'd expect about 90 MB/s for\nsequential read/writes, and about twice that performance for a 4-drive\nRAID10. So while the reads (211 MB/s) seem perfectly OK, the writes\n(50MB/s) are rather slow. Have you measured this on an idle system, or\nwhen the db was running?\n\nSee for example this:\n\n[1] http://victortrac.com/EC2_Ephemeral_Disks_vs_EBS_Volumes\n[2]\nhttp://www.gabrielweinberg.com/blog/2011/05/raid0-ephemeral-storage-on-aws-ec2.html\n\n> I have enabled log_checkpoints and here is a recent sample from the log:\n> 2011-11-14 17:39:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n> 2011-11-14 17:43:49 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n> 16462 buffers (3.6%); 0 transaction log file(s) added, 0 removed, 9\n> recycled; write=269.978 s, sync=4.106 s, total=274.117 s; sync\n> files=82, longest=2.943 s, average=0.050 s\n\nNothing special here - this just says that the checkpoints were timed and\nfinished on time (the default checkpoint timeout is 5 minutes, with\ncompletion target 0.9 the expected checkpoint time is about 270s). Not a\ncheckpoint issue, probably.\n\n> I've been collecting random samples from pg_stat_bgwriter:\n> https://gist.github.com/4faec2ca9a79ede281e1\n\nAlthough it's a bit difficult to interpret this (collect the data in\nregular intervals - e.g. every hour - and post the differences, please),\nbut it seems reasonable.\n\n> So given all this information (if you need more just let me know), is\n> there something fundamentally wrong or mis-configured? Do I have an\n> I/O issue?\n\nProbably - the discrepancy between read/write performance is a bit\nsuspicious.\n\nTry to watch the I/O performance when this happens, i.e. run \"iostat -x\"\nand watch the output (especially %util, r_await, w_await) and post several\nlines of the output.\n\nTomas\n\n",
"msg_date": "Mon, 14 Nov 2011 20:22:52 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
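One way to produce the interval-based pg_stat_bgwriter numbers requested here is to snapshot the view into a table from cron and diff consecutive rows; a rough sketch (the snapshot table name is arbitrary, and only 9.1-era columns are referenced):

CREATE TABLE bgwriter_snapshots AS
SELECT now() AS snapped_at, * FROM pg_stat_bgwriter;

-- Run hourly (e.g. from cron via psql):
INSERT INTO bgwriter_snapshots
SELECT now(), * FROM pg_stat_bgwriter;

-- Per-interval deltas:
SELECT snapped_at,
       checkpoints_timed  - lag(checkpoints_timed)  OVER (ORDER BY snapped_at) AS ckpt_timed,
       checkpoints_req    - lag(checkpoints_req)    OVER (ORDER BY snapped_at) AS ckpt_req,
       buffers_checkpoint - lag(buffers_checkpoint) OVER (ORDER BY snapped_at) AS buf_checkpoint,
       buffers_clean      - lag(buffers_clean)      OVER (ORDER BY snapped_at) AS buf_clean,
       buffers_backend    - lag(buffers_backend)    OVER (ORDER BY snapped_at) AS buf_backend
FROM bgwriter_snapshots
ORDER BY snapped_at;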
{
"msg_contents": "Thanks for your response. Please see below for answers to your questions.\n\nOn Mon, Nov 14, 2011 at 11:22 AM, Tomas Vondra <[email protected]> wrote:\n> On 14 Listopad 2011, 19:16, Cody Caughlan wrote:\n>> shared_buffers = 3584MB\n>> wal_buffers = 16MB\n>> checkpoint_segments = 32\n>> max_wal_senders = 10\n>> checkpoint_completion_target = 0.9\n>> wal_keep_segments = 1024\n>> maintenance_work_mem = 256MB\n>> work_mem = 88MB\n>> shared_buffers = 3584MB\n>> effective_cache_size = 10GB\n>\n> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is\n> usually too low).\n\nOk, will do.\n\n>\n>> The PGDATA dir is a RAID10 on 4 local (\"ephemeral\" in EC2 speak)\n>> drives. I ran some dd tests and received the following output:\n>>\n>> --- WRITING ---\n>> root@sql03:/data# time sh -c \"dd if=/dev/zero of=/data/tmp/bigfile\n>> bs=8k count=4000000 && sync\"\n>> 4000000+0 records in\n>> 4000000+0 records out\n>> 32768000000 bytes (33 GB) copied, 670.663 s, 48.9 MB/s\n>>\n>> real 11m52.199s\n>> user 0m2.720s\n>> sys 0m45.330s\n>\n> This measures sequential write performance (and the same holds for the\n> read test). We need to know the random I/O performance too - use bonnie++\n> or similar tool.\n>\n> Based on the AWS benchmarks I've seen so far, I'd expect about 90 MB/s for\n> sequential read/writes, and about twice that performance for a 4-drive\n> RAID10. So while the reads (211 MB/s) seem perfectly OK, the writes\n> (50MB/s) are rather slow. Have you measured this on an idle system, or\n> when the db was running?\n>\n\nI ran bonnie++ on a slave node, doing active streaming replication but\notherwise idle:\nhttp://batch-files-test.s3.amazonaws.com/sql03.prod.html\n\nbonnie++ on the master node:\nhttp://batch-files-test.s3.amazonaws.com/sql01.prod.html\n\nIf I am reading this right, this is my first time using it, the\nnumbers dont look too good.\n\n> See for example this:\n>\n> [1] http://victortrac.com/EC2_Ephemeral_Disks_vs_EBS_Volumes\n> [2]\n> http://www.gabrielweinberg.com/blog/2011/05/raid0-ephemeral-storage-on-aws-ec2.html\n>\n>> I have enabled log_checkpoints and here is a recent sample from the log:\n>> 2011-11-14 17:39:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n>> 2011-11-14 17:43:49 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n>> 16462 buffers (3.6%); 0 transaction log file(s) added, 0 removed, 9\n>> recycled; write=269.978 s, sync=4.106 s, total=274.117 s; sync\n>> files=82, longest=2.943 s, average=0.050 s\n>\n> Nothing special here - this just says that the checkpoints were timed and\n> finished on time (the default checkpoint timeout is 5 minutes, with\n> completion target 0.9 the expected checkpoint time is about 270s). Not a\n> checkpoint issue, probably.\n>\n>> I've been collecting random samples from pg_stat_bgwriter:\n>> https://gist.github.com/4faec2ca9a79ede281e1\n>\n> Although it's a bit difficult to interpret this (collect the data in\n> regular intervals - e.g. every hour - and post the differences, please),\n> but it seems reasonable.\n\nOk, I have a cron running every hour to grab this data. I will post\nback in a few hours or tomorrow.\n\n>\n>> So given all this information (if you need more just let me know), is\n>> there something fundamentally wrong or mis-configured? Do I have an\n>> I/O issue?\n>\n> Probably - the discrepancy between read/write performance is a bit\n> suspicious.\n>\n> Try to watch the I/O performance when this happens, i.e. 
run \"iostat -x\"\n> and watch the output (especially %util, r_await, w_await) and post several\n> lines of the output.\n>\n\nHeres a gist of running \"iostat -x 3\" for about a few minutes:\n\nhttps://gist.github.com/f94d98f2ef498a522ac2\n\nIndeed, the %iowat and await values can spike up drastically.\n",
"msg_date": "Mon, 14 Nov 2011 13:58:52 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware issues?"
},
{
"msg_contents": "On 14 Listopad 2011, 22:58, Cody Caughlan wrote:\n>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is\n>> usually too low).\n>\n> Ok, will do.\n\nYes, but find out what that means and think about the possible impact\nfirst. It usually improves the checkpoint behaviour but increases the\nrecovery time and you may need more checkpoint segments. And I doubt this\nwill fix the issue you've described.\n\n> I ran bonnie++ on a slave node, doing active streaming replication but\n> otherwise idle:\n> http://batch-files-test.s3.amazonaws.com/sql03.prod.html\n>\n> bonnie++ on the master node:\n> http://batch-files-test.s3.amazonaws.com/sql01.prod.html\n>\n> If I am reading this right, this is my first time using it, the\n> numbers dont look too good.\n\nAre those instances equal, i.e. use tha same RAID10 config etc.? It\nsurprises me a bit that the slave performs much better than the master,\nfor example the sequential reads are much faster (210MB/s vs. 60MB/s) and\nit handles about twice the number of seeks (345 vs. 170). But this may be\nskewed because of the workload.\n\n> Heres a gist of running \"iostat -x 3\" for about a few minutes:\n>\n> https://gist.github.com/f94d98f2ef498a522ac2\n>\n> Indeed, the %iowat and await values can spike up drastically.\n\nOK, so xvdb-xvde are individual drives and dm-0 is the RAID10 device,\nright? According to the log_checkpoint info, you're writing about 15000\n(120MB) buffers in 270s, i.e. about 440kB/s. But according to the iostat\nyou're writing up to 4MB/s, so it's not just about the checkpoints.\n\nWhat else is going on there? How much WAL do you write?\n\nDo you have iotop installed? That might give you a hint what processes are\nwriting data etc.\n\nI'm a bit confused that the w/s don't add up while r/s do.\n\nTomas\n\n",
"msg_date": "Mon, 14 Nov 2011 23:57:49 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "On Mon, Nov 14, 2011 at 2:57 PM, Tomas Vondra <[email protected]> wrote:\n> On 14 Listopad 2011, 22:58, Cody Caughlan wrote:\n>>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is\n>>> usually too low).\n>>\n>> Ok, will do.\n>\n> Yes, but find out what that means and think about the possible impact\n> first. It usually improves the checkpoint behaviour but increases the\n> recovery time and you may need more checkpoint segments. And I doubt this\n> will fix the issue you've described.\n>\n\nOk, I understand the ramifications with increasing the checkpoint\ntimeout. But I will investigate more before I jump in.\n\n\n>> I ran bonnie++ on a slave node, doing active streaming replication but\n>> otherwise idle:\n>> http://batch-files-test.s3.amazonaws.com/sql03.prod.html\n>>\n>> bonnie++ on the master node:\n>> http://batch-files-test.s3.amazonaws.com/sql01.prod.html\n>>\n>> If I am reading this right, this is my first time using it, the\n>> numbers dont look too good.\n>\n> Are those instances equal, i.e. use tha same RAID10 config etc.? It\n> surprises me a bit that the slave performs much better than the master,\n> for example the sequential reads are much faster (210MB/s vs. 60MB/s) and\n> it handles about twice the number of seeks (345 vs. 170). But this may be\n> skewed because of the workload.\n\nYes, these two machines are the same. sql01 is the master node and is\nquite busy. Running bonnie++ on it during its normal workload spiked\nI/O for the duration. sql03 is a pure slave and is quite idle, save\nfor receiving WAL segments.\n\n>\n>> Heres a gist of running \"iostat -x 3\" for about a few minutes:\n>>\n>> https://gist.github.com/f94d98f2ef498a522ac2\n>>\n>> Indeed, the %iowat and await values can spike up drastically.\n>\n> OK, so xvdb-xvde are individual drives and dm-0 is the RAID10 device,\n> right? According to the log_checkpoint info, you're writing about 15000\n> (120MB) buffers in 270s, i.e. about 440kB/s. But according to the iostat\n> you're writing up to 4MB/s, so it's not just about the checkpoints.\n>\n> What else is going on there? How much WAL do you write?\n\nYes, dm-0 is the RAID10 device. The WAL config is:\n\nwal_buffers = 16MB\ncheckpoint_segments = 32\nmax_wal_senders = 10\ncheckpoint_completion_target = 0.9\ncheckpoint_timeout = 300\nwal_keep_segments = 1024\n\n>\n> Do you have iotop installed? That might give you a hint what processes are\n> writing data etc.\n\nI do have iotop and have been watching it. The only I/O users are\npostgres and its backends. I dont see anything else consuming any I/O.\nBy eyeballing iotop, big consumers of disk writes are:\n\nidle in transaction, SELECT, COMMIT\n\nThe first two are what I would think would be largely read operations\n(certainly the SELECT) so its not clear why a SELECT consumes write\ntime.\n\nHere is the output of some pg_stat_bgwriter stats from the last couple of hours:\n\nhttps://gist.github.com/41ee26caca01471a9b77\n\nOne thing that I might not have made very clear earlier is that this\nDB, especially a single table receives a very large number of UPDATEs.\nHowever, it seems to be well cached, I have shared_buffers = 3584MB\nand a view of pg_buffercache shows:\nhttps://gist.github.com/53c520571290cae14613\n\nIs it possible that we're just trying to handle too many UPDATEs and\nthey are all trying to hit disk all at once - causing this I/O\ncontention? 
Here is a view of pg_stat_user_tables that shows the\namount of live/dead tuples:\n\nhttps://gist.github.com/5ac1ae7d11facd72913f\n",
"msg_date": "Mon, 14 Nov 2011 16:13:41 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware issues?"
},
{
"msg_contents": "Dne 15.11.2011 01:13, Cody Caughlan napsal(a):\n> The first two are what I would think would be largely read operations\n> (certainly the SELECT) so its not clear why a SELECT consumes write\n> time.\n> \n> Here is the output of some pg_stat_bgwriter stats from the last couple of hours:\n> \n> https://gist.github.com/41ee26caca01471a9b77\n\nHmm, the difference between 22:00 and 23:00 seems a bit suspucious.\nThere are 11 checkpoints (timed + requested), i.e. 352 segments, i.e.\nabout 5632 MB of WAL data. The checkpoints had to write 504.135 buffers\n(i.e. 4 GB of data) and background writer additional 10.494 buffers (100MB).\n\nBut the backends themselves had to write 173.208 buffers. That's way too\nmuch I guess and it's probably one of the reasons why the queries take\nso long.\n\nSo your primary goal probably should be to write less buffers from the\nbackends. Spread checkpoints are the most effective way and background\nwriter is fine.\n\nTry to eliminate the requested checkpoints (increase the number of\ncheckpoint segments), and eliminate the buffers written by backends.\nIncrease the shared buffers and watch if that helps.\n\nIf that does not help, make the background writer a bit more aggressive.\nIncrease bgwriter_lru_maxpages / decrease bgwriter_delay, that should\nwrite the buffers a bit more aggressive.\n\nBut if there was something extraordinary happening between 22:00 and\n23:00 (e.g. a backup, a batch job), this may be completely nonsense.\n\n> One thing that I might not have made very clear earlier is that this\n> DB, especially a single table receives a very large number of UPDATEs.\n> However, it seems to be well cached, I have shared_buffers = 3584MB\n> and a view of pg_buffercache shows:\n> https://gist.github.com/53c520571290cae14613\n\nThat means almost 100% of the buffers is used. But this is not a very\ninteresting view - the buffer may be occupied and not used. Can you\ngroup the buffers by \"isdirty\" so we can see what portion of the buffers\nwas modified (and needs to be written)?\n\nMuch more interesting is the view from the other side - how many\nrequests were handled from shared buffers. For a database you can do\nthat like this\n\n select datname, 100* blks_hit / (blks_hit + blks_read + 1)\n from pg_stat_database;\n\nThe \"+1\" is there just because of zero values, and you should evaluate\nthat using two snapshots (difference of). And you can compute the same\nthing (cache hit ratio) for tables/indexes.\n\nThe problem here is that there's a separate cache (page cache), and\nthat's not included here. So a read may be a \"miss\" and yet not require\nan actual I/O.\n\n> Is it possible that we're just trying to handle too many UPDATEs and\n> they are all trying to hit disk all at once - causing this I/O\n\nNot sure what you mean by \"trying to hit disk all at once\"? The updates\nare written to a WAL (which is mostly sequential I/O) and the actual\npages are updated in memory (shared buffers). And those shared buffers\nare written only when a checkpoint happens, but this is a single write\nof the whole block, not many small writes.\n\nSure, the clients need to grab a lock on a buffer when modifying it, and\na lock on WAL, but that wouldn't demonstrate as an I/O utilization.\n\nIn short - I don't think this is happening here.\n\nWhat actually matters here is that the dirty buffers are spread across\nthe drive - that's where the random I/O comes from. And the fact that\nthe backends need to flush the dirty buffers on their own.\n\n> contention? 
Here is a view of pg_stat_user_tables that shows the\n> amount of live/dead tuples:\n> \n> https://gist.github.com/5ac1ae7d11facd72913f\n\nWhat could happen due to UPDATES is a table bloat (table growing due to\nupdates), but I think that's not happening there. The number of dead\ntuples is very low - less than 1% in most cases.\n\nFor example the largest table \"users\" has less than 0.5% of dead tuples\nand most of the updates are handled by HOT. So I don't think this is an\nissue here.\n\nkind regards\nTomas\n",
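A minimal sketch of the two checks suggested above, assuming the pg_buffercache extension is installed and an 8 kB block size; as noted in the message, the per-database hit ratio should be sampled twice and the difference of the two snapshots compared:

    -- portion of shared buffers that is currently dirty (needs pg_buffercache)
    SELECT isdirty, count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS size
    FROM pg_buffercache
    GROUP BY isdirty;

    -- per-database cache hit ratio; take two snapshots and diff them
    SELECT datname,
           100 * blks_hit / (blks_hit + blks_read + 1) AS hit_pct
    FROM pg_stat_database;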
"msg_date": "Wed, 16 Nov 2011 02:02:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "Dne 14.11.2011 22:58, Cody Caughlan napsal(a):\n> I ran bonnie++ on a slave node, doing active streaming replication but\n> otherwise idle:\n> http://batch-files-test.s3.amazonaws.com/sql03.prod.html\n> \n> bonnie++ on the master node:\n> http://batch-files-test.s3.amazonaws.com/sql01.prod.html\n> \n> If I am reading this right, this is my first time using it, the\n> numbers dont look too good.\n\nI've done some benchmarks on my own (m1.xlarge instance), and the\nresults are these (http://pastebin.com/T1LXHru0):\n\nsingle drive\n------------\ndd writes: 62 MB/s\ndd reads: 110 MB/s\nbonnie seq. writes: 55 MB/s\nbonnie seq. rewrite: 33 MB/s\nbonnie seq. reads: 91 MB/s\nbonnie seeks: 370/s\n\nraid 0 (4 devices)\n-----------------------------\ndd writes: 220 MB/s\ndd reads: 380 MB/s\nbonnie seq. writes: 130 MB/s\nbonnie seq. rewrite: 114 MB/s\nbonnie seq. reads: 280 MB/s\nbonnie seeks: 570/s\n\nraid 10 (4 devices)\n-----------------------------\ndd writes: 90 MB/s\ndd reads: 200 MB/s\nbonnie seq. writes: 49 MB/s\nbonnie seq. rewrite: 56 MB/s\nbonnie seq. reads: 160 MB/s\nbonnie seeks: 590/s\n\nSo the results are rather different from your results (both master and\nslave).\n\nWhat surprises me a bit is the decrease of write performance between\nsigle drive and RAID 10. I've used bonnie++ 1.03e, so I'm wondering if\nthe 1.96 would give different results ...\n\nAll those benchmarks were performed with ext3.\n\nTomas\n",
"msg_date": "Wed, 16 Nov 2011 02:16:53 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "On Tue, Nov 15, 2011 at 5:16 PM, Tomas Vondra <[email protected]> wrote:\n> Dne 14.11.2011 22:58, Cody Caughlan napsal(a):\n>> I ran bonnie++ on a slave node, doing active streaming replication but\n>> otherwise idle:\n>> http://batch-files-test.s3.amazonaws.com/sql03.prod.html\n>>\n>> bonnie++ on the master node:\n>> http://batch-files-test.s3.amazonaws.com/sql01.prod.html\n>>\n>> If I am reading this right, this is my first time using it, the\n>> numbers dont look too good.\n>\n> I've done some benchmarks on my own (m1.xlarge instance), and the\n> results are these (http://pastebin.com/T1LXHru0):\n>\n> single drive\n> ------------\n> dd writes: 62 MB/s\n> dd reads: 110 MB/s\n> bonnie seq. writes: 55 MB/s\n> bonnie seq. rewrite: 33 MB/s\n> bonnie seq. reads: 91 MB/s\n> bonnie seeks: 370/s\n>\n> raid 0 (4 devices)\n> -----------------------------\n> dd writes: 220 MB/s\n> dd reads: 380 MB/s\n> bonnie seq. writes: 130 MB/s\n> bonnie seq. rewrite: 114 MB/s\n> bonnie seq. reads: 280 MB/s\n> bonnie seeks: 570/s\n>\n> raid 10 (4 devices)\n> -----------------------------\n> dd writes: 90 MB/s\n> dd reads: 200 MB/s\n> bonnie seq. writes: 49 MB/s\n> bonnie seq. rewrite: 56 MB/s\n> bonnie seq. reads: 160 MB/s\n> bonnie seeks: 590/s\n>\n\nInteresting. I spun up a new m1.xlarge and did the same RAID10 config\n(4 drives) except with a chunk size of 512K (instead of 256K) and the\nmachine was completely idle. Bonnie:\n\nhttp://batch-files-test.s3.amazonaws.com/idle-512k-chunk.html\n\nWhich has similar-ish performance as yours, except for worse seeks but\na bit better seq. reads.\n\nThe other bonnies I sent over were NOT on idle systems. This one is\nthe master, which receives a heavy stream of writes and some reads\n\nhttp://batch-files-test.s3.amazonaws.com/sql01.prod.html\n\nAnd this is the slave, which is all writes and no reads:\nhttp://batch-files-test.s3.amazonaws.com/sql03.prod.html\n\nHow did you build your RAID array? Maybe I have a fundamental flaw /\nmisconfiguration. I am doing it via:\n\n$ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4\n/dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde\n$ pvcreate /dev/md0\n$ vgcreate lvm-raid10 /dev/md0\n$ lvcreate -l 215021 lvm-raid10 -n lvm0\n$ blockdev --setra 65536 /dev/lvm-raid10/lvm0\n$ mkfs.xfs -f /dev/lvm-raid10/lvm0\n$ mkdir -p /data && mount -t xfs -o noatime /dev/lvm-raid10/lvm0 /data\n",
"msg_date": "Tue, 15 Nov 2011 17:21:33 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware issues?"
},
{
"msg_contents": "On 16 Listopad 2011, 2:21, Cody Caughlan wrote:\n> How did you build your RAID array? Maybe I have a fundamental flaw /\n> misconfiguration. I am doing it via:\n>\n> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4\n> /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde\n> $ pvcreate /dev/md0\n> $ vgcreate lvm-raid10 /dev/md0\n> $ lvcreate -l 215021 lvm-raid10 -n lvm0\n> $ blockdev --setra 65536 /dev/lvm-raid10/lvm0\n> $ mkfs.xfs -f /dev/lvm-raid10/lvm0\n> $ mkdir -p /data && mount -t xfs -o noatime /dev/lvm-raid10/lvm0 /data\n>\n> --\n\nI don't think you have a flaw there. The workload probably skews the\nresults a bit on the master and slave, so it's difficult to compare it to\nresults from an idle instance. The amount of data written seems small, but\na random i/o can saturated the devices quite easily.\n\nI went with a very simple raid config - no LVM, default stripe size\n(better seeks, worse sequential performance), default read-ahead (could\ngive better seq. performance).\n\nTomas\n\n\n",
"msg_date": "Wed, 16 Nov 2011 03:16:15 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "On 11/14/2011 01:16 PM, Cody Caughlan wrote:\n> We're starting to see some slow queries, especially COMMITs that are\n> happening more frequently. The slow queries are against seemingly\n> well-indexed tables.\n> Slow commits like:\n>\n> 2011-11-14 17:47:11 UTC pid:14366 (44/0-0) LOG: duration: 3062.784 ms\n> statement: COMMIT\n> 2011-11-14 17:47:11 UTC pid:14604 (48/0-0) LOG: duration: 2593.351 ms\n> statement: COMMIT\n>\n> These slow COMMITs are against tables that received a large number of\n> UPDATEs and are growing fairly rapidly.\n> \n\nLinux will fill its write cache with all of the writes coming out of \neach checkpoint. With a 16GB instance, I would expect that 5% * 16GB ~= \n800MB of writes are batched up when your system is slow. You should be \nable to confirm that by looking at the \"Dirty:\" line in /proc/meminfo\n\nWith 800MB queued up and I/O that is lucky to get 50MB/s, the sync calls \nat the end of each checkpoint are sometimes blocking for multiple seconds:\n\n> 2011-11-14 17:38:48 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n> 15121 buffers (3.3%); 0 transaction log file(s) added, 0 removed, 8\n> recycled; write=270.101 s, sync=2.989 s, total=273.112 s; sync\n> files=60, longest=1.484 s, average=0.049 s\n> 2011-11-14 17:39:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n> 2011-11-14 17:43:49 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n> 16462 buffers (3.6%); 0 transaction log file(s) added, 0 removed, 9\n> recycled; write=269.978 s, sync=4.106 s, total=274.117 s; sync\n> files=82, longest=2.943 s, average=0.050 s\n> \n\nWhen an individual sync call gets stuck for that long, clients can \neasily get stuck behind it too. There are a couple of techniques that \nmight help:\n\n-Switch filesystems if you're running a slow one. ext3 has bad latency \nbehavior here, XFS and ext4 are better.\n-Lower the dirty_* tunables like dirty_background_ratio or its bytes \nversion. This will reduce average throughput, but can lower latency.\n-Spread checkpoints out more so that less average writes are happening.\n-Decrease shared_buffers so less data is getting pushed out at \ncheckpoint time.\n-Reduce your reliability expectations and turn off synchronous_commit.\n\nYour server is sometimes showing multi-second latency issues with \nbonnie++ too; that suggests how this problem is not even specific to \nPostgreSQL. Linux is hard to tune for low latency under all \ncircumstances; fighting latency down under a heavy update workload is \nhard to do even with good hardware to accelerate write performance. In \nan EC2 environment, it may not even be possible to do without making \ntrade-offs like disabling synchronous writes. I can easily get \ntransactions hung for 10 to 15 seconds on one of their servers if I try \nto make that problem bad, you're only seeing the middle range of latency \nissues so far.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 15 Nov 2011 23:27:17 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "On 16 Listopad 2011, 5:27, Greg Smith wrote:\n> On 11/14/2011 01:16 PM, Cody Caughlan wrote:\n>> We're starting to see some slow queries, especially COMMITs that are\n>> happening more frequently. The slow queries are against seemingly\n>> well-indexed tables.\n>> Slow commits like:\n>>\n>> 2011-11-14 17:47:11 UTC pid:14366 (44/0-0) LOG: duration: 3062.784 ms\n>> statement: COMMIT\n>> 2011-11-14 17:47:11 UTC pid:14604 (48/0-0) LOG: duration: 2593.351 ms\n>> statement: COMMIT\n>>\n>> These slow COMMITs are against tables that received a large number of\n>> UPDATEs and are growing fairly rapidly.\n>>\n> Linux will fill its write cache with all of the writes coming out of\n> each checkpoint. With a 16GB instance, I would expect that 5% * 16GB ~=\n> 800MB of writes are batched up when your system is slow. You should be\n> able to confirm that by looking at the \"Dirty:\" line in /proc/meminfo\n>\n> With 800MB queued up and I/O that is lucky to get 50MB/s, the sync calls\n> at the end of each checkpoint are sometimes blocking for multiple seconds:\n>\n>> 2011-11-14 17:38:48 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n>> 15121 buffers (3.3%); 0 transaction log file(s) added, 0 removed, 8\n>> recycled; write=270.101 s, sync=2.989 s, total=273.112 s; sync\n>> files=60, longest=1.484 s, average=0.049 s\n>> 2011-11-14 17:39:15 UTC pid:3965 (-0) LOG: checkpoint starting: time\n>> 2011-11-14 17:43:49 UTC pid:3965 (-0) LOG: checkpoint complete: wrote\n>> 16462 buffers (3.6%); 0 transaction log file(s) added, 0 removed, 9\n>> recycled; write=269.978 s, sync=4.106 s, total=274.117 s; sync\n>> files=82, longest=2.943 s, average=0.050 s\n>>\n>\n> When an individual sync call gets stuck for that long, clients can\n> easily get stuck behind it too. There are a couple of techniques that\n> might help:\n\nThe sync times I see there seem quite acceptable - 4.2s is not perfect,\nbut I wouldn't rate it as terrible. What actually annoys me is the amount\nof data written - it's just 17000 pages, i.e. about 130 MB for a\ncheckpoint (spread over 5 minutes). So it's just like 0.5 MB/s.\n\n> -Switch filesystems if you're running a slow one. ext3 has bad latency\n> behavior here, XFS and ext4 are better.\n\nHe's using xfs, IIRC. That's one of the better behaving ones, when it\ncomes to sync.\n\n> -Lower the dirty_* tunables like dirty_background_ratio or its bytes\n> version. This will reduce average throughput, but can lower latency.\n> -Spread checkpoints out more so that less average writes are happening.\n> -Decrease shared_buffers so less data is getting pushed out at\n> checkpoint time.\n> -Reduce your reliability expectations and turn off synchronous_commit.\n\nThe question here probably is whether those high latencies are caused or\nsignificantly influenced by the checkpoint, or are a \"feature\" of the\nstorage. Because if it's a feature, then all this is a futile attempt to\nfix it.\n\nI don't think he has problems with checkpoints - he's complaining about\nregular queries being slow (even plain SELECT, i.e. something that usually\ndoes not require a sync).\n\nNo doubt this may be connected, but a regular SELECT usually does not\nperform a sync, right? It may need to fetch some data and if the I/O is\nsaturated by a checkpoint, this may take time. But again - those bonnie\nresults were collected with on a running system, i.e. 
with checkpoints in\nprogress and all of that.\n\nAnd I'd expect most of the SELECT queries to be handled without actually\ntouching the devices, but by connecting\nhttps://gist.github.com/5ac1ae7d11facd72913f and\nhttps://gist.github.com/5ac1ae7d11facd72913f it seems that the largest\ntable (users) is almost completely in shared buffers, while the two other\nlarge tables (external_user and facebook_friends) are cached by about 30%.\nAnd I'd expect the rest of those tables to be in the page cache, so SELECT\nqueries on those tables should be fast.\n\nA commit obviously requires a sync on the WAL - I wonder if moving the WAL\nwould improve the performance here.\n\nThis is obviously based on an incomplete set of stats, and maybe I'm\nmissing something.\n\n> Your server is sometimes showing multi-second latency issues with\n> bonnie++ too; that suggests how this problem is not even specific to\n> PostgreSQL. Linux is hard to tune for low latency under all\n\nDon't forget those data were collected on a production system, i.e. it was\nactually under load. That probably skews the results a lot.\n\n> circumstances; fighting latency down under a heavy update workload is\n> hard to do even with good hardware to accelerate write performance. In\n> an EC2 environment, it may not even be possible to do without making\n> trade-offs like disabling synchronous writes. I can easily get\n> transactions hung for 10 to 15 seconds on one of their servers if I try\n> to make that problem bad, you're only seeing the middle range of latency\n> issues so far.\n\nAre you talking about EBS or ephemeral storage? Because all this is about\nephemeral (something like a \"virtualized local\" storage).\n\nTomas\n\n",
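The per-table figures referenced above (how much of users, external_user and facebook_friends sits in shared buffers) can be approximated with a query along these lines; the exact join is an assumption about how the linked gist was produced, and it again relies on the pg_buffercache extension:

    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS buffered
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
    JOIN pg_database d
      ON b.reldatabase = d.oid AND d.datname = current_database()
    WHERE c.relname IN ('users', 'external_user', 'facebook_friends')
    GROUP BY c.relname
    ORDER BY buffers DESC;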
"msg_date": "Wed, 16 Nov 2011 17:34:10 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "On 16 Listopad 2011, 2:21, Cody Caughlan wrote:\n> How did you build your RAID array? Maybe I have a fundamental flaw /\n> misconfiguration. I am doing it via:\n>\n> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4\n> /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde\n> $ pvcreate /dev/md0\n> $ vgcreate lvm-raid10 /dev/md0\n> $ lvcreate -l 215021 lvm-raid10 -n lvm0\n> $ blockdev --setra 65536 /dev/lvm-raid10/lvm0\n> $ mkfs.xfs -f /dev/lvm-raid10/lvm0\n> $ mkdir -p /data && mount -t xfs -o noatime /dev/lvm-raid10/lvm0 /data\n\nI'm not using EC2 much, and those were my first attempts with ephemeral\nstorage, so this may be a stupid question, but why are you building a\nRAID-10 array on an ephemeral storage, anyway?\n\nYou already have a standby, so if the primary instance fails you can\neasily failover.\n\nWhat are you going to do in case of a drive failure? With a server this is\nrather easy - just put there a new drive and you're done, but can you do\nthat on EC2? I guess you can't do that when the instance is running, so\nyou'll have to switch to the standby anyway, right? Have you ever tried\nthis (how it affects the performance etc.)?\n\nSo what additional protection does that give you? Wouldn't a RAID-0 be a\nbetter utilization of the resources?\n\nTomas\n\n",
"msg_date": "Wed, 16 Nov 2011 17:52:31 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
},
{
"msg_contents": "\nOn Nov 16, 2011, at 8:52 AM, Tomas Vondra wrote:\n\n> On 16 Listopad 2011, 2:21, Cody Caughlan wrote:\n>> How did you build your RAID array? Maybe I have a fundamental flaw /\n>> misconfiguration. I am doing it via:\n>> \n>> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4\n>> /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde\n>> $ pvcreate /dev/md0\n>> $ vgcreate lvm-raid10 /dev/md0\n>> $ lvcreate -l 215021 lvm-raid10 -n lvm0\n>> $ blockdev --setra 65536 /dev/lvm-raid10/lvm0\n>> $ mkfs.xfs -f /dev/lvm-raid10/lvm0\n>> $ mkdir -p /data && mount -t xfs -o noatime /dev/lvm-raid10/lvm0 /data\n> \n> I'm not using EC2 much, and those were my first attempts with ephemeral\n> storage, so this may be a stupid question, but why are you building a\n> RAID-10 array on an ephemeral storage, anyway?\n> \n> You already have a standby, so if the primary instance fails you can\n> easily failover.\n> \n\nYes, the slave will become master if master goes down. We have no plan to try and resurrect the master in the case of failure, hence the choice of ephemeral vs EBS. \n\nWe chose RAID10 over RAID0 to get the best combination of performance and minimizing probability of a single drive failure bringing down the house.\n\nSo, yes, RAID0 would ultimately deliver the best performance, with more risk.\n\n> What are you going to do in case of a drive failure? With a server this is\n> rather easy - just put there a new drive and you're done, but can you do\n> that on EC2? I guess you can't do that when the instance is running, so\n> you'll have to switch to the standby anyway, right? Have you ever tried\n> this (how it affects the performance etc.)?\n> \n\nAs far as I know one cannot alter the ephemeral drives in a running instance, so yes, the whole instance would have to be written off.\n\n> So what additional protection does that give you? Wouldn't a RAID-0 be a\n> better utilization of the resources?\n> \n\nToo much risk.\n\n> Tomas\n> \n\n",
"msg_date": "Wed, 16 Nov 2011 09:31:50 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware issues?"
},
{
"msg_contents": "On 16 Listopad 2011, 18:31, Cody Caughlan wrote:\n>\n> On Nov 16, 2011, at 8:52 AM, Tomas Vondra wrote:\n>\n>> On 16 Listopad 2011, 2:21, Cody Caughlan wrote:\n>>> How did you build your RAID array? Maybe I have a fundamental flaw /\n>>> misconfiguration. I am doing it via:\n>>>\n>>> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4\n>>> /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde\n>>> $ pvcreate /dev/md0\n>>> $ vgcreate lvm-raid10 /dev/md0\n>>> $ lvcreate -l 215021 lvm-raid10 -n lvm0\n>>> $ blockdev --setra 65536 /dev/lvm-raid10/lvm0\n>>> $ mkfs.xfs -f /dev/lvm-raid10/lvm0\n>>> $ mkdir -p /data && mount -t xfs -o noatime /dev/lvm-raid10/lvm0 /data\n>>\n>> I'm not using EC2 much, and those were my first attempts with ephemeral\n>> storage, so this may be a stupid question, but why are you building a\n>> RAID-10 array on an ephemeral storage, anyway?\n>>\n>> You already have a standby, so if the primary instance fails you can\n>> easily failover.\n>>\n>\n> Yes, the slave will become master if master goes down. We have no plan to\n> try and resurrect the master in the case of failure, hence the choice of\n> ephemeral vs EBS.\n>\n> We chose RAID10 over RAID0 to get the best combination of performance and\n> minimizing probability of a single drive failure bringing down the house.\n>\n> So, yes, RAID0 would ultimately deliver the best performance, with more\n> risk.\n>\n>> What are you going to do in case of a drive failure? With a server this\n>> is\n>> rather easy - just put there a new drive and you're done, but can you do\n>> that on EC2? I guess you can't do that when the instance is running, so\n>> you'll have to switch to the standby anyway, right? Have you ever tried\n>> this (how it affects the performance etc.)?\n>>\n>\n> As far as I know one cannot alter the ephemeral drives in a running\n> instance, so yes, the whole instance would have to be written off.\n>\n>> So what additional protection does that give you? Wouldn't a RAID-0 be a\n>> better utilization of the resources?\n>>\n>\n> Too much risk.\n\nWhy? If I understand that correctly, the only case where a RAID-10\nactually helps is when an ephemeral drive fails, but not the whole\ninstance. Do you have some numbers how often this happens, i.e. how often\na drive fails without the instance?\n\nBut you can't actually replace the failed drive, so the only option you\nhave is to failover to the standby - right? Sure - with async replication,\nyou could loose a the not-yet-sent transactions. I see two possible\nsolutions:\n\na) use sync rep, available in 9.1 (you already run 9.1.1)\n\nb) place WAL on an EBS, mounted as part of the failover\n\nThe EBS are not exactly fast, but it seems (e.g.\nhttp://www.mysqlperformanceblog.com/2009/08/06/ec2ebs-single-and-raid-volumes-io-bencmark/)\nthe sequential performance might be acceptable.\n\nAccording to the stats you've posted, you've written about 5632 MB of WAL\ndata per hour. That's about 1.5 MB/s on average, and that might be handled\nby the EBS. Yes, if you have a peak where you need to write much more\ndata, this is going to be a bottleneck.\n\nTomas\n\n",
"msg_date": "Wed, 16 Nov 2011 19:16:43 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries / commits, mis-configuration or hardware\n issues?"
}
] |
[
{
"msg_contents": "Hi all. Been using postgres for years, and lurking on this newsgroup for a\nshort while now to help me gain the benefit of your expertise and\nexperience and learn how to get most out of postgresql possible.\n\nI do a fair bit of work on tables using composite keys. I have discovered\na couple of things lately that may or may not be worth looking at in terms\nof query planner.\n\nConsider my archetypal table. Based on real data.\n\ntable master\n franchise smallint,\n partnumber varchar\n\nIt is a spare parts list, combining part numbers from multiple suppliers\n(franchise). Part numbers are typically unique but sometimes there are\nduplicates. I have use cases which concern both finding parts by a\nspecific franchise or finding parts system wide. In my table I have follow\nstats:\n* Number of records : 2,343,569\n* Number of unique partnumber records : 2,130,379 (i.e. for a given\npartnumber there is on average, 1.1 records. i.e. a partnumber is used by\n1.1 suppliers. The partnumber with the most number of records = 8 records.\n* Number of unique suppliers : 35\n\nNow consider following query: its purpose is to render next 20 rows at an\naribtrary position. The position being after record matching franchise=10,\npartnumber='1' in partnumber then franchise order.\n\nselect * from master where partnum>='1' and (partnum>'1' or franchise>10)\norder by partnum,franchise limit 20;\n\nNow if I have a composite index on partnum + franchise. This query performs\nthe way you would expect and very quickly.\n\nBut if I have an index on partnum only the system seqscan's master. And\nyields poor performance. i.e.:\n============\n Limit (cost=143060.23..143060.28 rows=20 width=93) (actual\ntime=2307.986..2307.998 rows=20 loops=1)\n -> Sort (cost=143060.23..148570.14 rows=2203967 width=93) (actual\ntime=2307.982..2307.986 rows=20 loops=1)\n Sort Key: partnum, franchise\n Sort Method: top-N heapsort Memory: 19kB\n -> Seq Scan on master (cost=0.00..84413.46 rows=2203967\nwidth=93) (actual time=0.019..1457.001 rows=2226792 loops=1)\n Filter: (((partnum)::text >= '1'::text) AND\n(((partnum)::text > '1'::text) OR (franchise > 10)))\n Total runtime: 2308.118 ms\n\n\nI wonder, if it is possible and worthwhile, to setup the query planner to\nrecognize that because of the stats I indicate above, that a sort by\npartnum is almost exactly the same as a sort by partnum+franchise. And\ndoing a Index scan on partnum index, and sorting results in memory will be\ndramatically faster. The sort buffer only needs to be very small, will\nonly grow to 8 records only at most in my above example. The buffer will\nscan partnum index, and as long as partnum is the same, it will sort that\nsmall segment, as soon as the partnum increments when walking the index,\nthe buffer zeros out again for next sort group.\n\nArtificially simulating this in SQL (only works with foreknowledge of max\ncount of records for a given part. i.e. 
+8 ) shows the dramatic theoretical\nperformance gain over the above.\nexplain analyze select * from (select * from master where partnum>='1'\norder by partnum limit 20+8) x where partnum>'1' or franchise>10 order by\npartnum,franchise limit 20;\n=====\n Limit (cost=77.71..77.75 rows=16 width=230) (actual time=0.511..0.555\nrows=20 loops=1)\n -> Sort (cost=77.71..77.75 rows=16 width=230) (actual\ntime=0.507..0.524 rows=20 loops=1)\n Sort Key: x.partnum, x.franchise\n Sort Method: quicksort Memory: 21kB\n -> Subquery Scan x (cost=0.00..77.39 rows=16 width=230) (actual\ntime=0.195..0.367 rows=28 loops=1)\n Filter: (((x.partnum)::text > '1'::text) OR (x.franchise >\n10))\n -> Limit (cost=0.00..76.97 rows=28 width=93) (actual\ntime=0.180..0.282 rows=28 loops=1)\n -> Index Scan using master_searchpartkey on master\n(cost=0.00..6134000.35 rows=2231481 width=93) (actual time=0.178..0.240\nrows=28 loops=1)\n Index Cond: ((partnum)::text >= '1'::text)\n Total runtime: 0.695 ms\n\nOf course I could just make sure I create indexes which match my order by\nfields perfectly; which is exactly what I am doing right now. But I\nthought that maybe it might be worth while considering looking at allowing\nsome sort of in memory sort to be overlaid on an index if the statistics\nindicate that the sorts are very nearly ordered.\n\nAndrew\n",
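For reference, the workaround described above (an index that matches the ORDER BY exactly) is a one-liner; the index name here is made up, and the row-value form of the cursor condition is an optional, equivalent variant that PostgreSQL can also drive off the same composite index:

    CREATE INDEX master_partnum_franchise_idx ON master (partnum, franchise);

    -- original keyset-pagination form
    SELECT * FROM master
    WHERE partnum >= '1' AND (partnum > '1' OR franchise > 10)
    ORDER BY partnum, franchise
    LIMIT 20;

    -- equivalent row-value form of the same condition
    SELECT * FROM master
    WHERE (partnum, franchise) > ('1', 10)
    ORDER BY partnum, franchise
    LIMIT 20;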
"msg_date": "Tue, 15 Nov 2011 09:22:46 +1100",
"msg_from": "Andrew Barnham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner suggestion,\n\tfor indexes with similar but not exact ordering."
},
{
"msg_contents": "On Mon, Nov 14, 2011 at 5:22 PM, Andrew Barnham\n<[email protected]> wrote:\n> I wonder, if it is possible and worthwhile, to setup the query planner to\n> recognize that because of the stats I indicate above, that a sort by partnum\n> is almost exactly the same as a sort by partnum+franchise. And doing a\n> Index scan on partnum index, and sorting results in memory will be\n> dramatically faster. The sort buffer only needs to be very small, will only\n> grow to 8 records only at most in my above example. The buffer will scan\n> partnum index, and as long as partnum is the same, it will sort that small\n> segment, as soon as the partnum increments when walking the index, the\n> buffer zeros out again for next sort group.\n\nThis has come up before and seems worthwhile, but nobody's implemented it yet.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 30 Nov 2011 15:50:47 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner suggestion, for indexes with similar but\n\tnot exact ordering."
}
] |
[
{
"msg_contents": "We have anywhere from 60-80 background worker processes connecting to\nPostgres, performing a short task and then disconnecting. The lifetime\nof these tasks averages 1-3 seconds.\n\nI know that there is some connection overhead to Postgres, but I dont\nknow what would be the best way to measure this overheard and/or to\ndetermine if its currently an issue at all.\n\nIf there is a substantial overheard I would think that employing a\nconnection pool like pgbouncer to keep a static list of these\nconnections and then dole them out to the transient workers on demand.\n\nSo the overall cumulative number of connections wouldnt change, I\nwould just attempt to alleviate the setup/teardown of them so quickly.\n\nIs this something that I should look into or is it not much of an\nissue? Whats the best way to determine if I could benefit from using a\nconnection pool?\n\nThanks.\n",
"msg_date": "Mon, 14 Nov 2011 16:42:00 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large number of short lived connections - could a connection pool\n\thelp?"
},
{
"msg_contents": "On Nov 14, 2011, at 4:42 PM, Cody Caughlan wrote:\n\n> We have anywhere from 60-80 background worker processes connecting to\n> Postgres, performing a short task and then disconnecting. The lifetime\n> of these tasks averages 1-3 seconds.\n\n[snip]\n\n> Is this something that I should look into or is it not much of an\n> issue? Whats the best way to determine if I could benefit from using a\n> connection pool?\n\nYes, this is precisely a kind of situation a connection pooler will help with. Not only with the the connection set up/tear down overhead, but also by using resources on your server better.... you probably don't actually have 60-80 cores on your server, so reducing that number down to just a few that are actually working will the Postgres finish them faster to work on others. Basically, the queueing happens off the postgres server, letting postgres use the box with less interruptions. \n\nNow, is it a problem to not use a pooler? That depends on if it's causing you grief or not. But if you think you'll get more connection churn or larger numbers of workers, then a connection pooler will only help more.",
"msg_date": "Mon, 14 Nov 2011 16:59:06 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large number of short lived connections - could a connection pool\n\thelp?"
},
{
"msg_contents": "On Mon, Nov 14, 2011 at 4:59 PM, Ben Chobot <[email protected]> wrote:\n> On Nov 14, 2011, at 4:42 PM, Cody Caughlan wrote:\n>\n>> We have anywhere from 60-80 background worker processes connecting to\n>> Postgres, performing a short task and then disconnecting. The lifetime\n>> of these tasks averages 1-3 seconds.\n>\n> [snip]\n>\n>> Is this something that I should look into or is it not much of an\n>> issue? Whats the best way to determine if I could benefit from using a\n>> connection pool?\n>\n> Yes, this is precisely a kind of situation a connection pooler will help with. Not only with the the connection set up/tear down overhead, but also by using resources on your server better.... you probably don't actually have 60-80 cores on your server, so reducing that number down to just a few that are actually working will the Postgres finish them faster to work on others. Basically, the queueing happens off the postgres server, letting postgres use the box with less interruptions.\n>\n> Now, is it a problem to not use a pooler? That depends on if it's causing you grief or not. But if you think you'll get more connection churn or larger numbers of workers, then a connection pooler will only help more.\n\nThanks for your input. Its not causing me grief per se. The load on\nthe pg machine is small. I guess I am just wondering if I am being\nstupid and leaving resources and/or performance on the table.\n\nBut it sounds that as a whole it would be a good use case for\npgbouncer and in the long run will prove beneficial. But no, its not\nobviously killing me right now.\n",
"msg_date": "Mon, 14 Nov 2011 17:04:20 -0800",
"msg_from": "Cody Caughlan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large number of short lived connections - could a\n\tconnection pool help?"
},
{
"msg_contents": "Am 15.11.2011 01:42, schrieb Cody Caughlan:\n> We have anywhere from 60-80 background worker processes connecting to\n> Postgres, performing a short task and then disconnecting. The lifetime\n> of these tasks averages 1-3 seconds.\n>\n> I know that there is some connection overhead to Postgres, but I dont\n> know what would be the best way to measure this overheard and/or to\n> determine if its currently an issue at all.\n>\n> If there is a substantial overheard I would think that employing a\n> connection pool like pgbouncer to keep a static list of these\n> connections and then dole them out to the transient workers on demand.\n>\n> So the overall cumulative number of connections wouldnt change, I\n> would just attempt to alleviate the setup/teardown of them so quickly.\n>\n> Is this something that I should look into or is it not much of an\n> issue? Whats the best way to determine if I could benefit from using a\n> connection pool?\n>\n> Thanks.\n>\nI had a case where a pooler (in this case pgpool) resulted in a 140% \napplication improvement - so - yes, it is probably a win to use a \npooling solution.\n\n\n",
"msg_date": "Tue, 15 Nov 2011 09:04:52 +0100",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large number of short lived connections - could a connection\n\tpool help?"
}
] |
[
{
"msg_contents": "Linux F12 64bit\nPostgres 8.4.4\n16 proc / 32GB\n8 disk 15KRPM SAS/Raid 5 (I know!)\n\n\nshared_buffers = 6000MB\t\t\t\n#temp_buffers = 8MB\t\t\t\nmax_prepared_transactions = 0\t\t\nwork_mem = 250MB\t\t\t\t\nmaintenance_work_mem = 1000MB\t\t\n\t\n\n\n\n\nWe now have about 180mill records in that table. The database size is\nabout 580GB and the userstats table which is the biggest one and the\none we query the most is 83GB.\n\nJust a basic query takes 4 minutes:\n\nFor e.g. select count(distinct uid) from userstats where log_date >'11/7/2011'\n\nSince we are looking for distinct we can't obviously use an index. But\nI'm wondering what should be expected and what is caused be tuning or\nlack there of? Doing an iostat I see maybe 10-15%, however the cpu\nthat this query is attached to is obviously in the 99-100% busy arena.\nOr am I really IOBound for this single query (sure lots of data\nbut?!).\n\nIt takes roughly 5.5 hours to do a concurrent re-index and this DB is\nvac'd nightly.\n\nJust not sure if this is what to expect, however there are many other\nDB's out there bigger than ours, so I'm curious what can I do?\n\nThanks\nTory\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 1.41 0.00 0.20 1.61 0.00 96.78\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 92.17 3343.06 1718.85 46273281004 23791660544\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 1.47 0.00 0.61 5.85 0.00 92.07\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 417.50 90372.00 0.00 180744 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 2.88 0.00 0.76 6.34 0.00 90.03\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 725.00 183560.00 148.00 367120 296\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 2.18 0.00 0.60 3.59 0.00 93.63\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 711.00 179952.00 240.00 359904 480\n\n[blue@adb01 ~]$ iostat -xd 2\nLinux 2.6.32.26-175.fc12.x86_64 (adb01) \t11/16/2011 \t_x86_64_\t(16 CPU)\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.18 191.40 68.71 23.45 3343.22 1718.85\n54.92 0.12 4.61 2.05 18.94\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 2.00 0.00 706.50 8.00 178832.00 128.00\n250.47 77.76 31.21 1.40 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 4.98 17.41 584.58 35.32 148497.51 672.64\n240.64 38.04 227.07 1.61 99.55\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 3.50 0.00 688.50 2.00 174556.00 32.00\n252.84 2.81 4.66 1.44 99.30\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 1.00 10.00 717.50 1.50 182084.00 92.00\n253.37 2.43 3.37 1.38 99.45\n\n^C\n[blue@]$ iostat 2\nLinux 2.6.32.26-175.fc12.x86_64 (adb01) \t11/16/2011 \t_x86_64_\t(16 CPU)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 1.41 0.00 0.20 1.61 0.00 96.78\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 92.17 3343.33 1718.85 46277115652 23791678248\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 7.79 0.00 0.51 8.51 0.00 83.20\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 235.50 45168.00 0.00 90336 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 5.90 0.00 0.35 4.46 0.00 89.29\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 160.00 14688.00 132.00 29376 264\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.01 0.00 0.51 12.80 0.00 78.67\n\nDevice: tps Blk_read/s 
Blk_wrtn/s Blk_read Blk_wrtn\nsda 163.50 11324.00 700.00 22648 1400\n",
"msg_date": "Wed, 16 Nov 2011 14:53:17 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance question 83 GB Table 150 million rows, distinct select"
},
{
"msg_contents": "On November 16, 2011 02:53:17 PM Tory M Blue wrote:\n> We now have about 180mill records in that table. The database size is\n> about 580GB and the userstats table which is the biggest one and the\n> one we query the most is 83GB.\n> \n> Just a basic query takes 4 minutes:\n> \n> For e.g. select count(distinct uid) from userstats where log_date\n> >'11/7/2011'\n>\n> Just not sure if this is what to expect, however there are many other\n> DB's out there bigger than ours, so I'm curious what can I do?\n\nThat query should use an index on log_date if one exists. Unless the planner \nthinks it would need to look at too much of the table.\n\nAlso, the normal approach to making large statistics tables more manageable is \nto partition them by date range.\n",
"msg_date": "Wed, 16 Nov 2011 15:27:57 -0800",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 4:27 PM, Alan Hodgson <[email protected]> wrote:\n> On November 16, 2011 02:53:17 PM Tory M Blue wrote:\n>> We now have about 180mill records in that table. The database size is\n>> about 580GB and the userstats table which is the biggest one and the\n>> one we query the most is 83GB.\n>>\n>> Just a basic query takes 4 minutes:\n>>\n>> For e.g. select count(distinct uid) from userstats where log_date\n>> >'11/7/2011'\n>>\n>> Just not sure if this is what to expect, however there are many other\n>> DB's out there bigger than ours, so I'm curious what can I do?\n>\n> That query should use an index on log_date if one exists. Unless the planner\n> thinks it would need to look at too much of the table.\n\nAgreed. We'd need to know how selective that where clause is. Seeing\nsome forced index usage versus regular explain analyze would be\nuseful. i.e.\n\nset enable_seqscan=off;\nexplain analyze select ...\n\n> Also, the normal approach to making large statistics tables more manageable is\n> to partition them by date range.\n\nIf the OP's considering partitioning, they should really consider\nupgrading to 9.1 which has much better performance of things like\naggregates against partition tables.\n",
"msg_date": "Wed, 16 Nov 2011 16:32:09 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 3:32 PM, Scott Marlowe <[email protected]>wrote:\n\n>\n> If the OP's considering partitioning, they should really consider\n> upgrading to 9.1 which has much better performance of things like\n> aggregates against partition tables.\n>\n>\nCould you elaborate on this a bit, or point me at some docs? I manage a\n600GB db which does almost nothing but aggregates on partitioned tables -\nthe largest of which has approx 600 million rows across all partitions.\n grouping in the aggregates tends to be on the partition column and rarely,\nif ever, would a group cross multiple partitions. We're on 9.0 and could\ndefinitely use some performance gains.\n\nOn Wed, Nov 16, 2011 at 3:32 PM, Scott Marlowe <[email protected]> wrote:\nIf the OP's considering partitioning, they should really consider\nupgrading to 9.1 which has much better performance of things like\naggregates against partition tables.\nCould you elaborate on this a bit, or point me at some docs? I manage a 600GB db which does almost nothing but aggregates on partitioned tables - the largest of which has approx 600 million rows across all partitions. grouping in the aggregates tends to be on the partition column and rarely, if ever, would a group cross multiple partitions. We're on 9.0 and could definitely use some performance gains.",
"msg_date": "Wed, 16 Nov 2011 15:52:35 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "Hi.\n\nOn 16 Listopad 2011, 23:53, Tory M Blue wrote:\n>\n> We now have about 180mill records in that table. The database size is\n> about 580GB and the userstats table which is the biggest one and the\n> one we query the most is 83GB.\n>\n> Just a basic query takes 4 minutes:\n>\n> For e.g. select count(distinct uid) from userstats where log_date\n> >'11/7/2011'\n>\n> Since we are looking for distinct we can't obviously use an index. But\n> I'm wondering what should be expected and what is caused be tuning or\n> lack there of? Doing an iostat I see maybe 10-15%, however the cpu\n> that this query is attached to is obviously in the 99-100% busy arena.\n> Or am I really IOBound for this single query (sure lots of data\n> but?!).\n\nWhat do you mean by \"can't use an index\"? The query may use an index to\nevaluate the WHERE condition, no matter if there's a distinct or not.\n\nThe index-only scans that might be used to speed up this query are\ncommitted in 9.2 - but even that might use index both for plain count and\ncount distinct.\n\nBut you're right - you're not bound by I/O (although I don't know what are\nthose 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\nkeep all the distinct values to determine which are actually distinct.\n\nIf there's enough memory (work_mem) to keep all the values, this may be\ndone using a hash table (hash aggregate). Otherwise it has to sort them.\nYou can see this in explain plan (which you haven't posted).\n\nAnyway this is actually a rather CPU intensive - how exactly depends on\nthe data type. Comparing integers is much easier / cheaper than comparing\ntext values. What data type is the 'uid' column?\n\n> It takes roughly 5.5 hours to do a concurrent re-index and this DB is\n> vac'd nightly.\n>\n> Just not sure if this is what to expect, however there are many other\n> DB's out there bigger than ours, so I'm curious what can I do?\n\nWell, not much. Use an integer data type for the 'uid' column (unless\nyou're already using it). Then you can use more work_mem so that a hash\naggregate is used (maybe it's already used, we need to see the explain\nplan to check).\n\nThen you could precompute the distinct values somehow - for example if\nthere are only a few distinct values for each day, you could do something\nlike this every day\n\nINSERT INTO userstats_distinct\nSELECT DISTINCT date_trunc('day', log_date), uid FROM userstats\n WHERE log_date BETWEEN date_trunc('day', log_date) - interval '1 day'\n AND date_trunc('day', log_date);\n\nand then just\n\nSELECT COUNT(DISTINCT uid) FROM userstats_distinct\n WHERE log_date > '11/7/2011';\n\n\nThe point is to preaggregate the data to the desired granularity (e.g.\nday), and how it improves the performance depends on how much the amount\nof data decreases.\n\nAnother option is to use estimates instead of exact results - I've\nactually written an extension for that, maybe you'll find that useful.\nIt's available on github (https://github.com/tvondra/distinct_estimators)\nand pgxn (http://pgxn.org/tag/estimate/). I've posted a brief description\nhere:\n\nhttp://www.fuzzy.cz/en/articles/aggregate-functions-for-distinct-estimation/\n\nand the current extensions actually performs much better. It's not that\ndifficult to reach 1% precision. Let me know if this is interesting for\nyou and if you need a help with the extensions.\n\nTomas\n\n",
"msg_date": "Thu, 17 Nov 2011 00:59:38 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "Thanks all, I misspoke on our use of the index.\n\nWe do have an index on log_date and it is being used here is the\nexplain analyze plan.\n\n\n\n'Aggregate (cost=7266186.16..7266186.17 rows=1 width=8) (actual\ntime=127575.030..127575.030 rows=1 loops=1)'\n' -> Bitmap Heap Scan on userstats (cost=135183.17..7240890.38\nrows=10118312 width=8) (actual time=8986.425..74815.790 rows=33084417\nloops=1)'\n' Recheck Cond: (log_date > '2011-11-04'::date)'\n' -> Bitmap Index Scan on idx_userstats_logdate\n(cost=0.00..132653.59 rows=10118312 width=0) (actual\ntime=8404.147..8404.147 rows=33084417 loops=1)'\n' Index Cond: (log_date > '2011-11-04'::date)'\n'Total runtime: 127583.898 ms'\n\nPartitioning Tables\n\nThis is use primarily when you are usually accessing only a part of\nthe data. We want our queries to go across the entire date range. So\nwe don't really meet the criteria for partitioning (had to do some\nquick research).\n\nThanks again\nTory\n",
"msg_date": "Wed, 16 Nov 2011 16:45:01 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n\n> But you're right - you're not bound by I/O (although I don't know what are\n> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\n> keep all the distinct values to determine which are actually distinct.\n\nActually I meant to comment on this, he is IO bound. Look at % Util,\nit's at 99 or 100.\n\nAlso, if you have 16 cores and look at something like vmstat you'll\nsee 6% wait state. That 6% represents one CPU core waiting for IO,\nthe other cores will add up the rest to 100%.\n",
"msg_date": "Wed, 16 Nov 2011 18:57:54 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 4:52 PM, Samuel Gendler\n<[email protected]> wrote:\n> Could you elaborate on this a bit, or point me at some docs? I manage a\n> 600GB db which does almost nothing but aggregates on partitioned tables -\n> the largest of which has approx 600 million rows across all partitions.\n> grouping in the aggregates tends to be on the partition column and rarely,\n> if ever, would a group cross multiple partitions. We're on 9.0 and could\n> definitely use some performance gains.\n\nIt's covered in the release notes for 9.1:\n\nhttp://developer.postgresql.org/pgdocs/postgres/release-9-1.html\n",
"msg_date": "Wed, 16 Nov 2011 18:58:50 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On 11/16/2011 04:53 PM, Tory M Blue wrote:\n> Linux F12 64bit\n> Postgres 8.4.4\n> 16 proc / 32GB\n> 8 disk 15KRPM SAS/Raid 5 (I know!)\n>\n>\n> shared_buffers = 6000MB\t\t\t\n> #temp_buffers = 8MB\t\t\t\n> max_prepared_transactions = 0\t\t\n> work_mem = 250MB\t\t\t\t\n> maintenance_work_mem = 1000MB\t\t\n> \t\n>\n>\n>\n>\n> We now have about 180mill records in that table. The database size is\n> about 580GB and the userstats table which is the biggest one and the\n> one we query the most is 83GB.\n>\n> Just a basic query takes 4 minutes:\n>\n> For e.g. select count(distinct uid) from userstats where log_date>'11/7/2011'\n>\n\nHow'd you feel about keeping a monthly summary table? Update it daily, with only a days worth of stats, then you could query the summary table much faster.\n\nThat's what I do for my website stats. I log details for a month, then summarize everything into a summary table, and blow away the details. You wouldn't have to delete the details if you wanted them, just keeping the summary table updated would be enough.\n\n-Andy\n",
"msg_date": "Wed, 16 Nov 2011 20:11:18 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>\n>> But you're right - you're not bound by I/O (although I don't know what\n>> are\n>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\n>> keep all the distinct values to determine which are actually distinct.\n>\n> Actually I meant to comment on this, he is IO bound. Look at % Util,\n> it's at 99 or 100.\n>\n> Also, if you have 16 cores and look at something like vmstat you'll\n> see 6% wait state. That 6% represents one CPU core waiting for IO,\n> the other cores will add up the rest to 100%.\n\nAaaah, I keep forgetting about this and I somehow ignored the iostat\nresults too. Yes, he's obviously IO bound.\n\nBut this actually means the pre-aggregating the data (as I described in my\nprevious post) would probably help him even more (less data, less CPU).\n\nTomas\n\n",
"msg_date": "Thu, 17 Nov 2011 03:27:35 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra <[email protected]> wrote:\n> On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>>\n>>> But you're right - you're not bound by I/O (although I don't know what\n>>> are\n>>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\n>>> keep all the distinct values to determine which are actually distinct.\n>>\n>> Actually I meant to comment on this, he is IO bound. Look at % Util,\n>> it's at 99 or 100.\n>>\n>> Also, if you have 16 cores and look at something like vmstat you'll\n>> see 6% wait state. That 6% represents one CPU core waiting for IO,\n>> the other cores will add up the rest to 100%.\n>\n> Aaaah, I keep forgetting about this and I somehow ignored the iostat\n> results too. Yes, he's obviously IO bound.\n\nI'm not so sure on the io-bound. Been battling/reading about it all\nday. 1 CPU is pegged at 100%, but the disk is not. If I do something\nelse via another CPU I have no issues accessing the disks,\nwriting/deleting/reading. It appears that what was said about this\nbeing very CPU intensive makes more sense to me. The query is only\nusing 1 CPU and that appears to be getting overwhelmed.\n\n%util: This number depicts the percentage of time that the device\nspent in servicing requests.\n\nOn a large query, or something that is taking a while it's going to be\nwriting to disk all the time and I'm thinking that is what the util is\ntelling me, especially since IOwait is in the 10-15% range.\n\nAgain just trying to absorb\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.93 0.00 0.60 9.84 0.00 88.62\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.00 86.50 3453.00 1.50 55352.00 16.00\n16.03 5.24 0.66 0.29 100.00\n\nI mean await time and service time are in the .29 to .66 msec that\ndoesn't read as IObound to me. But I'm more than willing to learn\nsomething not totally postgres specific.\n\nBut I just don't see it... Average queue size of 2.21 to 6, that's\nreally not a ton of stuff \"waiting\"\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.00 3.50 3060.00 2.00 49224.00 20.00\n16.08 2.21 0.76 0.33 99.95\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.80 0.00 0.51 11.01 0.00 87.68\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.00 5.00 3012.50 3.00 48200.00 92.00\n16.01 2.11 0.74 0.33 99.95\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.93 0.00 0.60 9.84 0.00 88.62\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.00 86.50 3453.00 1.50 55352.00 16.00\n16.03 5.24 0.66 0.29 100.00\n",
"msg_date": "Wed, 16 Nov 2011 18:42:55 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue <[email protected]> wrote:\n> On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra <[email protected]> wrote:\n>> On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n>>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>>>\n>>>> But you're right - you're not bound by I/O (although I don't know what\n>>>> are\n>>>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\n>>>> keep all the distinct values to determine which are actually distinct.\n>>>\n>>> Actually I meant to comment on this, he is IO bound. Look at % Util,\n>>> it's at 99 or 100.\n>>>\n>>> Also, if you have 16 cores and look at something like vmstat you'll\n>>> see 6% wait state. That 6% represents one CPU core waiting for IO,\n>>> the other cores will add up the rest to 100%.\n>>\n>> Aaaah, I keep forgetting about this and I somehow ignored the iostat\n>> results too. Yes, he's obviously IO bound.\n>\n> I'm not so sure on the io-bound. Been battling/reading about it all\n> day. 1 CPU is pegged at 100%, but the disk is not. If I do something\n\nLook here in iostat:\n\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util\n> sda 0.00 3.50 3060.00 2.00 49224.00 20.00\n> 16.08 2.21 0.76 0.33 99.95\n\nSee that last column, it's % utilization. Once it hits 100% you are\nanywhere from pretty close to IO bound to right on past it.\n\nI agree with the previous poster, you should roll these up ahead of\ntime into a materialized view for fast reporting.\n",
"msg_date": "Wed, 16 Nov 2011 20:02:56 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 8:02 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue <[email protected]> wrote:\n>> On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra <[email protected]> wrote:\n>>> On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n>>>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>>>>\n>>>>> But you're right - you're not bound by I/O (although I don't know what\n>>>>> are\n>>>>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\n>>>>> keep all the distinct values to determine which are actually distinct.\n>>>>\n>>>> Actually I meant to comment on this, he is IO bound. Look at % Util,\n>>>> it's at 99 or 100.\n>>>>\n>>>> Also, if you have 16 cores and look at something like vmstat you'll\n>>>> see 6% wait state. That 6% represents one CPU core waiting for IO,\n>>>> the other cores will add up the rest to 100%.\n>>>\n>>> Aaaah, I keep forgetting about this and I somehow ignored the iostat\n>>> results too. Yes, he's obviously IO bound.\n>>\n>> I'm not so sure on the io-bound. Been battling/reading about it all\n>> day. 1 CPU is pegged at 100%, but the disk is not. If I do something\n>\n> Look here in iostat:\n>\n>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n>> avgrq-sz avgqu-sz await svctm %util\n>> sda 0.00 3.50 3060.00 2.00 49224.00 20.00\n>> 16.08 2.21 0.76 0.33 99.95\n>\n> See that last column, it's % utilization. Once it hits 100% you are\n> anywhere from pretty close to IO bound to right on past it.\n>\n> I agree with the previous poster, you should roll these up ahead of\n> time into a materialized view for fast reporting.\n\nA followup. A good tool to see how your machine is running over time\nis the sar command and the needed sysstat service running and\ncollecting data. You can get summary views of the last x weeks rolled\nup in 5 minute increments on all kinds of system metrics.\n",
"msg_date": "Wed, 16 Nov 2011 20:04:38 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 7:02 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue <[email protected]> wrote:\n>> On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra <[email protected]> wrote:\n>>> On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n>>>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>>>>\n>>>>> But you're right - you're not bound by I/O (although I don't know what\n>>>>> are\n>>>>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to actually\n>>>>> keep all the distinct values to determine which are actually distinct.\n>>>>\n>>>> Actually I meant to comment on this, he is IO bound. Look at % Util,\n>>>> it's at 99 or 100.\n>>>>\n>>>> Also, if you have 16 cores and look at something like vmstat you'll\n>>>> see 6% wait state. That 6% represents one CPU core waiting for IO,\n>>>> the other cores will add up the rest to 100%.\n>>>\n>>> Aaaah, I keep forgetting about this and I somehow ignored the iostat\n>>> results too. Yes, he's obviously IO bound.\n>>\n>> I'm not so sure on the io-bound. Been battling/reading about it all\n>> day. 1 CPU is pegged at 100%, but the disk is not. If I do something\n>\n> Look here in iostat:\n>\n>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n>> avgrq-sz avgqu-sz await svctm %util\n>> sda 0.00 3.50 3060.00 2.00 49224.00 20.00\n>> 16.08 2.21 0.76 0.33 99.95\n>\n> See that last column, it's % utilization. Once it hits 100% you are\n> anywhere from pretty close to IO bound to right on past it.\n>\n> I agree with the previous poster, you should roll these up ahead of\n> time into a materialized view for fast reporting.\n>\nYa I'm getting mixed opinions on that. avg queue size is nothing and\nawait and svctime is nothing, so maybe I'm on the edge, but it's not\n\"at face value\", the cause of the slow query times. I think the data\nstructure is, however as it seems I need to query against all the\ndata, I'm unclear how to best set that up. Partitioning is not the\nanswer it seems.\n",
"msg_date": "Wed, 16 Nov 2011 19:16:23 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On 17 Listopad 2011, 4:16, Tory M Blue wrote:\n> On Wed, Nov 16, 2011 at 7:02 PM, Scott Marlowe <[email protected]>\n> wrote:\n>> On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue <[email protected]> wrote:\n>>> On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra <[email protected]> wrote:\n>>>> On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n>>>>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>>>>>\n>>>>>> But you're right - you're not bound by I/O (although I don't know\n>>>>>> what\n>>>>>> are\n>>>>>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to\n>>>>>> actually\n>>>>>> keep all the distinct values to determine which are actually\n>>>>>> distinct.\n>>>>>\n>>>>> Actually I meant to comment on this, he is IO bound. Look at % Util,\n>>>>> it's at 99 or 100.\n>>>>>\n>>>>> Also, if you have 16 cores and look at something like vmstat you'll\n>>>>> see 6% wait state. That 6% represents one CPU core waiting for IO,\n>>>>> the other cores will add up the rest to 100%.\n>>>>\n>>>> Aaaah, I keep forgetting about this and I somehow ignored the iostat\n>>>> results too. Yes, he's obviously IO bound.\n>>>\n>>> I'm not so sure on the io-bound. Been battling/reading about it all\n>>> day. 1 CPU is pegged at 100%, but the disk is not. If I do something\n>>\n>> Look here in iostat:\n>>\n>>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n>>> avgrq-sz avgqu-sz await svctm %util\n>>> sda 0.00 3.50 3060.00 2.00 49224.00 20.00\n>>> 16.08 2.21 0.76 0.33 99.95\n>>\n>> See that last column, it's % utilization. Once it hits 100% you are\n>> anywhere from pretty close to IO bound to right on past it.\n>>\n>> I agree with the previous poster, you should roll these up ahead of\n>> time into a materialized view for fast reporting.\n>>\n> Ya I'm getting mixed opinions on that. avg queue size is nothing and\n> await and svctime is nothing, so maybe I'm on the edge, but it's not\n\nWhat do you mean by \"nothing\"? There are 3060 reads/s, servicing each one\ntakes 0.33 ms - that means the drive is 100% utilized.\n\nThe problem with the iostat results you've posted earlier is that they\neither use \"-xd\" or none of those switches. That means you can's see CPU\nstats and extended I/O stats at the same time - use just \"-x\" next time.\n\nAnyway the results show that \"%iowait\" is about 6% - as Scott Marlowe\npointed out, this means 1 core is waiting for I/O. That's the core running\nyour query. Try to execute the query 16x and you'll see the iowait is\n100%.\n\n> \"at face value\", the cause of the slow query times. I think the data\n> structure is, however as it seems I need to query against all the\n> data, I'm unclear how to best set that up. Partitioning is not the\n> answer it seems.\n\nI'm not sure I understand what you mean by accessing all the data. You can\ndo that with partitioning too, although the execution plan may not be as\nefficient as with a plain table. Try to partition the data by date (a\npartition for each day / week) - my impression is that you're querying\ndata by date so this is a \"natural\" partitioning.\n\nAnyway what I've recommended in my previous post was intelligent reduction\nof the data - imagine for example there are 1000 unique visitors and each\nof them does 1000 actions per day. That means 1.000.000 of rows. What you\ncan do is aggregating the data by user (at the end of the day, thus\nprocessing just the single day), i.e. something like this\n\nSELECT uid, count(*) FROM users WHERE log_date ... 
GROUP BY uid\n\nand storing this in a table \"users_aggregated\". This table has just 1000\nrows (one for each user), so it's 1000x smaller.\n\nBut you can do this\n\nSELECT COUNT(DISTINCT uid) FROM users_aggregated\n\nand you'll get exactly the correct result.\n\n\nTomas\n\n",
"msg_date": "Thu, 17 Nov 2011 04:47:23 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
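A minimal sketch of the daily pre-aggregation Tomas describes above. The names (users, log_time, uid, users_aggregated) are the placeholders used in the thread rather than the poster's real schema, and the dates are examples only:

CREATE TABLE users_aggregated (
    day     date    NOT NULL,
    uid     integer NOT NULL,
    actions bigint  NOT NULL,
    PRIMARY KEY (day, uid)
);

-- Run once per day (e.g. from cron) for the day that just ended:
INSERT INTO users_aggregated (day, uid, actions)
SELECT log_time::date, uid, count(*)
  FROM users
 WHERE log_time >= date '2011-11-16'
   AND log_time <  date '2011-11-17'
 GROUP BY log_time::date, uid;

-- Reports then read the small rollup instead of the 83 GB table;
-- count(DISTINCT uid) still returns exactly the same answer:
SELECT count(DISTINCT uid)
  FROM users_aggregated
 WHERE day >= date '2011-11-01'
   AND day <  date '2011-11-17';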
{
"msg_contents": "Tory,\n\nA seq scan across 83GB in 4 minutes is pretty good. That's over\n300MB/s. Even if you assume that 1/3 of the table was already cached,\nthat's still over 240mb/s. Good disk array.\n\nEither you need an index, or you need to not do this query at user\nrequest time. Or a LOT more RAM.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 16 Nov 2011 21:19:01 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
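For reference, the arithmetic behind those figures: reading all 83 GB in roughly 240 seconds is about 83 * 1024 / 240 ≈ 354 MB/s, and if a third of the table were already cached the on-disk portion is about 55 GB, or roughly 236 MB/s over the same 240 seconds, which is in line with the 300 MB/s and 240 MB/s estimates above.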
{
"msg_contents": "On Wed, Nov 16, 2011 at 7:47 PM, Tomas Vondra <[email protected]> wrote:\n> On 17 Listopad 2011, 4:16, Tory M Blue wrote:\n>> On Wed, Nov 16, 2011 at 7:02 PM, Scott Marlowe <[email protected]>\n>> wrote:\n>>> On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue <[email protected]> wrote:\n>>>> On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra <[email protected]> wrote:\n>>>>> On 17 Listopad 2011, 2:57, Scott Marlowe wrote:\n>>>>>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra <[email protected]> wrote:\n>>>>>>\n>>>>>>> But you're right - you're not bound by I/O (although I don't know\n>>>>>>> what\n>>>>>>> are\n>>>>>>> those 15% - iowait, util or what?). The COUNT(DISTINCT) has to\n>>>>>>> actually\n>>>>>>> keep all the distinct values to determine which are actually\n>>>>>>> distinct.\n>>>>>>\n>>>>>> Actually I meant to comment on this, he is IO bound. Look at % Util,\n>>>>>> it's at 99 or 100.\n>>>>>>\n>>>>>> Also, if you have 16 cores and look at something like vmstat you'll\n>>>>>> see 6% wait state. That 6% represents one CPU core waiting for IO,\n>>>>>> the other cores will add up the rest to 100%.\n>>>>>\n>>>>> Aaaah, I keep forgetting about this and I somehow ignored the iostat\n>>>>> results too. Yes, he's obviously IO bound.\n>>>>\n>>>> I'm not so sure on the io-bound. Been battling/reading about it all\n>>>> day. 1 CPU is pegged at 100%, but the disk is not. If I do something\n>>>\n>>> Look here in iostat:\n>>>\n>>>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n>>>> avgrq-sz avgqu-sz await svctm %util\n>>>> sda 0.00 3.50 3060.00 2.00 49224.00 20.00\n>>>> 16.08 2.21 0.76 0.33 99.95\n>>>\n>>> See that last column, it's % utilization. Once it hits 100% you are\n>>> anywhere from pretty close to IO bound to right on past it.\n>>>\n>>> I agree with the previous poster, you should roll these up ahead of\n>>> time into a materialized view for fast reporting.\n>>>\n>> Ya I'm getting mixed opinions on that. avg queue size is nothing and\n>> await and svctime is nothing, so maybe I'm on the edge, but it's not\n>\n> What do you mean by \"nothing\"? There are 3060 reads/s, servicing each one\n> takes 0.33 ms - that means the drive is 100% utilized.\n>\n> The problem with the iostat results you've posted earlier is that they\n> either use \"-xd\" or none of those switches. That means you can's see CPU\n> stats and extended I/O stats at the same time - use just \"-x\" next time.\n>\n> Anyway the results show that \"%iowait\" is about 6% - as Scott Marlowe\n> pointed out, this means 1 core is waiting for I/O. That's the core running\n> your query. Try to execute the query 16x and you'll see the iowait is\n> 100%.\n\nYes this I understand and is correct. But I'm wrestling with the idea\nthat the Disk is completely saturated. I've seen where I actually run\ninto high IO/Wait and see that load climbs as processes stack.\n\nI'm not arguing (please know this), I appreciate the help and will try\nalmost anything that is offered here, but I think if I just threw\nmoney at the situation (hardware), I wouldn't get any closer to\nresolution of my issue. I am very interested in other solutions and\nmore DB structure changes etc.\n\nThanks !\nTory\n",
"msg_date": "Wed, 16 Nov 2011 21:23:33 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Wed, Nov 16, 2011 at 9:19 PM, Josh Berkus <[email protected]> wrote:\n> Tory,\n>\n> A seq scan across 83GB in 4 minutes is pretty good. That's over\n> 300MB/s. Even if you assume that 1/3 of the table was already cached,\n> that's still over 240mb/s. Good disk array.\n>\n> Either you need an index, or you need to not do this query at user\n> request time. Or a LOT more RAM.\n\nThanks josh,\n\nThat's also the other scenario, what is expected, maybe the 4 minutes\nwhich turns into 5.5 hours or 23 hours for a report is just standard\nbased on our data and sizing.\n\nThen it's about stopping the chase and start looking at tuning or\nredesign if possible to allow for reports to finish in a timely\nfashion. The data is going to grow a tad still, but reporting\nrequirements are on the rise.\n\nYou folks are the right place to seek answers from, I just need to\nmake sure I'm giving you the information that will allow you to\nassist/help me.\n\nMemory is not expensive these days, so it's possible that i bump the\nserver to the 192gb or whatever to give me the headroom, but we are\ntrying to dig a tad deeper into the data/queries/tuning before I go\nthe hardware route again.\n\nTory\n",
"msg_date": "Wed, 16 Nov 2011 21:33:19 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
{
"msg_contents": "On Thu, Nov 17, 2011 at 12:23 AM, Tory M Blue <[email protected]> wrote:\n\n>> What do you mean by \"nothing\"? There are 3060 reads/s, servicing each one\n>> takes 0.33 ms - that means the drive is 100% utilized.\n>>\n>> The problem with the iostat results you've posted earlier is that they\n>> either use \"-xd\" or none of those switches. That means you can's see CPU\n>> stats and extended I/O stats at the same time - use just \"-x\" next time.\n>>\n>> Anyway the results show that \"%iowait\" is about 6% - as Scott Marlowe\n>> pointed out, this means 1 core is waiting for I/O. That's the core running\n>> your query. Try to execute the query 16x and you'll see the iowait is\n>> 100%.\n>\n> Yes this I understand and is correct. But I'm wrestling with the idea\n> that the Disk is completely saturated. I've seen where I actually run\n> into high IO/Wait and see that load climbs as processes stack.\n>\n> I'm not arguing (please know this), I appreciate the help and will try\n> almost anything that is offered here, but I think if I just threw\n> money at the situation (hardware), I wouldn't get any closer to\n> resolution of my issue. I am very interested in other solutions and\n> more DB structure changes etc.\n\nBut remember, you're doing all that in a single query. So your disk\nsubsystem might even be able to perform even more *througput* if it\nwas given many more concurrent request. A big raid10 is really good\nat handling multiple concurrent requests. But it's pretty much\nimpossible to saturate a big raid array with only a single read\nstream.\n\nWith a single query, the query can only run as fast as the single\nstream of requests can be satisfied. And as the next read is issued\nas soon as the previous is done (the kernel readahead/buffering the\nseq scan helps here), your iostat is going to show 100% util, because\nthe there is always the next read \"in progress\", even if the average\nqueue size is low (1). If you had a 24 spindle array, you could add\nanother 20 queries, and you could see the queue size go up, but the\nutil would still only be 100%, latency would stay about the same, even\nthough your throughput could be 20 times greater.\n\nSo, as long as you have a single query scanning that entire 83GB\ntable, and that table has to come from disk (i.e. not cached kernel\nbuffers in ram), you're going to be limited by the amount of time it\ntakes to read that table in 8K chunks.\n\nOptions for improving it are:\n\n1) Making sure your array/controller/kernel are doing the maximum\nread-ahead/buffering possible to make reading that 83GB as quick as\npossible\n2) Changing the query to not need to scan all 83GB.\n\n#2 is where you're going to see orders-of-magnitude differences in\nperformance, and there are lots of options there. 
But because there\nare so many options, and so many variables in what type of other\nqueries, inserts, updates, and deletes are done on the data, no one of\nthem is necessarily \"the best\" for everyone.\n\nBut if you have the ability to alter queries/schema slightly, you've\ngot lots of avenues to explore ;-) And folks here are more than\nwilling to offer advice and options that may be *very* fruitful.\n\n1) Multicolumn index (depending on insert/update/delete patterns)\n2) partition by date (depending on query types)\n3) rollup views of history (depending on query types)\n4) trigger based mat-view style rollups (depending on\ninsert/update/delete patterns coupled with query types)\n\n\na.\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Thu, 17 Nov 2011 09:17:07 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
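A rough sketch of how option 2 above (partitioning by date) would look with the inheritance-based partitioning available in the PostgreSQL releases of that era; users and log_time are again just the placeholder names used in the thread:

-- Parent table stays empty; one child per week (or per day) holds the rows.
CREATE TABLE users_2011_w46 (
    CHECK (log_time >= date '2011-11-14' AND log_time < date '2011-11-21')
) INHERITS (users);

CREATE INDEX users_2011_w46_log_time_idx ON users_2011_w46 (log_time);

-- Rows are routed to the right child by the application or by an INSERT
-- trigger on the parent (not shown). With constraint_exclusion = partition
-- (the default), the planner skips children whose CHECK constraint falls
-- outside the queried date range.

As Tomas notes in the following message, the existing plan is already a bitmap index scan over just the requested date range, so partitioning mainly saves the index/bitmap step rather than the heap reads.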
{
"msg_contents": "On 17 Listopad 2011, 15:17, Aidan Van Dyk wrote:\n> With a single query, the query can only run as fast as the single\n> stream of requests can be satisfied. And as the next read is issued\n> as soon as the previous is done (the kernel readahead/buffering the\n> seq scan helps here), your iostat is going to show 100% util, because\n> the there is always the next read \"in progress\", even if the average\n> queue size is low (1). If you had a 24 spindle array, you could add\n> another 20 queries, and you could see the queue size go up, but the\n> util would still only be 100%, latency would stay about the same, even\n> though your throughput could be 20 times greater.\n\nThis is probably the reason why interpreting iostat results is tricky.\nIt's quite straightforward with a single drive, but once you get to arrays\nit's much more difficult. The %util remains 100% but it may actually mean\nthe I/O is not saturated because some of the drives just sit there doing\nnothing. hat's why it's important to look at await and svctime - when\n\"await >> svctime\" it's another sign of saturation.\n\nTory, you've mentioned you're on a 8-drive RAID5, but all the iostat\nresults you've posted are about \"sda\". I'm kinda used \"sda\" is a regular\ndrive, not an array - are you sure it's the right device? Are you using a\ncontroller or a sw-raid? With a sw-based RAID you can easily see stats for\neach of the drives (so it's easier to see what's going on in the array).\n\n> So, as long as you have a single query scanning that entire 83GB\n> table, and that table has to come from disk (i.e. not cached kernel\n> buffers in ram), you're going to be limited by the amount of time it\n> takes to read that table in 8K chunks.\n\nI don't think he's doing that - the explain plan he posted earlier showed\na bitmap index scan, and as the table is a \"log\" of actions (just growing\nand ordered by log_time) this is pretty-much the best available plan. So\nit reads only the interesting part of the table (not the whole 84GB) in a\nsequential way.\n\n> Options for improving it are:\n>\n> 1) Multicolumn index (depending on insert/update/delete patterns)\n\nI don't think this is going to help him (unless the query is much more\ncomplicated than he presented here). This might be interesting with\nindex-only scans and index on (log_time, uid) but that's in 9.2dev.\n\n> 2) partition by date (depending on query types)\n\nNo, this is not going to help him much as he's already scanning only the\ninteresting part of the table (thanks to the bitmap index scan). It might\neliminate the first step (reading the index and preparing the bitmap), but\nthat's not the dominant part of the execution time (it takes about 8s and\nthe whole query takes 127s).\n\n> 3) rollup views of history (depending on query types)\n\nThis is probably the most promising path - prebuild some intermediate\nresults for a day, aggregate just the intermediate results later. I've\nalready described some ideas in my previous posts.\n\n> 4) trigger based mat-view style rollups (depending on\n> insert/update/delete patterns coupled with query types)\n\nNot sure if this can work with 'count(distinct)' - you'd have to keep all\nthe distinct values (leading to 3) or maintain the counters for every\ninterval you're interested in (day, hour, ...). Anyway this is going to be\nmuch more complicated than (3), I wouldn't use it unless I really want\ncontinuously updated stats.\n\nTomas\n\n\n\n",
"msg_date": "Thu, 17 Nov 2011 17:05:31 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
},
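The (log_time, uid) index Tomas mentions for option 1 is a one-liner, again with the thread's placeholder names:

CREATE INDEX users_log_time_uid_idx ON users (log_time, uid);

On the pre-9.2 releases discussed here it still has to visit the heap, so the gain is limited; only with the 9.2 index-only scans Tomas refers to could a count(DISTINCT uid) over a date range be answered from the index alone.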
{
"msg_contents": "On Thu, Nov 17, 2011 at 11:17 AM, Aidan Van Dyk <[email protected]> wrote:\n> But remember, you're doing all that in a single query. So your disk\n> subsystem might even be able to perform even more *througput* if it\n> was given many more concurrent request. A big raid10 is really good\n> at handling multiple concurrent requests. But it's pretty much\n> impossible to saturate a big raid array with only a single read\n> stream.\n\nThe query uses a bitmap heap scan, which means it would benefit from a\nhigh effective_io_concurrency.\n\nWhat's your effective_io_concurrency setting?\n\nA good place to start setting it is the number of spindles on your\narray, though I usually use 1.5x that number since it gives me a\nlittle more thoughput.\n\nYou can set it on a query-by-query basis too, so you don't need to\nchange the configuration. If you do, a reload is enough to make PG\npick it up, so it's an easy thing to try.\n",
"msg_date": "Thu, 17 Nov 2011 13:55:40 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance question 83 GB Table 150 million rows,\n distinct select"
}
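A minimal way to try Claudio's suggestion without touching postgresql.conf is to set it for the session only. The value 12 is an assumption (the 8-drive array mentioned earlier times the 1.5x rule of thumb), and the query is the placeholder shape used above:

-- Only affects bitmap heap scans and requires posix_fadvise support
-- (e.g. Linux); pick a value near 1.5x the number of spindles.
SET effective_io_concurrency = 12;

SELECT count(DISTINCT uid)
  FROM users
 WHERE log_time >= date '2011-11-01'
   AND log_time <  date '2011-11-17';

RESET effective_io_concurrency;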
]